Now that you're familiar with Kernel basics, let's see how the kernel allows you to run Semantic Skills and Semantic Functions stored on disk.
A Semantic Skill is a collection of Semantic Functions, where each function is defined in natural language, typically stored in a plain text file.
Refer to our glossary for an in-depth guide to the terms.
The repository includes some examples under the samples folder.
For instance, this is the Joke function, part of the FunSkill skill:
WRITE EXACTLY ONE JOKE or HUMOROUS STORY ABOUT THE TOPIC BELOW.
JOKE MUST BE:
- G RATED
- WORKPLACE/FAMILY SAFE
NO SEXISM, RACISM OR OTHER BIAS/BIGOTRY.
BE CREATIVE AND FUNNY. I WANT TO LAUGH.
+++++
{{$input}}
+++++
Note the special {{$input}} token: it is a variable that is automatically populated when the function is invoked, commonly referred to as a "function parameter".
We'll explore later how functions can accept multiple variables, as well as invoke other functions.
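To build intuition for how template variables work, here is a toy sketch of the substitution step. This is an illustration only, not Semantic Kernel's actual template engine, and the two-variable prompt is a made-up example:

```python
import re

def render(template: str, variables: dict) -> str:
    # Replace each {{$name}} token with the matching variable's value
    # (empty string if the variable was not provided).
    return re.sub(r"\{\{\$(\w+)\}\}",
                  lambda m: variables.get(m.group(1), ""),
                  template)

prompt = "TELL A JOKE ABOUT: {{$input}} IN THE STYLE OF {{$style}}"
print(render(prompt, {"input": "time travel", "style": "a pirate"}))
# → TELL A JOKE ABOUT: time travel IN THE STYLE OF a pirate
```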
In the same folder you'll notice a second file, config.json. This file is optional and is used to set parameters for the large language model, such as Temperature, TopP, and Stop Sequences:
{
  "schema": 1,
  "type": "completion",
  "description": "Generate a funny joke",
  "completion": {
    "max_tokens": 500,
    "temperature": 0.5,
    "top_p": 0.5
  }
}
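Since the file is plain JSON, you can inspect or tweak these settings with the standard library. A minimal sketch, using the same values as the config.json above inlined as a string:

```python
import json

config_text = """
{
  "schema": 1,
  "type": "completion",
  "description": "Generate a funny joke",
  "completion": {
    "max_tokens": 500,
    "temperature": 0.5,
    "top_p": 0.5
  }
}
"""

config = json.loads(config_text)
# The "completion" block holds the sampling parameters sent to the model.
print(config["completion"]["temperature"])  # → 0.5
```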
Given a semantic function defined by these two files, this is how to load and use it.
Load and configure the kernel, as usual, loading also the AI service settings defined in the Setup notebook:
!python -m pip install semantic-kernel==0.3.10.dev0

import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion, OpenAIChatCompletion

kernel = sk.Kernel()

useAzureOpenAI = False

# Configure AI service used by the kernel
if useAzureOpenAI:
    deployment, api_key, endpoint = sk.azure_openai_settings_from_dot_env()
    kernel.add_chat_service("chat_completion", AzureChatCompletion(deployment, endpoint, api_key))
else:
    api_key, org_id = sk.openai_settings_from_dot_env()
    kernel.add_chat_service("chat-gpt", OpenAIChatCompletion("gpt-3.5-turbo", api_key, org_id))
Import the skill and all its functions:
# note: using skills from the samples folder
skills_directory = "../../samples/skills"
funFunctions = kernel.import_semantic_skill_from_directory(skills_directory, "FunSkill")
jokeFunction = funFunctions["Joke"]
Now use the skill's functions, for example to generate a joke about "time travel to dinosaur age":
result = jokeFunction("time travel to dinosaur age")
print(result)
# You can also invoke functions asynchronously
# result = await jokeFunction.invoke_async("time travel to dinosaur age")
# print(result)
Great, now that you know how to load a skill from disk, let's see how to create and run a semantic function inline.