Now that you're familiar with Kernel basics, let's see how the kernel allows you to run Prompt Plugins and Prompt Functions stored on disk.
A Prompt Plugin is a collection of Prompt Functions, where each function is defined with natural language stored in a text file.
Refer to our glossary for an in-depth guide to the terms.
The repository includes some examples under the samples folder.
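On disk, each function lives in its own folder inside the plugin folder, with the prompt in skprompt.txt and the optional settings in config.json. Sketched below is the layout convention used by the samples:

FunPlugin/
└── Joke/
    ├── skprompt.txt    (the natural-language prompt template)
    └── config.json     (optional model parameters, shown further down)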
For instance, this is the Joke function, part of the FunPlugin plugin:
WRITE EXACTLY ONE JOKE or HUMOROUS STORY ABOUT THE TOPIC BELOW.
JOKE MUST BE:
- G RATED
- WORKPLACE/FAMILY SAFE
NO SEXISM, RACISM OR OTHER BIAS/BIGOTRY.
BE CREATIVE AND FUNNY. I WANT TO LAUGH.
+++++
{{$input}}
+++++
Note the special {{$input}} token, a variable that is automatically passed when invoking the function, commonly referred to as a "function parameter".
We'll explore later how functions can accept multiple variables, as well as invoke other functions.
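As a quick preview, the config below declares a second variable named style; a template consuming it could add a line such as the following (a hypothetical variant, not part of the actual sample file):

INCORPORATE THIS STYLE, IF PROVIDED: {{$style}}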
In the same folder you'll notice a second file, config.json. This file is optional and is used to set parameters for the large language model, such as temperature, top_p, stop sequences, etc.
{
  "schema": 1,
  "description": "Generate a funny joke",
  "execution_settings": {
    "default": {
      "max_tokens": 1000,
      "temperature": 0.9,
      "top_p": 0.0,
      "presence_penalty": 0.0,
      "frequency_penalty": 0.0
    }
  },
  "input_variables": [
    {
      "name": "input",
      "description": "Joke subject",
      "default": ""
    },
    {
      "name": "style",
      "description": "Give a hint about the desired joke style",
      "default": ""
    }
  ]
}
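For reference, the values under execution_settings correspond to the prompt execution settings classes in the Python SDK. Here is a minimal sketch, assuming the OpenAI connector, of the same defaults expressed in code; an object like this can also be passed at invocation time to override the file-based values:

from semantic_kernel.connectors.ai.open_ai import OpenAIChatPromptExecutionSettings

# Mirrors the "default" block of config.json
settings = OpenAIChatPromptExecutionSettings(
    max_tokens=1000,
    temperature=0.9,
    top_p=0.0,
    presence_penalty=0.0,
    frequency_penalty=0.0,
)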
Given a prompt function defined by these files, here is how to load and use it.
Load and configure the kernel as usual, also loading the AI service settings defined in the Setup notebook:
!python -m pip install semantic-kernel==1.0.3
from services import Service
# Select a service to use for this notebook (available services: OpenAI, AzureOpenAI, HuggingFace)
selectedService = Service.OpenAI
from semantic_kernel import Kernel
kernel = Kernel()
service_id = None
if selectedService == Service.OpenAI:
    from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

    service_id = "default"
    kernel.add_service(
        OpenAIChatCompletion(service_id=service_id, ai_model_id="gpt-3.5-turbo-1106"),
    )
elif selectedService == Service.AzureOpenAI:
    from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

    service_id = "default"
    kernel.add_service(
        AzureChatCompletion(service_id=service_id),
    )
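The connectors above read their credentials and model IDs from the environment (or the .env file created in the Setup notebook), e.g. OPENAI_API_KEY for OpenAI. If you prefer, they can be passed explicitly instead; a sketch with placeholder values:

# Explicit configuration instead of environment variables (placeholder key)
kernel.add_service(
    OpenAIChatCompletion(
        service_id="default",
        ai_model_id="gpt-3.5-turbo-1106",
        api_key="sk-...",  # placeholder; normally read from OPENAI_API_KEY
    )
)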
Import the plugin and all its functions:
# note: using plugins from the samples folder
plugins_directory = "../../../prompt_template_samples/"
funFunctions = kernel.add_plugin(parent_directory=plugins_directory, plugin_name="FunPlugin")
jokeFunction = funFunctions["Joke"]
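The object returned by add_plugin maps function names to functions, so every function found in the FunPlugin folder is loaded, not just Joke. For instance, assuming the samples folder still ships its other prompt files:

# Other functions loaded from the same plugin folder
excuseFunction = funFunctions["Excuses"]
limerickFunction = funFunctions["Limerick"]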
Here's how to use the plugin's functions, e.g. to generate a joke about "time travel to dinosaur age":
result = await kernel.invoke(jokeFunction, input="travel to dinosaur age", style="silly")
print(result)
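The keyword arguments map to the variables declared in config.json. They can equally be bundled in a KernelArguments object, which is also where per-call execution settings (like the sketch above) would go:

from semantic_kernel.functions import KernelArguments

# Same invocation with the variables (and, optionally, settings) bundled explicitly
arguments = KernelArguments(input="travel to dinosaur age", style="silly")
result = await kernel.invoke(jokeFunction, arguments)
print(result)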
Great, now that you know how to load a plugin from disk, let's see how to create and run a prompt function inline.