The Planner is one of the fundamental concepts of Semantic Kernel. It uses the collection of native and semantic functions that have been registered with the kernel and, with the help of AI, formulates a plan to execute a given ask.
From our own testing, the planner works best with more powerful models like GPT-4, although you can sometimes get working plans with cheaper models like gpt-3.5-turbo. We encourage you to implement your own versions of the planner and use models that fit your users' needs.
Read more about the planner here.
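To make that encouragement concrete, here is a minimal, hypothetical sketch of a custom planner. The helper naive_plan and its prompt are our own inventions, not part of the SDK; it assumes a Kernel that already has a chat service and some plugins registered, as we set up later in this notebook. It simply lists every registered function in a prompt and asks the model to propose an ordered list of calls.
from semantic_kernel import Kernel
from semantic_kernel.functions.kernel_arguments import KernelArguments


async def naive_plan(kernel: Kernel, goal: str) -> str:
    # Describe every registered function so the model knows what it may use.
    function_descriptions = "\n".join(
        f"{plugin_name}.{function_name}: {function.description}"
        for plugin_name, plugin in kernel.plugins.items()
        for function_name, function in plugin.functions.items()
    )
    # A prompt function that asks the model for an ordered list of calls.
    plan_fn = kernel.add_function(
        plugin_name="NaivePlanner",
        function_name="CreatePlan",
        prompt=(
            "Available functions:\n{{$functions}}\n\n"
            "Goal: {{$goal}}\n"
            "List, in order, the function calls needed to achieve the goal."
        ),
    )
    result = await kernel.invoke(plan_fn, KernelArguments(functions=function_descriptions, goal=goal))
    return str(result)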
Install the Semantic Kernel SDK from PyPI.
# Note: if using a virtual environment, do not run this cell
%pip install -U semantic-kernel
from semantic_kernel import __version__
__version__
Initial configuration for the notebook to run properly.
# Make sure paths are correct for the imports
import os
import sys
notebook_dir = os.path.abspath("")
parent_dir = os.path.dirname(notebook_dir)
grandparent_dir = os.path.dirname(parent_dir)
sys.path.append(grandparent_dir)
Let's get started with the necessary configuration to run Semantic Kernel. For notebooks, we require a .env file with the proper settings for the model you use. Create a new file named .env, place it in this directory, copy the contents of the .env.example file from this directory, and paste them into the .env file that you just created.
NOTE: Please make sure to include GLOBAL_LLM_SERVICE set to either OpenAI, AzureOpenAI, or HuggingFace in your .env file. If this setting is not included, the service will default to AzureOpenAI.
Add your OpenAI API key to your .env file (the org ID is only needed if you have multiple orgs):
GLOBAL_LLM_SERVICE="OpenAI"
OPENAI_API_KEY="sk-..."
OPENAI_ORG_ID=""
OPENAI_CHAT_MODEL_ID=""
OPENAI_TEXT_MODEL_ID=""
OPENAI_EMBEDDING_MODEL_ID=""
The names should match the names used in the .env file, as shown above.
Add your Azure OpenAI Service key settings to the .env file in the same folder:
GLOBAL_LLM_SERVICE="AzureOpenAI"
AZURE_OPENAI_API_KEY="..."
AZURE_OPENAI_ENDPOINT="https://..."
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="..."
AZURE_OPENAI_TEXT_DEPLOYMENT_NAME="..."
AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME="..."
AZURE_OPENAI_API_VERSION="..."
The names should match the names used in the .env file, as shown above.
For more advanced configuration, please follow the steps outlined in the setup guide.
We will load our settings and get the LLM service to use for the notebook.
from services import Service
from samples.service_settings import ServiceSettings
service_settings = ServiceSettings.create()
# Select a service to use for this notebook (available services: OpenAI, AzureOpenAI, HuggingFace)
selectedService = (
    Service.AzureOpenAI
    if service_settings.global_llm_service is None
    else Service(service_settings.global_llm_service.lower())
)
print(f"Using service type: {selectedService}")
Let's define some imports that will be used in this example.
from semantic_kernel.contents.chat_history import ChatHistory # noqa: F401
from semantic_kernel.functions.kernel_arguments import KernelArguments # noqa: F401
from semantic_kernel.prompt_template.input_variable import InputVariable # noqa: F401
Define your ASK. What do you want the Kernel to do?
ask = """
Tomorrow is Valentine's day. I need to come up with a few short poems.
She likes Shakespeare so write using his style. She speaks French so write it in French.
Convert the text to uppercase."""
The planner needs to know what plugins are available to it. Here we'll give it access to the SummarizePlugin and WriterPlugin we have defined on disk. These include many semantic functions, of which the planner will intelligently choose a subset. You can also include native functions; here we'll add the TextPlugin.
from semantic_kernel.connectors.ai.open_ai import OpenAIChatPromptExecutionSettings
from semantic_kernel.core_plugins.text_plugin import TextPlugin
from semantic_kernel.functions.kernel_function_from_prompt import KernelFunctionFromPrompt
from semantic_kernel.kernel import Kernel
kernel = Kernel()
service_id = None
if selectedService == Service.OpenAI:
    from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

    service_id = "default"
    kernel.add_service(
        OpenAIChatCompletion(
            service_id=service_id,
        ),
    )
elif selectedService == Service.AzureOpenAI:
    from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

    service_id = "default"
    kernel.add_service(
        AzureChatCompletion(
            service_id=service_id,
        ),
    )
plugins_directory = "../../../prompt_template_samples/"
summarize_plugin = kernel.add_plugin(plugin_name="SummarizePlugin", parent_directory=plugins_directory)
writer_plugin = kernel.add_plugin(
    plugin_name="WriterPlugin",
    parent_directory=plugins_directory,
)
text_plugin = kernel.add_plugin(plugin=TextPlugin(), plugin_name="TextPlugin")
shakespeare_func = KernelFunctionFromPrompt(
    function_name="Shakespeare",
    plugin_name="WriterPlugin",
    prompt="""
{{$input}}

Rewrite the above in the style of Shakespeare.
""",
    prompt_execution_settings=OpenAIChatPromptExecutionSettings(
        service_id=service_id,
        max_tokens=2000,
        temperature=0.8,
    ),
    description="Rewrite the input in the style of Shakespeare.",
)
kernel.add_function(plugin_name="WriterPlugin", function=shakespeare_func)
for plugin_name, plugin in kernel.plugins.items():
    for function_name, function in plugin.functions.items():
        print(f"Plugin: {plugin_name}, Function: {function_name}")
To build more advanced planners, we need to introduce a proper Plan object that can contain all the state and information needed for high-quality plans. To see what that object model looks like, see plan.py in the Semantic Kernel repository: https://github.com/microsoft/semantic-kernel/blob/main/python/semantic_kernel/planners/plan.py
The Sequential Planner is an XML-based, step-by-step planner. You can see the prompt used for it here.
from semantic_kernel.planners import SequentialPlanner
planner = SequentialPlanner(kernel, service_id)
sequential_plan = await planner.create_plan(goal=ask)
To see the steps that the Sequential Planner will take, we can iterate over them and print their descriptions.
print("The plan's steps are:")
for step in sequential_plan._steps:
    print(
        f"- {step.description.replace('.', '') if step.description else 'No description'} using {step.metadata.fully_qualified_name} with parameters: {step.parameters}"
    )
Let's ask the sequential planner to execute the plan.
result = await sequential_plan.invoke(kernel)
print(result)
The Function Calling Stepwise Planner is based on the MRKL (Modular Reasoning, Knowledge and Language) paper and is similar to other approaches such as ReAct (Reasoning and Acting in Language Models). At its core, the stepwise planner allows the AI to form "thoughts" and "observations" and to execute actions based on those to achieve a user's goal. This continues until all required functions have been called and a final output is generated.
Please note that the Function Calling Stepwise Planner uses OpenAI function calling, and so it can only use either the AzureChatCompletion or the OpenAIChatCompletion service.
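A small, optional guard (our own addition, not part of the original sample) makes that requirement explicit before building the kernel:
# Fail fast if the selected service cannot do OpenAI-style function calling.
if selectedService not in (Service.OpenAI, Service.AzureOpenAI):
    raise ValueError("The Function Calling Stepwise Planner requires the OpenAI or AzureOpenAI service.")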
from semantic_kernel.kernel import Kernel
kernel = Kernel()
service_id = None
if selectedService == Service.OpenAI:
    from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

    service_id = "default"
    kernel.add_service(
        OpenAIChatCompletion(
            service_id=service_id,
        ),
    )
elif selectedService == Service.AzureOpenAI:
    from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

    service_id = "default"
    kernel.add_service(
        AzureChatCompletion(
            service_id=service_id,
        ),
    )
Let's create a sample EmailPlugin that simulates handling a request to get_email_address() and send_email().
from typing import Annotated
from semantic_kernel.functions.kernel_function_decorator import kernel_function
class EmailPlugin:
    """
    Description: EmailPlugin provides a set of functions to send emails.

    Usage:
        kernel.add_plugin(EmailPlugin(), plugin_name="email")

    Examples:
        {{email.SendEmail}} => Sends an email with the provided subject and body.
    """

    @kernel_function(name="SendEmail", description="Given an e-mail and message body, send an e-mail")
    def send_email(
        self,
        subject: Annotated[str, "the subject of the email"],
        body: Annotated[str, "the body of the email"],
    ) -> Annotated[str, "the output is a string"]:
        """Sends an email with the provided subject and body."""
        return f"Email sent with subject: {subject} and body: {body}"

    @kernel_function(name="GetEmailAddress", description="Given a name, find the email address")
    def get_email_address(
        self,
        input: Annotated[str, "the name of the person"],
    ):
        email = ""
        if input == "Jane":
            email = "janedoe4321@example.com"
        elif input == "Paul":
            email = "paulsmith5678@example.com"
        elif input == "Mary":
            email = "maryjones8765@example.com"
        else:
            email = "johndoe1234@example.com"
        return email
We'll add this new plugin to the kernel.
kernel.add_plugin(plugin_name="EmailPlugin", plugin=EmailPlugin())
Let's also add a couple more plugins.
from semantic_kernel.core_plugins.math_plugin import MathPlugin
from semantic_kernel.core_plugins.time_plugin import TimePlugin
kernel.add_plugin(plugin_name="MathPlugin", plugin=MathPlugin())
kernel.add_plugin(plugin_name="TimePlugin", plugin=TimePlugin())
We will define our FunctionCallingStepwisePlanner and the questions we want to ask.
from semantic_kernel.planners.function_calling_stepwise_planner import (
    FunctionCallingStepwisePlanner,
    FunctionCallingStepwisePlannerOptions,
)
questions = [
    "What is the current hour number, plus 5?",
    "What is 387 minus 22? Email the solution to John and Mary.",
    "Write a limerick, translate it to Spanish, and send it to Jane",
]
options = FunctionCallingStepwisePlannerOptions(
    max_iterations=10,
    max_tokens=4000,
)
planner = FunctionCallingStepwisePlanner(service_id=service_id, options=options)
Let's loop through the questions and invoke the planner.
for question in questions:
    result = await planner.invoke(kernel, question)
    print(f"Q: {question}\nA: {result.final_answer}\n")

    # Uncomment the following line to view the planner's process for completing the request
    # print(f"Chat history: {result.chat_history}\n")