In this example, we show how you can build a simple chat bot by sending and updating the kernel arguments with your requests.
We introduce the KernelArguments object, which in this demo functions similarly to a key-value store that you can use when running the kernel.
The chat history is local (i.e. in your computer's RAM) and not persisted anywhere beyond the life of this Jupyter session.
In future examples, we will show how to persist the chat history on disk so that you can bring it into your applications.
In this chat scenario, as the user talks back and forth with the bot, the chat history gets populated with the conversation so far. During each new run of the kernel, the kernel arguments supply the prompt's variables, including that accumulated history, to the AI.
!python -m pip install semantic-kernel==1.0.3
# 'services' is the local services.py helper that ships alongside these notebooks
from services import Service
# Select a service to use for this notebook (available services: OpenAI, AzureOpenAI, HuggingFace)
selectedService = Service.OpenAI
from semantic_kernel import Kernel
kernel = Kernel()
service_id = None
if selectedService == Service.OpenAI:
    from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

    service_id = "oai_chat_gpt"
    kernel.add_service(
        OpenAIChatCompletion(service_id=service_id, ai_model_id="gpt-3.5-turbo-1106"),
    )
elif selectedService == Service.AzureOpenAI:
    from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

    service_id = "aoai_chat_completion"
    kernel.add_service(
        AzureChatCompletion(service_id=service_id),
    )
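Neither constructor above receives credentials directly; the connectors read them from environment variables or a .env file. The variable names below are assumptions based on the 1.x Python SDK's settings classes, so double-check them against your version if your service fails to construct.
import os

# Assumed credential variables for the 1.x Python SDK:
#   OpenAI:        OPENAI_API_KEY (and optionally OPENAI_ORG_ID)
#   Azure OpenAI:  AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT,
#                  AZURE_OPENAI_CHAT_DEPLOYMENT_NAME
# (A .env file next to the notebook also works, so an unset variable here
# is not necessarily an error.)
if selectedService == Service.OpenAI and "OPENAI_API_KEY" not in os.environ:
    print("OPENAI_API_KEY not found in the environment; relying on a .env file")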
Let's define a prompt that outlines the dialogue with the chat bot.
prompt = """
ChatBot can have a conversation with you about any topic.
It can give explicit instructions or say 'I don't know' if it does not have an answer.
{{$history}}
User: {{$user_input}}
ChatBot: """
Register your semantic function
from semantic_kernel.connectors.ai.open_ai.prompt_execution_settings.open_ai_prompt_execution_settings import (
OpenAIChatPromptExecutionSettings,
)
from semantic_kernel.prompt_template.input_variable import InputVariable
from semantic_kernel.prompt_template import PromptTemplateConfig
if selectedService == Service.OpenAI:
    execution_settings = OpenAIChatPromptExecutionSettings(
        service_id=service_id,
        ai_model_id="gpt-3.5-turbo-1106",
        max_tokens=2000,
        temperature=0.7,
    )
elif selectedService == Service.AzureOpenAI:
    execution_settings = OpenAIChatPromptExecutionSettings(
        service_id=service_id,
        ai_model_id="gpt-35-turbo",
        max_tokens=2000,
        temperature=0.7,
    )
prompt_template_config = PromptTemplateConfig(
    template=prompt,
    name="chat",
    template_format="semantic-kernel",
    input_variables=[
        InputVariable(name="user_input", description="The user input", is_required=True),
        InputVariable(name="history", description="The conversation history", is_required=True),
    ],
    execution_settings=execution_settings,
)
chat_function = kernel.add_function(
    function_name="chat",
    plugin_name="chatPlugin",
    prompt_template_config=prompt_template_config,
)
from semantic_kernel.contents import ChatHistory
chat_history = ChatHistory()
chat_history.add_system_message("You are a helpful chatbot who is good about giving book recommendations.")
Initialize the Kernel Arguments
from semantic_kernel.functions import KernelArguments
arguments = KernelArguments(user_input="Hi, I'm looking for book suggestions", history=chat_history)
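In the Python SDK, KernelArguments subclasses dict, which is why it behaves like the key-value store described at the start; you can inspect or update entries with ordinary dict operations:
# KernelArguments supports plain dict-style access
print(list(arguments.keys()))    # ['user_input', 'history']
print(arguments["user_input"])   # Hi, I'm looking for book suggestions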
Chat with the Bot
response = await kernel.invoke(chat_function, arguments)
print(response)
Update the history with the output
chat_history.add_assistant_message(str(response))
Keep Chatting!
async def chat(input_text: str) -> None:
    # Show the user's message
    print(f"User: {input_text}")

    # Process the user message and get an answer
    answer = await kernel.invoke(chat_function, KernelArguments(user_input=input_text, history=chat_history))

    # Show the response
    print(f"ChatBot: {answer}")

    # Save both turns into the shared history so later prompts include them
    chat_history.add_user_message(input_text)
    chat_history.add_assistant_message(str(answer))
await chat("I love history and philosophy, I'd like to learn something new about Greece, any suggestion?")
await chat("that sounds interesting, what is it about?")
await chat("if I read that book, what exactly will I learn about Greek history?")
await chat("could you list some more books I could read about this topic?")
After chatting for a while, we have built up a growing history that is attached to each prompt and contains the full conversation. Let's take a look!
print(chat_history)
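Printing the object shows its serialized form; you can also walk the individual role-tagged messages:
# Each entry is a ChatMessageContent with a role and the message text
for message in chat_history.messages:
    print(f"{message.role}: {message.content}")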