This notebook shows how to use OpenAI's ChatCompletion API, popularized by ChatGPT. The API introduces the ChatML message schema, which differs from the TextCompletion API: the text completion API takes a prompt as input and returns a plain string, whereas the chat completion API takes a chat history as input and returns a new message:
messages = [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Who won the world series in 2020?" },
    { "role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020." },
    { "role": "user", "content": "Where was it played?" }
]
Note that there are three message roles: system (instructions describing how the assistant should behave), user (messages sent by the user), and assistant (messages generated by the model).
Your app is responsible for adding messages to the chat history and maintaining this object. The Chat Completion API is stateless and returns only new messages, which your app can use, for example, to execute commands, generate images, or simply continue the conversation.
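In Semantic Kernel this history is represented by the ChatHistory class used later in this notebook. As a minimal, purely illustrative sketch, the conversation above maps to:
using Microsoft.SemanticKernel.ChatCompletion;

// Illustrative only: the same conversation expressed with SK's ChatHistory object
var history = new ChatHistory("You are a helpful assistant.");
history.AddUserMessage("Who won the world series in 2020?");
history.AddAssistantMessage("The Los Angeles Dodgers won the World Series in 2020.");
history.AddUserMessage("Where was it played?");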
When deciding which one to use, note that ChatGPT models (e.g. gpt-3.5-turbo) are optimized for chat applications and have been fine-tuned for instruction following and dialogue. Even so, when creating semantic plugins with Semantic Kernel, you may still find the TextCompletion model better suited to certain use cases.
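If a plain prompt-in/string-out model fits your plugin better, the OpenAI connector also exposes a text-generation registration. The sketch below is an assumption based on that connector (the model id and API key are placeholders), and it presumes the SDK package imported in the setup cell further down:
#pragma warning disable SKEXP0010

using Microsoft.SemanticKernel;

// Hypothetical alternative: register a text completion (text generation) service
// instead of a chat one. The model id and API key below are placeholders.
var textBuilder = Microsoft.SemanticKernel.Kernel.CreateBuilder();
textBuilder.AddOpenAITextGeneration("gpt-3.5-turbo-instruct", "...your OpenAI API key...");
var textKernel = textBuilder.Build();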
The code below shows how to set up SK with ChatGPT and how to manage the chat history object. To make things a little more interesting, it asks ChatGPT to reply with image descriptions instead, so we can hold a dialogue in images by leveraging the DALL-E 3 integration.
// Usual setup: importing Semantic Kernel SDK and SkiaSharp, used to display images inline.
#r "nuget: Microsoft.SemanticKernel, 1.11.1"
#r "nuget: SkiaSharp, 2.88.3"
#!import config/Settings.cs
#!import config/Utils.cs
#!import config/SkiaUtils.cs
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.TextToImage;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
The notebook uses two AI services from the kernel: IChatCompletionService to manage the chat and ITextToImageService (DALL-E 3) to generate images.
using Kernel = Microsoft.SemanticKernel.Kernel;
#pragma warning disable SKEXP0001, SKEXP0010
// Load OpenAI credentials from config/settings.json
var (useAzureOpenAI, model, azureEndpoint, apiKey, orgId) = Settings.LoadFromFile();
// Configure the two AI features: OpenAI Chat and DALL-E 3 for image generation
var builder = Kernel.CreateBuilder();
if(useAzureOpenAI)
{
builder.AddAzureOpenAIChatCompletion("gpt-35-turbo", azureEndpoint, apiKey);
builder.AddAzureOpenAITextToImage("dall-e-3", azureEndpoint, apiKey);
}
else
{
builder.AddOpenAIChatCompletion("gpt-3.5-turbo", apiKey, orgId);
builder.AddOpenAITextToImage(apiKey, orgId);
}
var kernel = builder.Build();
// Get AI service instance used to generate images
var dallE = kernel.GetRequiredService<ITextToImageService>();
// Get AI service instance used to manage the user chat
var chatGPT = kernel.GetRequiredService<IChatCompletionService>();
Before starting the chat, we create a new chat object with some instructions, which are stored in the chat history as the system message.
The instructions tell OpenAI what kind of chat we want to have: in this case we ask it to reply with image descriptions, so that we can chain ChatGPT with DALL-E 3.
var systemMessage = "You're chatting with a user. Instead of replying directly to the user"
+ " provide a description of a cartoonish image that expresses what you want to say."
+ " The user won't see your message, they will see only the image."
+ " Describe the image with details in one sentence.";
var chat = new ChatHistory(systemMessage);
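Optionally, the same ChatHistory object can be seeded with an example exchange before the loop starts, to nudge the style of the descriptions. The sketch below is purely illustrative and the example text is invented:
// Hypothetical few-shot seeding: an invented example turn showing that the
// history can carry prior messages in addition to the system instructions.
chat.AddUserMessage("Hi! How are you today?");
chat.AddAssistantMessage("A cheerful cartoon sun wearing sunglasses waves hello over a bright green meadow.");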
Run the following code to start the chat. The chat consists of a loop with three main steps:
1. Ask the user for a message and add it to the chat history.
2. Send the chat history to the AI and add the generated reply to the history.
3. Render the reply as an image generated by DALL-E 3.
Note: to stop the chat in VS Code, press ESC on the keyboard or the "stop" button on the left side of the cell.
#pragma warning disable SKEXP0001

// Alias used by GetInputAsync below to prompt for input in .NET Interactive notebooks
using InteractiveKernel = Microsoft.DotNet.Interactive.Kernel;
while (true)
{
// 1. Ask the user for a message and add it to the chat history object.
var userMessage = await InteractiveKernel.GetInputAsync("Your message");
Console.WriteLine($"User: {userMessage}");
chat.AddUserMessage(userMessage);
// 2. Send the chat history to the AI to generate a response, then add the assistant's reply to the history.
var assistantReply = await chatGPT.GetChatMessageContentAsync(chat, new OpenAIPromptExecutionSettings());
chat.AddAssistantMessage(assistantReply.Content);
// 3. Show the reply as an image
Console.WriteLine($"\nBot:");
var imageUrl = await dallE.GenerateImageAsync(assistantReply.Content, 1024, 1024);
await SkiaUtils.ShowImage(imageUrl, 1024, 1024);
Console.WriteLine($"[{assistantReply}]\n");
}
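After stopping the loop, the accumulated conversation is still available in the chat object. As a small sketch, you could dump the transcript like this (each message exposes a Role and a Content):
// Illustrative helper: print the transcript accumulated in the ChatHistory object.
foreach (var message in chat)
{
    Console.WriteLine($"{message.Role}: {message.Content}");
}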