The Semantic Kernel SDK can be imported from the following NuGet feed:
#r "nuget: Microsoft.SemanticKernel, 1.23.0"
After adding the NuGet package, you can instantiate the kernel:
using Microsoft.SemanticKernel;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.Abstractions;
using Microsoft.Extensions.DependencyInjection;
using Kernel = Microsoft.SemanticKernel.Kernel;
// Inject your logger
// see Microsoft.Extensions.Logging.ILogger @ https://learn.microsoft.com/dotnet/core/extensions/logging
ILoggerFactory myLoggerFactory = NullLoggerFactory.Instance;
var builder = Kernel.CreateBuilder();
builder.Services.AddSingleton(myLoggerFactory);
var kernel = builder.Build();
When using the kernel for AI requests, the kernel needs settings such as the endpoint URL and credentials for the AI models.
The SDK currently supports OpenAI, Azure OpenAI, and HuggingFace. It's also possible to create your own connector and use the AI provider of your choice.
If you need an Azure OpenAI key, you can create an Azure OpenAI resource in the Azure portal.
var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        "my-finetuned-model",                // Azure OpenAI *Deployment Name*
        "https://contoso.openai.azure.com/", // Azure OpenAI *Endpoint*
        "...your Azure OpenAI Key...",       // Azure OpenAI *Key*
        serviceId: "Azure_curie")            // alias used in the prompt templates' config.json
    .AddOpenAIChatCompletion(
        "gpt-4o-mini",                       // OpenAI Model Name
        "...your OpenAI API Key...",         // OpenAI API key
        "...your OpenAI Org ID...",          // *optional* OpenAI Organization ID
        serviceId: "OpenAI_davinci")         // alias used in the prompt templates' config.json
    .Build();
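Once multiple connectors are registered, you can resolve a specific one at runtime by the serviceId alias. A minimal sketch, reusing the "OpenAI_davinci" alias from above (the key and model values are placeholders):

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        "gpt-4o-mini",
        "...your OpenAI API Key...",
        serviceId: "OpenAI_davinci")
    .Build();

// Resolve the connector registered under the "OpenAI_davinci" alias.
var chatService = kernel.GetRequiredService<IChatCompletionService>("OpenAI_davinci");

// Omitting the key resolves the default chat completion service instead.
var defaultChat = kernel.GetRequiredService<IChatCompletionService>();

Passing the serviceId as the service key lets a single kernel route different prompts to different backends without rebuilding it.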
When working with multiple backends and multiple models, the first backend registered is also the "default" used whenever a request does not specify a particular service.
Great, now that you're familiar with setting up the Semantic Kernel, let's see how we can use it to run prompts.
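As a quick preview, a prompt can be sent through a configured kernel with InvokePromptAsync. A minimal sketch, assuming `kernel` was built with a chat completion connector as shown above (the prompt text is illustrative):

using Microsoft.SemanticKernel;

// Send a plain-text prompt to the default AI service and print the result.
var result = await kernel.InvokePromptAsync("Summarize what a kernel is in one sentence.");
Console.WriteLine(result);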