Jason Haley

Ramblings from an Independent Consultant

Updated Semantic Kernel Blogs and GitHub Repo

This weekend I upgraded the semantic-kernel-getting-started repository to the latest code.

It was mostly to move to .NET 9 (I’d forgotten it was still all .NET 8) and the latest version of the Semantic Kernel libraries. However, there were a few things that needed to be changed:

Planners Were Deprecated

The Planners were deprecated last year (more information here: Semantic Kernel: Package previews, Graduations & Deprecations).

This means the blog posts Semantic Kernel Hello World Planners Part 1 and Semantic Kernel Hello World Planners Part 2 are no longer relevant. I’ve updated those blog posts to indicate the deprecation and have removed the code projects from the solution in GitHub (the code is still there but no longer loaded with the solution).


Semantic Kernel Hello World WebSearchEnginePlugin

UPDATE: The Bing Search APIs are being deprecated. See the announcement for more details: Bing Search APIs retiring on August 11, 2025. I have updated the post and corresponding GitHub code to use the Brave connector instead.

A couple of weeks ago I thought I’d written the last of these blogs, mainly because I’ve been getting more in depth with Semantic Kernel. However, after I watched Will Velida’s video Using Bing Search API in the Semantic Kernel SDK … I couldn’t help but wonder what API calls were happening behind the scenes. Will does a great job of explaining how to use the plugin and the Bing resource needed to make calls to the search API, so I won’t get into that part of it; I want to focus on the usefulness of the plugin and the API calls it makes.
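Roughly, the wiring looks like the sketch below. This is a minimal sketch and not the post’s actual code: the deployment, endpoint, and key values are placeholders, and the exact Brave connector namespace and constructor are assumptions that may differ between Semantic Kernel package versions.

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using Microsoft.SemanticKernel.Plugins.Web;
using Microsoft.SemanticKernel.Plugins.Web.Brave;

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion("<deployment>", "<endpoint>", "<api-key>")
    .Build();

// The connector makes the actual HTTP calls to the search API; the plugin just
// exposes it to the kernel as a callable "Search" function.
var connector = new BraveConnector("<brave-api-key>");
kernel.ImportPluginFromObject(new WebSearchEnginePlugin(connector), "WebSearch");

// With automatic function calling enabled, the model can decide to invoke the
// search function behind the scenes to answer the prompt.
var settings = new OpenAIPromptExecutionSettings { FunctionChoiceBehavior = FunctionChoiceBehavior.Auto() };
var answer = await kernel.InvokePromptAsync(
    "What is the latest Semantic Kernel release?",
    new KernelArguments(settings));
Console.WriteLine(answer);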


Semantic Kernel Hello World Planners Part 2 - (Deprecated)

UPDATE: Planners have been deprecated. See this Semantic Kernel blog for more detail: Semantic Kernel: Package previews, Graduations & Deprecations

Last week in the Semantic Kernel Hello World Planners Part 1 entry, I used the Handlebars planner to implement the sample Hello World functionality and then looked at the token difference between using a saved plan vs. generating a plan. In this entry I use the Function Calling Stepwise Planner to create the sample Hello World functionality and compare it to the implementation in the Semantic Kernel Hello World Plugins Part 3 entry.


Semantic Kernel Hello World Planners Part 1 - (Deprecated)

UPDATE: Planners have been deprecated. See this Semantic Kernel blog for more detail: Semantic Kernel: Package previews, Graduations & Deprecations

A few weeks ago, in the Semantic Kernel Hello World Plugins Part 3 blog entry, I showed how to use OpenAI Function Calling. The last half of that entry was all about how to view the request and response JSON going back and forth to OpenAI, which detailed four API calls. In this entry I look at using the Handlebars Planner to accomplish the same functionality. Then I’ll show the request and response JSON both for using a saved plan and for having the LLM create a plan, and end with a token usage comparison.


Semantic Kernel Hello World Plugins Part 3

Last week I blogged Part 2, showing the creation of a native function plugin. In this post I want to take that native function a step further and use OpenAI function calling. This will let us skip providing the current date when making the call to get a historic daily fact and instead have OpenAI call a function to get the current date.

I’ve added the HelloWorld.Plugin3.Console project to the GitHub repo for the code in this blog entry.
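The core idea, sketched very roughly below (the class and function names are illustrative, not the repo’s exact code): expose the current date as a native function and let automatic function calling fetch it, instead of injecting the date into the prompt ourselves.

using System.ComponentModel;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion("<deployment>", "<endpoint>", "<api-key>")
    .Build();

// Register the plugin below so its [KernelFunction] methods are described to the model.
kernel.ImportPluginFromType<DateTimePlugin>();

var settings = new OpenAIPromptExecutionSettings
{
    // Let the model call GetCurrentDate on its own instead of us passing the date in.
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var result = await kernel.InvokePromptAsync(
    "Tell me an interesting fact about an event that took place on today's date in history.",
    new KernelArguments(settings));
Console.WriteLine(result);

// A native function is just annotated C#; the attributes give the model enough
// metadata to know when and how to call it.
public class DateTimePlugin
{
    [KernelFunction, Description("Returns the current date, for example 'May 04'.")]
    public string GetCurrentDate() => DateTime.Now.ToString("MMMM dd");
}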


Semantic Kernel Hello World Plugins Part 2

Two weeks ago I blogged Part 1, in which I moved the prompt to a prompt template. In this part, I implement a native function that will take in the current date and make the call to the LLM.

I’ve put the code for this blog in the HelloWorld.Plugin2.Console project in the same repo as the other SK entries: semantic-kernel-getting-started.

Semantic Kernel Plugin: Native Function

There is a good Microsoft Learn module, Give your AI agent skills, that walks you through the details of what a native function is and how to implement one.
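In rough terms, a native function is just an annotated C# method the kernel can invoke with arguments. The sketch below is illustrative only (the plugin name and prompt are placeholders, not the repo’s exact code):

using System.ComponentModel;
using Microsoft.SemanticKernel;

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion("<deployment>", "<endpoint>", "<api-key>")
    .Build();

// Each [KernelFunction] method on the class below becomes an invokable kernel function.
var plugin = kernel.ImportPluginFromType<DailyFactPlugin>();

// Invoke the native function directly, passing the current date as an argument.
var fact = await kernel.InvokeAsync(plugin["GetDailyFact"],
    new KernelArguments { ["today"] = DateTime.Now.ToString("MMMM dd") });
Console.WriteLine(fact);

// The native function forwards the request to the LLM through a prompt that
// includes the supplied date.
public class DailyFactPlugin
{
    [KernelFunction, Description("Gets an interesting historical fact for the given date.")]
    public async Task<string> GetDailyFact(
        Kernel kernel,
        [Description("Current date, for example 'May 04'.")] string today)
    {
        var result = await kernel.InvokePromptAsync(
            $"Tell me an interesting fact from world history about an event that took place on {today}.");
        return result.GetValue<string>() ?? string.Empty;
    }
}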


Semantic Kernel Hello World Plugins Part 1

A couple of weeks ago, in my last entry, I created a simple Hello World application with Semantic Kernel. Since then, I’ve worked my way through the Microsoft Learn learning path APL-2005: Develop AI agents using Azure OpenAI and the Semantic Kernel SDK, which I highly recommend if you are also learning SK.

In this entry I’m going to start with the code from the last entry and extract the prompt to a plugin. I’ve put the code for this blog in the same repo as the last entry: semantic-kernel-getting-started.
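As a rough sketch of the idea (not the post’s exact code; the template and names here are placeholders), the inline prompt becomes a templated function with a {{$today}} variable that gets filled in at invocation time:

using Microsoft.SemanticKernel;

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion("<deployment>", "<endpoint>", "<api-key>")
    .Build();

// The prompt now lives in a template with a {{$today}} variable instead of
// being concatenated into a string at the call site.
var dailyFact = kernel.CreateFunctionFromPrompt(
    "Tell me an interesting fact from world history about an event " +
    "that took place on {{$today}}. Be sure to mention the date in history for context.",
    functionName: "DailyFact");

var result = await kernel.InvokeAsync(dailyFact,
    new KernelArguments { ["today"] = DateTime.Now.ToString("MMMM dd") });
Console.WriteLine(result);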


Semantic Kernel Hello World

This past Thursday night, after the Virtual Boston Azure meetup, Bill Wilder (@codingoutloud) ran a hands-on AI mini-workshop for the attendees who were interested in writing code against the Azure OpenAI API.

This post is me using the same idea but with Semantic Kernel.

OpenAI Chat Hello World C#

Bill provided the following code for us to get a simple OpenAI chat working:

using Azure;
using Azure.AI.OpenAI;


string? key = "...";
string? endpoint = "...";
string? deployment = "...";

// output today's date just for fun
Console.WriteLine($"\n----------------- DEBUG INFO -----------------");
var today = DateTime.Now.ToString("MMMM dd");
Console.WriteLine($"Today is {today}");
Console.WriteLine("----------------------------------------------");


var client = new OpenAIClient(new Uri(endpoint), new AzureKeyCredential(key));

// TODO: CHALLENGE 1: does the AI respond accurately to this prompt? How to fix?
var prompt = $"Tell me an interesting fact from world about an event " +
            $"that took place on {today}. " +
            $"Be sure to mention the date in history for context.";

CompletionsOptions completionsOptions = new()
{
    Temperature = 0.7f,
    DeploymentName = deployment,
    Prompts = { prompt },
    MaxTokens = 250,  // PLEASE DON'T MAKE LARGER THAN 250 (but see what happens at 25)
};

Response<Completions> completionsResponse = client.GetCompletions(completionsOptions);

Console.WriteLine($"\nPROMPT: \n\n{prompt}");

int i = 0;
foreach (var choice in completionsResponse.Value.Choices)
{    
    Console.WriteLine($"\nRESPONSE {++i}/{completionsResponse.Value.Choices.Count}:" +
        $"{choice.Text}");
}

Console.WriteLine($"\n----------------- DEBUG INFO -----------------");
Console.WriteLine($"Tokens used: {completionsResponse.Value.Usage.CompletionTokens}/{completionsOptions.MaxTokens}");
Console.WriteLine("----------------------------------------------");

When you run this code (you’ll of course need to add in your own values for the key, endpoint, and deployment), you will get a response like this: