Exploring Microsoft’s .NET AI Template: Complete Walkthrough
Today we’re going to explore Microsoft’s .NET AI Template, which was released back in March. This template includes two project types — AI Chat Web App and Local MCP Server Console App.
In this walkthrough, we’ll focus on the AI Chat Web App, install the template, see what it includes, create a project in Visual Studio, and run it to see how it works in action.
Installing the .NET AI Template
Let’s begin by installing the AI template.
Open your terminal and run the following command:
dotnet new install Microsoft.Extensions.AI.Templates
This installs the templates locally. Once the installation is complete, you’ll see two new templates available:
- AI Chat Web App
- Local MCP Server Console App
These templates are accessible from Visual Studio, Visual Studio Code (with the C# Dev Kit), or directly through the command line using:
dotnet new aichatweb
or
dotnet new mcpserver
to create a project in your working directory.
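For reference, a full CLI workflow might look like the sketch below. The `-n` and `-o` flags are standard `dotnet new` options; the template also exposes its own options for the provider and vector store, so run `dotnet new aichatweb --help` to confirm the exact flag names for your template version.

```shell
# Create a new AI Chat Web App in a ChatApp1 subfolder
dotnet new aichatweb -n ChatApp1 -o ChatApp1

# The template's own options let you pick the provider and vector store up front;
# verify the flag names with `dotnet new aichatweb --help` before relying on them
dotnet new aichatweb -n ChatApp1 --provider githubmodels --vector-store local
```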
For today’s demo, we’ll use the AI Chat Web App inside Visual Studio.
Creating a Project in Visual Studio
Let’s switch over to Visual Studio.
On the Create a new project screen, open the All project types dropdown. You’ll now notice a new category called AI.
Select AI, and you’ll find two templates:
- AI Chat Web App
- Local MCP Server Console App
Choose AI Chat Web App and click Next.
Give your project a name (let's call it ChatApp1), choose a location, and click Next.
After naming your project, you can select both an AI model provider and a vector store.
Available AI service providers include:
- Azure OpenAI
- GitHub Models
- Ollama
- OpenAI Platform
For this example, we’ll use GitHub Models with a local vector store, as it’s the easiest option for getting started.
Click Create, and Visual Studio will scaffold everything automatically — including a Blazor-based chat interface, a Data folder, and Service setup.
Configuring the GitHub Token
Next, we need to configure the GitHub token.
Inside the project's README file, you'll see an instruction like this:
"GitHubModelsToken": "your-token"
This token needs to be added to your project's secrets.json file.
Open secrets.json and paste your GitHub token using the same JSON format shown in the README.
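Rather than editing secrets.json by hand, you can also store the token with the .NET user-secrets tool. The key name below mirrors the one shown above; confirm the exact key against your generated README, since it can differ between template versions.

```shell
# Run from the project directory (the one containing the .csproj)
dotnet user-secrets set "GitHubModelsToken" "your-token"

# Verify the secret was stored
dotnet user-secrets list
```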
Exploring the Project Structure
Now that we’ve added the GitHub token, let’s take a closer look at what was generated.
- The Pages folder contains Blazor pages such as Chat.razor, the main chat interface.
- The Services folder holds the logic behind the conversation flow.
- Inside wwwroot/data, you'll find sample PDF files. These files are used for data ingestion and semantic search.
When you run the app, it reads these PDFs, builds embeddings, and allows the chat to answer questions based on that content.
Understanding Program.cs
Let's open the Program.cs file. Here, the AI services and vector store are automatically configured.
- The GPT-4o mini model is used as the LLM.
- The text-embedding-3-small model is used for embeddings.
- SQLite serves as the vector database.
All this setup is generated automatically — no manual configuration is needed.
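To give a feel for that wiring, here is a rough sketch of what the GitHub Models + local vector store combination looks like in Program.cs. Treat it as an outline rather than the exact generated code: the configuration key, endpoint URL, and helper names are assumptions based on the Microsoft.Extensions.AI and OpenAI SDK packages the template references, and the generated file will differ in detail.

```csharp
// Sketch only — the actual generated Program.cs differs in detail.
// Assumes the OpenAI SDK and Microsoft.Extensions.AI packages referenced by the template.
var credential = new ApiKeyCredential(builder.Configuration["GitHubModelsToken"]!);
var openAIOptions = new OpenAIClientOptions
{
    // GitHub Models inference endpoint (as of this writing)
    Endpoint = new Uri("https://models.inference.ai.azure.com")
};
var openAIClient = new OpenAIClient(credential, openAIOptions);

// Chat model: gpt-4o-mini, with function calling enabled so custom tools work
builder.Services.AddChatClient(openAIClient.GetChatClient("gpt-4o-mini").AsIChatClient())
    .UseFunctionInvocation();

// Embedding model: text-embedding-3-small, used for ingestion and search
builder.Services.AddEmbeddingGenerator(
    openAIClient.GetEmbeddingClient("text-embedding-3-small").AsIEmbeddingGenerator());
```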
How Data Ingestion Works
There are several key classes in the project responsible for handling data flow.
DataIngestor Class
This class reads and processes PDF files in the wwwroot/data folder.
It extracts text from each PDF, generates embeddings using the selected model, and stores them in the SQLite vector database.
This essentially turns your documents into searchable vector data that the AI can reference later.
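The ingestion pass can be sketched as the loop below. The helper names (ExtractAndChunk, vectorStore, IngestedChunk) are hypothetical stand-ins for the template's own types; the point is the extract → embed → store pipeline, not the exact API.

```csharp
// Illustrative sketch of an ingestion pass; helper names are hypothetical.
public async Task IngestAsync(string dataFolder)
{
    foreach (var pdfPath in Directory.GetFiles(dataFolder, "*.pdf"))
    {
        // 1. Extract text from the PDF and split it into chunks
        IEnumerable<string> chunks = ExtractAndChunk(pdfPath);

        foreach (var chunk in chunks)
        {
            // 2. Generate an embedding vector for each chunk
            var embedding = await embeddingGenerator.GenerateEmbeddingVectorAsync(chunk);

            // 3. Store the chunk text plus its vector in the SQLite vector database
            await vectorStore.UpsertAsync(new IngestedChunk
            {
                Text = chunk,
                Vector = embedding,
                SourceFile = Path.GetFileName(pdfPath)
            });
        }
    }
}
```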
SemanticSearch Class
During chat interactions, this class takes the user’s question, converts it into an embedding, compares it with stored document vectors, and retrieves the most relevant pieces of information.
The chatbot then uses those results to generate a natural-language answer, often including citations that show where the information came from.
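The retrieval step follows the same pattern in reverse. Again, the names below (vectorStore, NearestNeighborsAsync, IngestedChunk) are illustrative placeholders for the template's own types, not its literal API.

```csharp
// Illustrative sketch of the retrieval step; names are hypothetical.
public async Task<IReadOnlyList<IngestedChunk>> SearchAsync(string question, int maxResults = 5)
{
    // Embed the user's question with the same model used at ingestion time
    var queryVector = await embeddingGenerator.GenerateEmbeddingVectorAsync(question);

    // Ask the vector store for the stored chunks closest to the query vector
    var matches = await vectorStore.NearestNeighborsAsync(queryVector, maxResults);

    // These chunks (with their source file and page) become context for the LLM
    return matches;
}
```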
Together, these classes enable the app to answer based on your documents’ content — not just random guesses.
Running the Application
Let’s run the application and see it in action.
When the app starts, it automatically processes the sample PDFs in the wwwroot/data folder and generates embeddings.
Your browser will open with a clean Blazor-based chat interface.
Now you can start asking questions, such as:
“What’s included in the survival kit?”
The app will:
- Process your question
- Run a semantic search
- Generate an answer with citations indicating the source file and page
All of this happens automatically — no extra configuration required.
Using Your Own Data
Let’s now make the chatbot use your own documents.
Navigate to the wwwroot/data folder, where you'll find two default PDFs.
To use your own content, simply drop a PDF into this folder.
For example, convert one of your articles (say, about EF Core pagination) into a PDF and place it in the folder.
Then rerun the application.
The DataIngestor class will automatically detect the new file, extract text, create embeddings, and add them to the vector database.
Now, if you ask:
“Which pagination method is suitable for large datasets?”
The chat will respond using content from your article, explaining that keyset pagination is more efficient than offset pagination for large datasets because it avoids scanning skipped rows and performs faster on big tables.
You’ll also see citations showing exactly which file and page the data came from.
Extending Chatbot Functionality
Now that your app is running, let’s see how to extend its behavior.
The project is built using Microsoft.Extensions.AI, which allows you to plug in custom C# functions that the chatbot can call.
This means you can add new capabilities — for instance, fetching live data, connecting APIs, or triggering actions.
Example: Custom GetWeather Function
Inside the Chat.razor file, you can define a new C# method like:
private string GetWeather(string city)
{
    // Returns a mock weather condition for the given city
    return city switch
    {
        "London" => "Drizzle",
        "Paris" => "Sunny",
        _ => "Cloudy"
    };
}
Then register it in the OnInitialized method by updating the chat options' tool list.
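That registration can be sketched as follows. AIFunctionFactory.Create is the Microsoft.Extensions.AI helper for wrapping a delegate as a tool; the chatOptions field name is an assumption about the generated Chat.razor, so adapt it to whatever the template actually names its ChatOptions instance.

```csharp
// In Chat.razor's @code block — sketch of tool registration.
// Assumes the generated page exposes a ChatOptions instance named chatOptions.
protected override void OnInitialized()
{
    chatOptions.Tools ??= new List<AITool>();
    chatOptions.Tools.Add(AIFunctionFactory.Create(
        GetWeather,
        name: "GetWeather",
        description: "Gets the current weather for a city"));
}
```

With function invocation enabled on the chat client, the model decides on its own when a user's question warrants calling this tool.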
Once done, the chatbot can call this function whenever a user asks about the weather.
If you now ask:
“What’s the weather in London?”
The chatbot calls the GetWeather function and replies:
“The weather in London is drizzle.”
This demonstrates how seamlessly the chatbot can integrate with your own C# logic.
You can easily extend this to connect with real APIs, databases, or even IoT devices.
Conclusion
And that’s it — a complete walkthrough of the .NET AI Template in action.
This template is a powerful starting point if you want to build your own:
- AI assistant
- Documentation bot
- Custom AI-powered web app using .NET