
Commit d4d30bb

Updated meta data for AI (#36485)
1 parent a1f7171 commit d4d30bb

2 files changed

Lines changed: 19 additions & 17 deletions

File tree

aspnetcore/migration/fx-to-core/tooling.md

Lines changed: 2 additions & 1 deletion
@@ -4,6 +4,7 @@ ai-usage: ai-assisted
 author: wadepickett
 description: Learn how to upgrade ASP.NET Framework MVC, Web API, or Web Forms projects to ASP.NET Core using migration tooling.
 ms.author: wpickett
+ms.collection: ce-skilling-ai-copilot
 ms.date: 12/04/2025
 uid: migration/fx-to-core/tooling
 ---
@@ -15,7 +16,7 @@ To upgrade ASP.NET Framework applications (MVC, Web API, and Web Forms) to ASP.N
 
 The GitHub Copilot app modernization agent is a Visual Studio extension that leverages AI to simplify the process of upgrading legacy .NET applications. By integrating with GitHub Copilot Chat, this tool analyzes your solution to generate upgrade plans and assists in rewriting code to support ASP.NET Core. It streamlines the migration workflow by reducing manual effort, identifying dependencies, and providing interactive, automated guidance for modernizing your codebase. To learn how to upgrade your ASP.NET apps using the recommended tooling, see [How to upgrade a .NET app with GitHub Copilot app modernization](/dotnet/core/porting/github-copilot-app-modernization/how-to-upgrade-with-github-copilot).
 
-If your .NET Framework project has supporting libraries in the solution that are required, they should be upgraded to .NET Standard 2.0, if possible. For more information, see [Upgrade supporting libraries](xref:migration/fx-to-core/start#upgrade-supporting-libraries).
+If your .NET Framework project has supporting libraries in the solution that are required, upgrade them to .NET Standard 2.0, if possible. For more information, see [Upgrade supporting libraries](xref:migration/fx-to-core/start#upgrade-supporting-libraries).
 
 > [!IMPORTANT]
 > .NET Upgrade Assistant is officially deprecated. Use the [GitHub Copilot app modernization chat agent](/dotnet/core/porting/github-copilot-app-modernization/overview) instead, which is included with Visual Studio 2026 and Visual Studio 2022 17.14.16 or later. This agent analyzes your projects and dependencies, produces a step-by-step migration plan with targeted recommendations and automated code fixes, and commits each change so you can validate or roll back. It automates common porting tasks—updating project files, replacing deprecated APIs, and resolving build issues—so you can modernize faster with less manual effort.

aspnetcore/tutorials/ai-powered-group-chat/ai-powered-group-chat.md

Lines changed: 17 additions & 16 deletions
@@ -3,13 +3,14 @@ title: Sample AI-Powered Group Chat with SignalR and OpenAI
 author: kevinguo-ed
 description: A tutorial explaining how SignalR and OpenAI are used together to build an AI-powered group chat
 ms.author: wpickett
-ms.date: 03/19/2025
+ms.collection: ce-skilling-ai-copilot
+ms.date: 12/11/2025
 uid: tutorials/ai-powered-group-chat
 ---
 
 # AI-Powered Group Chat sample with SignalR and OpenAI
 
-The AI-Powered Group Chat sample demonstrates how to integrate OpenAI's capabilities into a real-time group chat application using ASP.NET Core SignalR.
+The AI-Powered Group Chat sample demonstrates how to integrate OpenAI's capabilities into a real-time group chat application by using ASP.NET Core SignalR.
 
 * View or download [the complete sample code](https://github.com/microsoft/SignalR-Samples-AI/tree/main/AIStreaming).
 
@@ -25,18 +26,18 @@ This sample uses OpenAI for generating intelligent, context-aware responses and
 
 ## Dependencies
 
-Either Azure OpenAI or OpenAI can be used for this project. Make sure to update the `endpoint` and `key` in `appsettings.json`. `OpenAIExtensions` reads the configuration when the app starts, and the configuration values for `endpoint` and `key` are required to authenticate and use either service.
+You can use either Azure OpenAI or OpenAI for this project. Make sure to update the `endpoint` and `key` in `appsettings.json`. `OpenAIExtensions` reads the configuration when the app starts. You need to provide configuration values for `endpoint` and `key` to authenticate and use either service.
 
 ### [OpenAI](#tab/open-ai)
 
-To build this application, you will need the following:
+To build this application, you need the following resources:
 * ASP.NET Core: To create the web application and host the SignalR hub.
 * [SignalR](https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client): For real-time communication between clients and the server.
 * [OpenAI Client](https://www.nuget.org/packages/OpenAI): To interact with OpenAI's API for generating AI responses.
 
 ### [Azure OpenAI](#tab/azure-open-ai)
 
-To build this application, you will need the following:
+To build this application, you need the following resources:
 * ASP.NET Core: To create the web application and host the SignalR hub.
 * [SignalR](https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client): For real-time communication between clients and the server.
 * [Azure OpenAI](https://www.nuget.org/packages/Azure.AI.OpenAI): `Azure.AI.OpenAI`
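The `endpoint` and `key` settings that this hunk describes might look like the following sketch. The `OpenAI` section name and the placeholder values are assumptions for illustration, not the sample's exact configuration schema:

```json
{
  "OpenAI": {
    "endpoint": "https://YOUR-RESOURCE-NAME.openai.azure.com/",
    "key": "YOUR-API-KEY"
  }
}
```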
@@ -55,17 +56,17 @@ The following diagram highlights the step-by-step communication and processing i
 
 In the previous diagram:
 
-* The Client sends instructions to the Server, which then communicates with OpenAI to process these instructions.
-* OpenAI responds with partial completion data, which the Server forwards back to the Client. This process repeats multiple times for an iterative exchange of data between these components.
+* The client sends instructions to the server. The server communicates with OpenAI to process these instructions.
+* OpenAI responds with partial completion data. The server forwards this data back to the client. This process repeats multiple times for an iterative exchange of data between these components.
 
 ### SignalR Hub integration
 
 The `GroupChatHub` class manages user connections, message broadcasting, and AI interactions.
 
-When a user sends a message starting with `@gpt`:
+When a user sends a message that starts with `@gpt`:
 
-* The hub forwards it to OpenAI, which generates a response.
-* The AI's response is streamed back to the group in real-time.
+* The hub forwards the message to OpenAI, which generates a response.
+* The AI's response streams back to the group in real time.
 
 The following code snippet demonstrates how the `CompleteChatStreamingAsync` method streams responses from OpenAI incrementally:
 
@@ -88,15 +89,15 @@ await foreach (var completion in
 
 In the previous code:
 
-* `chatClient.CompleteChatStreamingAsync(messagesIncludeHistory)` initiates the streaming of AI responses.
+* `chatClient.CompleteChatStreamingAsync(messagesIncludeHistory)` starts streaming AI responses.
 * The `totalCompletion.Append(content)` line accumulates the AI's response.
-* If the length of the buffered content exceeds 20 characters, the buffered content is sent to the clients using `Clients.Group(groupName).SendAsync`.
+* If the buffered content length exceeds 20 characters, the hub sends the buffered content to clients by using `Clients.Group(groupName).SendAsync`.
 
-This ensures that the AI's response is delivered to the users in real-time, providing a seamless and interactive chat experience.
+By using this approach, the AI's response reaches users in real time, creating a seamless and interactive chat experience.
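The buffered-flush step that this hunk documents can be sketched as follows. Variable names (`totalCompletion`, `lastSentTokenLength`, `groupName`) follow the snippets quoted in the article; the SignalR method name and message arguments are assumptions, not the sample's exact code:

```csharp
// Sketch: inside the hub's streaming loop (assumed shape, not the sample's exact code).
// totalCompletion accumulates every streamed fragment; lastSentTokenLength tracks
// how much of it has already been broadcast to the group.
await foreach (var completion in chatClient.CompleteChatStreamingAsync(messagesIncludeHistory))
{
    foreach (var content in completion.ContentUpdate)
    {
        totalCompletion.Append(content.Text);

        // Flush only when at least 20 new characters have buffered, so clients
        // receive readable chunks instead of one message per token.
        if (totalCompletion.Length - lastSentTokenLength > 20)
        {
            await Clients.Group(groupName)
                .SendAsync("ReceiveMessage", "assistant", totalCompletion.ToString());
            lastSentTokenLength = totalCompletion.Length;
        }
    }
}
```

A final flush after the loop would send any remaining buffered characters shorter than the threshold.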
 
 ### Maintain context with history
 
-Every request to [OpenAI's Chat Completions API](https://platform.openai.com/docs/guides/chat-completions) is stateless. OpenAI doesn't store past interactions. In a chat app, what a user or an assistant has said is important for generating a response that's contextually relevant. To achieve this, include chat history in every request to the Completions API.
+Every request to [OpenAI's Chat Completions API](https://platform.openai.com/docs/guides/chat-completions) is stateless. OpenAI doesn't store past interactions. In a chat app, what a user or an assistant says is important for generating a response that's contextually relevant. To achieve this relevance, include chat history in every request to the Completions API.
 
 The `GroupHistoryStore` class manages chat history for each group. It stores messages posted by both the users and AI assistants, ensuring that the conversation context is preserved across interactions. This context is crucial for generating coherent AI responses.
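A per-group history store along the lines described above might look like this sketch. The class and method shapes here are assumptions for illustration, not the sample's actual implementation:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using OpenAI.Chat;

// Sketch of a per-group chat history store (assumed shape, not the sample's code).
public class GroupHistoryStore
{
    private readonly ConcurrentDictionary<string, List<ChatMessage>> _history = new();

    // Record what a user said so later requests carry the full context.
    public void AddUserMessage(string groupName, string userName, string message) =>
        GetOrAddGroupHistory(groupName)
            .Add(ChatMessage.CreateUserMessage($"{userName}: {message}"));

    // Record the assistant's reply for the same reason.
    public void AddAssistantMessage(string groupName, string message) =>
        GetOrAddGroupHistory(groupName)
            .Add(ChatMessage.CreateAssistantMessage(message));

    // The hub passes this full list to the Completions API on every request,
    // because each request is stateless and carries no server-side memory.
    public List<ChatMessage> GetOrAddGroupHistory(string groupName) =>
        _history.GetOrAdd(groupName, _ => new List<ChatMessage>());
}
```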

@@ -141,6 +142,6 @@ if (totalCompletion.Length - lastSentTokenLength > 20)
 
 This project opens up exciting possibilities for further enhancement:
 1. **Advanced AI features**: Leverage other OpenAI capabilities like sentiment analysis, translation, or summarization.
-1. **Incorporating multiple AI agents**: You can introduce multiple AI agents with distinct roles or expertise areas within the same chat. For example, one agent might focus on text generation while the other provides image or audio generation. This can create a richer and more dynamic user experience where different AI agents interact seamlessly with users and each other.
-1. **Share chat history between server instances**: Implement a database layer to persist chat history across sessions, allowing conversations to resume even after a disconnect. Beyond SQL or NO SQL based solutions, you can also explore using a caching service like Redis. It can significantly improve performance by storing frequently accessed data, such as chat history or AI responses, in memory. This reduces latency and offloads database operations, leading to faster response times, particularly in high-traffic scenarios.
+1. **Incorporating multiple AI agents**: You can introduce multiple AI agents with distinct roles or expertise areas within the same chat. For example, one agent might focus on text generation while the other provides image or audio generation. This approach creates a richer and more dynamic user experience where different AI agents interact seamlessly with users and each other.
+1. **Share chat history between server instances**: Implement a database layer to persist chat history across sessions, allowing conversations to resume even after a disconnect. Beyond SQL or NoSQL based solutions, you can also explore using a caching service like Redis. It can significantly improve performance by storing frequently accessed data, such as chat history or AI responses, in memory. This reduces latency and offloads database operations, leading to faster response times, particularly in high-traffic scenarios.
 1. **Leveraging Azure SignalR Service**: [Azure SignalR Service](/azure/azure-signalr/signalr-overview) provides scalable and reliable real-time messaging for your application. By offloading the SignalR backplane to Azure, you can scale out the chat application easily to support thousands of concurrent users across multiple servers. Azure SignalR also simplifies management and provides built-in features like automatic reconnections.
