aspnetcore/migration/fx-to-core/tooling.md (2 additions, 1 deletion)
@@ -4,6 +4,7 @@ ai-usage: ai-assisted
 author: wadepickett
 description: Learn how to upgrade ASP.NET Framework MVC, Web API, or Web Forms projects to ASP.NET Core using migration tooling.
 ms.author: wpickett
+ms.collection: ce-skilling-ai-copilot
 ms.date: 12/04/2025
 uid: migration/fx-to-core/tooling
 ---
@@ -15,7 +16,7 @@ To upgrade ASP.NET Framework applications (MVC, Web API, and Web Forms) to ASP.N
 
 The GitHub Copilot app modernization agent is a Visual Studio extension that leverages AI to simplify the process of upgrading legacy .NET applications. By integrating with GitHub Copilot Chat, this tool analyzes your solution to generate upgrade plans and assists in rewriting code to support ASP.NET Core. It streamlines the migration workflow by reducing manual effort, identifying dependencies, and providing interactive, automated guidance for modernizing your codebase. To learn how to upgrade your ASP.NET apps using the recommended tooling, see [How to upgrade a .NET app with GitHub Copilot app modernization](/dotnet/core/porting/github-copilot-app-modernization/how-to-upgrade-with-github-copilot).
 
-If your .NET Framework project has supporting libraries in the solution that are required, they should be upgraded to .NET Standard 2.0, if possible. For more information, see [Upgrade supporting libraries](xref:migration/fx-to-core/start#upgrade-supporting-libraries).
+If your .NET Framework project has supporting libraries in the solution that are required, upgrade them to .NET Standard 2.0, if possible. For more information, see [Upgrade supporting libraries](xref:migration/fx-to-core/start#upgrade-supporting-libraries).
 
 > [!IMPORTANT]
 > .NET Upgrade Assistant is officially deprecated. Use the [GitHub Copilot app modernization chat agent](/dotnet/core/porting/github-copilot-app-modernization/overview) instead, which is included with Visual Studio 2026 and Visual Studio 2022 17.14.16 or later. This agent analyzes your projects and dependencies, produces a step-by-step migration plan with targeted recommendations and automated code fixes, and commits each change so you can validate or roll back. It automates common porting tasks, such as updating project files, replacing deprecated APIs, and resolving build issues, so you can modernize faster with less manual effort.
aspnetcore/tutorials/ai-powered-group-chat/ai-powered-group-chat.md (17 additions, 16 deletions)
@@ -3,13 +3,14 @@ title: Sample AI-Powered Group Chat with SignalR and OpenAI
 author: kevinguo-ed
 description: A tutorial explaining how SignalR and OpenAI are used together to build an AI-powered group chat
 ms.author: wpickett
-ms.date: 03/19/2025
+ms.collection: ce-skilling-ai-copilot
+ms.date: 12/11/2025
 uid: tutorials/ai-powered-group-chat
 ---
 
 # AI-Powered Group Chat sample with SignalR and OpenAI
 
-The AI-Powered Group Chat sample demonstrates how to integrate OpenAI's capabilities into a real-time group chat application using ASP.NET Core SignalR.
+The AI-Powered Group Chat sample demonstrates how to integrate OpenAI's capabilities into a real-time group chat application by using ASP.NET Core SignalR.
 
 * View or download [the complete sample code](https://github.com/microsoft/SignalR-Samples-AI/tree/main/AIStreaming).
@@ -25,18 +26,18 @@ This sample uses OpenAI for generating intelligent, context-aware responses and
 
 ## Dependencies
 
-Either Azure OpenAI or OpenAI can be used for this project. Make sure to update the `endpoint` and `key` in `appsettings.json`. `OpenAIExtensions` reads the configuration when the app starts, and the configuration values for `endpoint` and `key` are required to authenticate and use either service.
+You can use either Azure OpenAI or OpenAI for this project. Make sure to update the `endpoint` and `key` in `appsettings.json`. `OpenAIExtensions` reads the configuration when the app starts. You need to provide configuration values for `endpoint` and `key` to authenticate and use either service.
 
 ### [OpenAI](#tab/open-ai)
 
-To build this application, you will need the following:
+To build this application, you need the following resources:
 
 * ASP.NET Core: To create the web application and host the SignalR hub.
 * [SignalR](https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client): For real-time communication between clients and the server.
 * [OpenAI Client](https://www.nuget.org/packages/OpenAI): To interact with OpenAI's API for generating AI responses.
 
 ### [Azure OpenAI](#tab/azure-open-ai)
 
-To build this application, you will need the following:
+To build this application, you need the following resources:
 
 * ASP.NET Core: To create the web application and host the SignalR hub.
 * [SignalR](https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client): For real-time communication between clients and the server.
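The `endpoint` and `key` values mentioned above live in `appsettings.json`. The following fragment is a hedged sketch of what that configuration might look like; the exact section and property names are assumptions based on the description, not copied from the sample:

```json
{
  "OpenAI": {
    "Endpoint": "https://api.openai.com/v1",
    "Key": "<your-api-key>"
  }
}
```

For Azure OpenAI, the endpoint would instead point at your Azure resource. `OpenAIExtensions` reads these values at startup to construct the chat client.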
@@ -55,17 +56,17 @@ The following diagram highlights the step-by-step communication and processing i
 
 In the previous diagram:
 
-* The Client sends instructions to the Server, which then communicates with OpenAI to process these instructions.
-* OpenAI responds with partial completion data, which the Server forwards back to the Client. This process repeats multiple times for an iterative exchange of data between these components.
+* The client sends instructions to the server. The server communicates with OpenAI to process these instructions.
+* OpenAI responds with partial completion data. The server forwards this data back to the client. This process repeats multiple times for an iterative exchange of data between these components.
 
 ### SignalR Hub integration
 
 The `GroupChatHub` class manages user connections, message broadcasting, and AI interactions.
 
-When a user sends a message starting with `@gpt`:
+When a user sends a message that starts with `@gpt`:
 
-* The hub forwards it to OpenAI, which generates a response.
-* The AI's response is streamed back to the group in real-time.
+* The hub forwards the message to OpenAI, which generates a response.
+* The AI's response streams back to the group in real time.
 
 The following code snippet demonstrates how the `CompleteChatStreamingAsync` method streams responses from OpenAI incrementally:
@@ -88,15 +89,15 @@ await foreach (var completion in
 
 In the previous code:
 
-* `chatClient.CompleteChatStreamingAsync(messagesIncludeHistory)` initiates the streaming of AI responses.
+* `chatClient.CompleteChatStreamingAsync(messagesIncludeHistory)` starts streaming AI responses.
 * The `totalCompletion.Append(content)` line accumulates the AI's response.
-* If the length of the buffered content exceeds 20 characters, the buffered content is sent to the clients using `Clients.Group(groupName).SendAsync`.
+* If the buffered content length exceeds 20 characters, the hub sends the buffered content to clients by using `Clients.Group(groupName).SendAsync`.
 
-This ensures that the AI's response is delivered to the users in real-time, providing a seamless and interactive chat experience.
+By using this approach, the AI's response reaches users in real time, creating a seamless and interactive chat experience.
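The buffering pattern described in these bullets can be sketched as follows. This is a hedged reconstruction from the surrounding description, assuming the OpenAI .NET client's streaming API; the client event name `"newMessage"` and the `ContentUpdate` iteration details are assumptions, not necessarily the sample's exact code:

```csharp
var totalCompletion = new StringBuilder();
var lastSentTokenLength = 0;

// Stream partial completions from OpenAI as they arrive.
await foreach (var completion in
    chatClient.CompleteChatStreamingAsync(messagesIncludeHistory))
{
    foreach (var content in completion.ContentUpdate)
    {
        // Accumulate the AI's response so far.
        totalCompletion.Append(content.Text);

        // Flush to the group only when more than 20 new characters
        // have buffered, to avoid flooding clients with tiny updates.
        if (totalCompletion.Length - lastSentTokenLength > 20)
        {
            await Clients.Group(groupName)
                .SendAsync("newMessage", "ChatGPT", totalCompletion.ToString());
            lastSentTokenLength = totalCompletion.Length;
        }
    }
}
```

Batching by a character threshold trades a little latency for far fewer SignalR messages per response.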
 ### Maintain context with history
 
-Every request to [OpenAI's Chat Completions API](https://platform.openai.com/docs/guides/chat-completions) is stateless. OpenAI doesn't store past interactions. In a chat app, what a user or an assistant has said is important for generating a response that's contextually relevant. To achieve this, include chat history in every request to the Completions API.
+Every request to [OpenAI's Chat Completions API](https://platform.openai.com/docs/guides/chat-completions) is stateless. OpenAI doesn't store past interactions. In a chat app, what a user or an assistant says is important for generating a response that's contextually relevant. To achieve this relevance, include chat history in every request to the Completions API.
 
 The `GroupHistoryStore` class manages chat history for each group. It stores messages posted by both the users and AI assistants, ensuring that the conversation context is preserved across interactions. This context is crucial for generating coherent AI responses.
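A minimal per-group history store along the lines described above might look like the following sketch. The method names and the `ConcurrentDictionary` storage are assumptions for illustration; the sample's actual `GroupHistoryStore` may differ:

```csharp
public class GroupHistoryStore
{
    // One message list per group; groups are keyed by name.
    private readonly ConcurrentDictionary<string, List<ChatMessage>> _history = new();

    // Record a user's message and return a snapshot of the group's
    // history to include in the next Completions API request.
    public IReadOnlyList<ChatMessage> AddUserMessage(string groupName, string userName, string message)
    {
        var messages = _history.GetOrAdd(groupName, _ => new List<ChatMessage>());
        lock (messages)
        {
            messages.Add(ChatMessage.CreateUserMessage($"{userName}: {message}"));
            return messages.ToList();
        }
    }

    // Record the assistant's completed response so later requests keep context.
    public void AddAssistantMessage(string groupName, string message)
    {
        var messages = _history.GetOrAdd(groupName, _ => new List<ChatMessage>());
        lock (messages)
        {
            messages.Add(ChatMessage.CreateAssistantMessage(message));
        }
    }
}
```

Because every Completions request is stateless, the hub would pass the snapshot returned by `AddUserMessage` as the message list for each streaming call.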
@@ -141,6 +142,6 @@ if (totalCompletion.Length - lastSentTokenLength > 20)
 
 This project opens up exciting possibilities for further enhancement:
 
 1. **Advanced AI features**: Leverage other OpenAI capabilities like sentiment analysis, translation, or summarization.
-1. **Incorporating multiple AI agents**: You can introduce multiple AI agents with distinct roles or expertise areas within the same chat. For example, one agent might focus on text generation while the other provides image or audio generation. This can create a richer and more dynamic user experience where different AI agents interact seamlessly with users and each other.
-1. **Share chat history between server instances**: Implement a database layer to persist chat history across sessions, allowing conversations to resume even after a disconnect. Beyond SQL or NO SQL based solutions, you can also explore using a caching service like Redis. It can significantly improve performance by storing frequently accessed data, such as chat history or AI responses, in memory. This reduces latency and offloads database operations, leading to faster response times, particularly in high-traffic scenarios.
+1. **Incorporating multiple AI agents**: You can introduce multiple AI agents with distinct roles or expertise areas within the same chat. For example, one agent might focus on text generation while the other provides image or audio generation. This approach creates a richer and more dynamic user experience where different AI agents interact seamlessly with users and each other.
+1. **Share chat history between server instances**: Implement a database layer to persist chat history across sessions, allowing conversations to resume even after a disconnect. Beyond SQL or NoSQL-based solutions, you can also explore using a caching service like Redis. It can significantly improve performance by storing frequently accessed data, such as chat history or AI responses, in memory. This reduces latency and offloads database operations, leading to faster response times, particularly in high-traffic scenarios.
 1. **Leveraging Azure SignalR Service**: [Azure SignalR Service](/azure/azure-signalr/signalr-overview) provides scalable and reliable real-time messaging for your application. By offloading the SignalR backplane to Azure, you can scale out the chat application easily to support thousands of concurrent users across multiple servers. Azure SignalR also simplifies management and provides built-in features like automatic reconnections.