`0_Azure/3_AzureAI/AIFoundry/demos/2_AIFoundrySDLCStrategy/README.md` (10 additions, 9 deletions)
@@ -13,15 +13,6 @@ Last updated: 2025-01-02
 > [!NOTE]
 > If you require additional information on the cloud and the SDLC process, please visit this [repository](https://github.com/brown9804/CloudDevOps_LPath?tab=readme-ov-file#cloud-devops---learning-path). It contains content not only on the SDLC but also on DevOps practices.
 
-> [!IMPORTANT]
-> This overview provides an example of how to create an infrastructure that enables efficient and secure delivery of AI models into different solutions. By setting up AI Foundry with RBAC, using Azure API Management, and implementing monitoring and analytics, you can ensure your AI models are accessible, manageable, and perform well across different environments. Adjust the infrastructure, networking, and other configurations as required. <br/>
-> 1. **Set Up Resource Group and AI Foundry Hub**: Create a centralized hub for managing your AI resources. <br/>
-> 2. **Create Projects for Different Environments**: Organize your AI models into development, testing, and production projects. <br/>
-> 3. **Implement RBAC**: Assign roles to users based on their profile to manage access and permissions. <br/>
-> 4. **Expose Models as APIs**: Use Azure API Management to make your AI models accessible via APIs. <br/>
-> 5. **Provide Documentation and Support**: Ensure users have the necessary resources and support to integrate the APIs. <br/>
-> 6. **Implement Monitoring and Analytics**: Track the performance and usage of your AI models to ensure they meet your needs.
-
 <details>
 <summary><b>List of References</b> (Click to expand)</summary>
@@ -36,6 +27,7 @@ Last updated: 2025-01-02
 - [GenAIOps with prompt flow and Azure DevOps](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/how-to-end-to-end-azure-devops-with-prompt-flow?view=azureml-api-2)
 - [Integrate prompt flow with DevOps for LLM-based applications](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/how-to-integrate-with-llm-app-devops?view=azureml-api-2&tabs=cli)
 - [GenAIOps with prompt flow and GitHub](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/how-to-end-to-end-llmops-with-prompt-flow?view=azureml-api-2)
+- [What is Azure AI Foundry?](https://learn.microsoft.com/en-us/azure/ai-foundry/what-is-azure-ai-foundry#which-type-of-project-do-i-need) - Which type of project do I need? Hub or project?
 
 </details>
@@ -54,6 +46,15 @@ Last updated: 2025-01-02
 
 </details>
 
+> [!IMPORTANT]
+> This overview provides an example of how to create an infrastructure that enables efficient and secure delivery of AI models into different solutions. By setting up AI Foundry with RBAC, using Azure API Management, and implementing monitoring and analytics, you can ensure your AI models are accessible, manageable, and perform well across different environments. Adjust the infrastructure, networking, and other configurations as required. <br/>
+> 1. **Set Up Resource Group and AI Foundry Hub**: Create a centralized hub for managing your AI resources. <br/>
+> 2. **Create Projects for Different Environments**: Organize your AI models into development, testing, and production projects. <br/>
+> 3. **Implement RBAC**: Assign roles to users based on their profile to manage access and permissions. <br/>
+> 4. **Expose Models as APIs**: Use Azure API Management to make your AI models accessible via APIs. <br/>
+> 5. **Provide Documentation and Support**: Ensure users have the necessary resources and support to integrate the APIs. <br/>
+> 6. **Implement Monitoring and Analytics**: Track the performance and usage of your AI models to ensure they meet your needs.
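
Steps 1–4 above can be sketched with the Azure CLI. This is a minimal sketch, not a definitive deployment: all resource names, the subscription placeholder, the region, and the chosen role are hypothetical and should be adjusted to your environment, and exact flags may vary by CLI and `ml` extension version.

```shell
# Hypothetical names and placeholders throughout; adjust to your subscription and naming conventions.

# 1. Set up the resource group and AI Foundry hub (hubs are created via the Azure CLI ml extension).
az group create --name rg-aifoundry-demo --location eastus
az ml workspace create --kind hub --name hub-aifoundry-demo --resource-group rg-aifoundry-demo

# 2. Create one project per environment, all attached to the same hub.
HUB_ID="/subscriptions/<sub-id>/resourceGroups/rg-aifoundry-demo/providers/Microsoft.MachineLearningServices/workspaces/hub-aifoundry-demo"
for env in dev test prod; do
  az ml workspace create --kind project --name "proj-aifoundry-$env" \
    --hub-id "$HUB_ID" --resource-group rg-aifoundry-demo
done

# 3. RBAC: scope a user's role to a single project (here, dev only).
az role assignment create --assignee "user@example.com" \
  --role "Azure AI Developer" \
  --scope "/subscriptions/<sub-id>/resourceGroups/rg-aifoundry-demo/providers/Microsoft.MachineLearningServices/workspaces/proj-aifoundry-dev"

# 4. API Management instance to front the deployed model endpoints as managed APIs.
az apim create --name apim-aifoundry-demo --resource-group rg-aifoundry-demo \
  --publisher-name "Contoso AI" --publisher-email "ai-team@example.com" --sku-name Developer
```

Keeping dev, test, and prod as separate projects under one hub lets the hub own shared concerns (connections, networking, billing) while RBAC assignments stay scoped per environment.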