No longer hardcoding models #31
Conversation
@microsoft-github-policy-service agree company="Microsoft"
Pull request overview
This PR makes the Promptions chat demo configurable so it can target different OpenAI/Azure/OpenAI-compatible endpoints and models via environment variables instead of hardcoded model IDs.
Changes:
- Added optional env vars for OpenAI base URL, API version, and model selection.
- Updated `ChatService` to read the model (and endpoint settings) from env vars and use it for both streaming and non-streaming chat calls.
- Extended `.env.example` to document the new configuration knobs (sketched just below this list).
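A minimal sketch of what the `.env.example` additions might look like, inferred from the variable names quoted in the review comments below; the API-key entry predates this PR and its name is an assumption, not confirmed in this thread:

```ini
# Assumed pre-existing key variable (name not shown in this thread)
VITE_OPENAI_API_KEY=sk-...

# New in this PR, all optional: target any OpenAI/Azure/OpenAI-compatible endpoint
VITE_OPENAI_BASE_URL=https://example-resource.openai.azure.com/openai
VITE_OPENAI_API_VERSION=2024-06-01
VITE_OPENAI_MODEL=gpt-4.1
```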
Reviewed changes
Copilot reviewed 2 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| apps/promptions-chat/src/vite-env.d.ts | Adds TypeScript typings for new optional VITE_OPENAI_* env vars (sketched below the table). |
| apps/promptions-chat/src/services/ChatService.ts | Reads model/base URL/API version from env and uses the configured model instead of hardcoded values. |
| apps/promptions-chat/.env.example | Adds example entries for base URL, API version, and model. |
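The `vite-env.d.ts` change isn't quoted in the diff excerpts below; a plausible minimal shape, assuming the standard Vite `ImportMetaEnv` augmentation pattern (only the variable names are confirmed by this thread, the rest is reconstructed):

```ts
/// <reference types="vite/client" />

interface ImportMetaEnv {
  // Optional knobs added by this PR; when absent, ChatService falls back to defaults.
  readonly VITE_OPENAI_BASE_URL?: string;
  readonly VITE_OPENAI_API_VERSION?: string;
  readonly VITE_OPENAI_MODEL?: string;
}

interface ImportMeta {
  readonly env: ImportMetaEnv;
}
```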
In `apps/promptions-chat/src/services/ChatService.ts`:

```ts
this.model = import.meta.env.VITE_OPENAI_MODEL || process.env.OPENAI_MODEL || "gpt-4.1";

this.client = new OpenAI({
  apiKey,
  ...(baseURL ? { baseURL } : {}),
  ...(apiVersion ? { defaultQuery: { "api-version": apiVersion } } : {}),
  ...(baseURL ? { defaultHeaders: { "api-key": apiKey } } : {}),
```
`defaultHeaders: { "api-key": apiKey }` is Azure OpenAI–specific, but it's currently enabled for any custom `baseURL`. For non-Azure OpenAI-compatible endpoints, sending an `api-key` header can be incorrect/unexpected; consider gating the Azure header/query behavior behind a dedicated flag (or only enabling it when an Azure-specific setting like `VITE_OPENAI_API_VERSION` is provided) rather than tying it solely to `baseURL`.
Suggested change:

```diff
+ const isAzureOpenAI = Boolean(apiVersion);
  this.model = import.meta.env.VITE_OPENAI_MODEL || process.env.OPENAI_MODEL || "gpt-4.1";
  this.client = new OpenAI({
    apiKey,
    ...(baseURL ? { baseURL } : {}),
-   ...(apiVersion ? { defaultQuery: { "api-version": apiVersion } } : {}),
-   ...(baseURL ? { defaultHeaders: { "api-key": apiKey } } : {}),
+   ...(isAzureOpenAI ? { defaultQuery: { "api-version": apiVersion } } : {}),
+   ...(isAzureOpenAI ? { defaultHeaders: { "api-key": apiKey } } : {}),
```
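Worth noting as an alternative (not something this PR or the suggestion does): the `openai` npm package also ships a dedicated `AzureOpenAI` client that applies the `api-key` header and `api-version` query itself, which would remove the manual `defaultHeaders`/`defaultQuery` plumbing entirely. A rough sketch assuming openai v4+; treat the wiring as illustrative rather than a drop-in for this repo:

```ts
import OpenAI, { AzureOpenAI } from "openai";

// Sketch: pick the Azure client only when an API version is configured.
// AzureOpenAI handles the "api-key" header and "api-version" query itself.
// (A browser demo would also need dangerouslyAllowBrowser: true, omitted here.)
function makeClient(apiKey: string, baseURL?: string, apiVersion?: string): OpenAI {
  if (apiVersion && baseURL) {
    return new AzureOpenAI({ apiKey, endpoint: baseURL, apiVersion });
  }
  return new OpenAI({ apiKey, ...(baseURL ? { baseURL } : {}) });
}
```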
```ts
const baseURL = import.meta.env.VITE_OPENAI_BASE_URL || process.env.OPENAI_BASE_URL;
const apiVersion = import.meta.env.VITE_OPENAI_API_VERSION || process.env.OPENAI_API_VERSION;
this.model = import.meta.env.VITE_OPENAI_MODEL || process.env.OPENAI_MODEL || "gpt-4.1";
```
The `process.env.*` fallbacks here will never resolve in this Vite app because `vite.config.ts` defines `process.env` as an empty object at build time. This makes the `OPENAI_*` fallbacks dead code and can mislead users into thinking Node-style env vars are supported; consider removing the `process.env` branches (or changing the Vite define strategy) and adjusting the error/help text accordingly.
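For context, the define in question is the common Vite shim, something like this (a reconstruction, not the repo's actual `vite.config.ts`):

```ts
// vite.config.ts (reconstructed sketch, not the repo's exact file)
import { defineConfig } from "vite";

export default defineConfig({
  define: {
    // Statically replaces `process.env` with `{}` in the bundle, so
    // `process.env.OPENAI_MODEL` compiles to `({}).OPENAI_MODEL`,
    // which is always undefined; the fallbacks can never fire.
    "process.env": {},
  },
});
```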
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Jack Williams <4489219+jack-williams@users.noreply.github.com>
Closing in favour of #37
Summary
Made it easier to connect your own model to the Promptions chat demo.