Use AzureOpenAI client and clean up env config in chat & image apps #36
Closed
jack-williams wants to merge 7 commits into main from
Conversation
Co-authored-by: Copilot <copilot@github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> Signed-off-by: Jack Williams <4489219+jack-williams@users.noreply.github.com>
Replace manual headers/defaultQuery hacks on the base OpenAI client with the dedicated AzureOpenAI client when a baseURL is configured. This avoids sending the API key twice (once via apiKey -> Authorization, once via defaultHeaders.api-key) and uses the SDK's supported Azure auth path. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
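A minimal sketch of the selection logic this commit describes (the helper name, return shape, and API-version default here are illustrative, not taken from the repo; with the real SDK the Azure branch would feed `new AzureOpenAI({...})` and the other branch `new OpenAI({...})`):

```typescript
// Hypothetical helper sketching the client switch; the apiVersion default
// is a placeholder, not a value from the PR.
type ClientChoice =
  | { kind: "azure"; endpoint: string; apiKey: string; apiVersion: string }
  | { kind: "openai"; apiKey: string };

function chooseClient(
  apiKey: string,
  baseURL?: string,
  apiVersion: string = "2024-10-21"
): ClientChoice {
  if (baseURL) {
    // The AzureOpenAI client sends the key once, as an `api-key` header,
    // and appends `api-version` as a query parameter.
    return { kind: "azure", endpoint: baseURL, apiKey, apiVersion };
  }
  // No base URL configured: plain OpenAI client, `Authorization: Bearer` auth.
  return { kind: "openai", apiKey };
}
```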
Vite's vite.config.ts defines process.env as {}, so process.env.OPENAI_* references were inlined as undefined at build time and never did anything. Drop the dead fallbacks and read configuration only from import.meta.env.VITE_OPENAI_*, which is the supported way to surface env vars to a Vite client bundle.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
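The dead fallback can be illustrated in isolation (a sketch; the object literal below stands in for what Vite inlines in place of `process.env`):

```typescript
// vite.config.ts defines `process.env` as `{}`, so after build-time inlining
// the fallback expression reduces to `({}).OPENAI_API_KEY` -- always undefined.
const inlinedProcessEnv: Record<string, string | undefined> = {};
const deadFallback = inlinedProcessEnv.OPENAI_API_KEY; // always undefined

// The working path reads only the Vite-exposed variable, e.g.:
//   const apiKey = import.meta.env.VITE_OPENAI_API_KEY;
```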
Comment out optional VITE_OPENAI_BASE_URL / _API_VERSION / _MODEL placeholders in .env.example so a copied file does not silently activate the Azure OpenAI client with a bogus endpoint, and align the example model with the code default (gpt-4.1). Add an Optional configuration section to the README documenting the new env vars used by the chat app. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
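A sketch of what the commented-out layout could look like (variable names and the `gpt-4.1` default are from the PR; the endpoint and API-version values are illustrative):

```shell
# Required
VITE_OPENAI_API_KEY=your_openai_api_key_here

# Optional: only uncomment if you target an Azure / OpenAI-compatible endpoint.
# Left commented so a verbatim copy does not activate the Azure path.
# VITE_OPENAI_BASE_URL=https://your-resource.openai.azure.com
# VITE_OPENAI_API_VERSION=2024-10-21
# VITE_OPENAI_MODEL=gpt-4.1
```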
Apply the same fixes to apps/promptions-image:
- Drop the dead process.env.OPENAI_API_KEY fallback (Vite defines process.env as {} so it never resolved).
- Read optional VITE_OPENAI_BASE_URL / _API_VERSION / _MODEL from import.meta.env, with the same AzureOpenAI client switch when a base URL is configured.
- Use the configured chat model in streamChat instead of the hardcoded gpt-4.1.
- Mirror the .env.example layout (commented-out optional vars) so copying it does not silently activate the Azure path with a placeholder URL.
- Update vite-env.d.ts with the new optional types.
Generalize the README Optional configuration section to cover both apps.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
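The model-selection change in the list above can be sketched as a small helper (the helper name is hypothetical; `gpt-4.1` is the code default named in the PR):

```typescript
// Hypothetical helper: prefer the configured model, fall back to the code
// default that was previously hardcoded inside streamChat.
const DEFAULT_CHAT_MODEL = "gpt-4.1";

function resolveChatModel(env: { VITE_OPENAI_MODEL?: string }): string {
  return env.VITE_OPENAI_MODEL ?? DEFAULT_CHAT_MODEL;
}
```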
…client

# Conflicts:
#   apps/promptions-chat/.env.example
#   apps/promptions-chat/src/services/ChatService.ts
#   apps/promptions-image/.env.example
#   apps/promptions-image/src/services/ImageService.ts
Fixes a few issues in how the chat & image apps configure their OpenAI client.

## Changes

### Use the dedicated Azure OpenAI client when a base URL is set

Previously the code passed both `apiKey` and a manual `defaultHeaders: { "api-key": apiKey }` to the OpenAI SDK whenever a base URL was set, which sent the key twice on Azure requests (once as `Authorization: Bearer` and once as `api-key`). That's redundant on Azure and breaks non-Azure OpenAI-compatible endpoints. The PR switches to the SDK's `AzureOpenAI` class when `VITE_OPENAI_BASE_URL` is configured, which uses the correct `api-key` auth header and `api-version` query param. Otherwise it falls back to the standard `OpenAI` client.

### Drop the dead `process.env.OPENAI_*` fallbacks

`vite.config.ts` defines `process.env` as `{}`, so every `process.env.OPENAI_*` reference in `ChatService`/`ImageService` was being inlined as `undefined` at build time and contributed nothing. The fallbacks looked plausible (a user might think setting `OPENAI_BASE_URL` would work) but silently did nothing. Configuration is now read only from `import.meta.env.VITE_OPENAI_*`, the supported way to surface env vars to a Vite client bundle.

### Tidy `.env.example` for both apps

Optional `VITE_OPENAI_BASE_URL` / `VITE_OPENAI_API_VERSION` / `VITE_OPENAI_MODEL` lines are commented out instead of having placeholder values. Previously, copying `.env.example` to `.env` verbatim would pass a placeholder string like `your_openai_base_url_here` as a truthy `baseURL`, activating the Azure path with a bogus endpoint.

### Apply the same fixes to the image app
`apps/promptions-image/src/services/ImageService.ts` had the same dead `process.env` fallback. It now uses the same `AzureOpenAI` switch and reads model/base URL/api-version from `import.meta.env`. `streamChat` uses the configured chat model instead of the hardcoded `gpt-4.1`.

### Document the optional config in the README
A new "Optional configuration" section covers `VITE_OPENAI_MODEL`, `VITE_OPENAI_BASE_URL`, and `VITE_OPENAI_API_VERSION` for both apps.

## Background
This started as feedback on #31 (now superseded by #23 on main). Co-authoring with @based-jace since they did the original groundwork in #31.
## Verification

- `yarn typecheck` passes
- `yarn build` passes
- `yarn prettier:check` passes