Problem
When connecting a provider like Azure or Together AI as a custom provider, the "Fetch models" button returns the entire model catalog (100+ models) rather than just the models the user has access to or deployed.
This causes two problems:
- Hard crash against the 50-model validation limit: the backend `ArrayMaxSize(50)` validation rejects the request with the raw error message "models must contain no more than 50 elements", which is unclear and confusing for users who just clicked a button.
- Poor UX even below the limit: even if we raised the cap, a form with 50+ model rows that the user must delete one by one to keep only the 3-5 they actually want is not a good experience.
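As a stopgap for the first problem, the raw validation string could be rewritten into something actionable before it reaches the user. This is only an illustrative sketch; the function name and the place it would be called from are hypothetical, and only the error text and the 50-model cap come from the issue:

```typescript
// Rewrite the raw class-validator message into a user-friendly one.
// The input string and the numeric cap are taken from the issue;
// this helper and its call site are hypothetical.
function friendlyModelLimitError(raw: string): string {
  const match = raw.match(/must contain no more than (\d+) elements/);
  if (!match) return raw; // pass through unrelated errors unchanged
  const cap = match[1];
  return (
    `This provider returned more models than the ${cap}-model limit. ` +
    `Please remove some models before saving.`
  );
}
```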
Affected providers (non-exhaustive)
- Azure OpenAI: returns full catalog, not just user deployments
- Together AI: 100+ models
- Fireworks: 50+ models
Providers where it works well (small model lists)
- DeepSeek (~5), Groq (~15), LM Studio (only loaded models), Ollama (only pulled models)
Suggested improvements
- Truncate the probe result to 50 and show a clear message: "Found 130 models, showing the first 50. Remove the ones you don't need."
- Or better: let the user search and pick models from the fetched list instead of dumping them all into the form.
- Improve the error message when the limit is hit. The current raw validation error is not user-friendly.
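The first suggestion could be sketched as a small client-side helper that truncates the probe result before populating the form and surfaces a clear notice. The helper name, the `ProbeResult` shape, and the assumption that the cap is 50 are all illustrative, not the actual frontend API:

```typescript
const MODEL_CAP = 50; // assumed to match the backend ArrayMaxSize limit

interface ProbeResult {
  models: string[];
  notice?: string; // shown to the user when models were dropped
}

// Truncate a fetched model list to the cap and build a user-facing
// notice explaining how many models were found vs. shown.
function capProbeResult(fetched: string[], cap: number = MODEL_CAP): ProbeResult {
  if (fetched.length <= cap) {
    return { models: fetched };
  }
  return {
    models: fetched.slice(0, cap),
    notice: `Found ${fetched.length} models, showing the first ${cap}. Remove the ones you don't need.`,
  };
}
```

A searchable picker (the second suggestion) would still be the better long-term fix, since it avoids dumping rows into the form at all.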