# Azure OpenAI Responses API Test Workflow

This GitHub Action tests the `responses-basic-aoai-v1.py` script to ensure it returns a valid response from Azure OpenAI.

## How to Run

1. Go to the **Actions** tab in the GitHub repository
2. Select the **"Test Azure OpenAI Responses API"** workflow
3. Click the **"Run workflow"** button
4. Click **"Run workflow"** again to confirm

## Required Environment Secrets

The workflow uses the following secrets from the `responses` environment:

- `AZURE_OPENAI_API_KEY` - Your Azure OpenAI API key
- `AZURE_OPENAI_V1_API_ENDPOINT` - Your Azure OpenAI v1 API endpoint (e.g., `https://your-resource.openai.azure.com/openai/v1/`)
- `AZURE_OPENAI_API_MODEL` - The model name to use (e.g., `gpt-4o`)

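A preflight check along these lines can confirm the secrets reached the job before the script runs. This is a minimal sketch, not the workflow's actual implementation; the variable names are taken from the list above:

```python
import os

# Names of the environment variables the workflow is expected to provide
# (taken from the secrets list above).
REQUIRED_VARS = [
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_V1_API_ENDPOINT",
    "AZURE_OPENAI_API_MODEL",
]

def missing_vars(env: dict) -> list:
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_vars(dict(os.environ))
    if missing:
        print(f"Missing required secrets: {', '.join(missing)}")
```

Running a check like this as an early workflow step makes a missing secret fail fast with a clear message instead of surfacing later as an authentication error.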
## Test Results

The workflow generates test artifacts that include:

### JSON Results (`test-results.json`)
```json
{
  "test_last_run_date": "2025-07-20T23:01:29Z",
  "output": "Why don't scientists trust atoms? Because they make up everything!",
  "pass_fail": "PASS",
  "error_code": ""
}
```

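Downstream tooling can consume `test-results.json` directly. The sketch below parses the fields shown above into a one-line status (a hypothetical consumer, not part of the workflow itself):

```python
import json

# Sample payload matching the test-results.json schema shown above.
sample = """{
  "test_last_run_date": "2025-07-20T23:01:29Z",
  "output": "Why don't scientists trust atoms? Because they make up everything!",
  "pass_fail": "PASS",
  "error_code": ""
}"""

def summarize(raw: str) -> str:
    """Return a one-line status string from a test-results.json payload."""
    results = json.loads(raw)
    return f"{results['pass_fail']} at {results['test_last_run_date']}"

print(summarize(sample))  # PASS at 2025-07-20T23:01:29Z
```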
### Human-Readable Summary (`test-summary.txt`)
```
Azure OpenAI Responses API Test Results
========================================
Test Run Date: 2025-07-20T23:01:29Z
Result: PASS
Error Code:

Output:
Why don't scientists trust atoms? Because they make up everything!
```

## Test Criteria

The test passes if:
- The Python script executes without errors
- The script produces output
- The output contains valid string content (not empty, no error indicators)

The test fails if:
- Environment variables are missing
- The script fails to execute
- No output is produced
- Output contains error indicators (error, exception, traceback, failed, none, null)

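The output check described above can be sketched as follows. This is an assumed reconstruction of the logic, not the workflow's exact code:

```python
# Substrings treated as error indicators, per the failure criteria above.
ERROR_INDICATORS = ("error", "exception", "traceback", "failed", "none", "null")

def evaluate_output(output: str) -> str:
    """Return "PASS" if the output is non-empty and free of error indicators."""
    text = output.strip()
    if not text:
        return "FAIL"  # no output produced
    lowered = text.lower()
    if any(indicator in lowered for indicator in ERROR_INDICATORS):
        return "FAIL"  # output contains an error indicator
    return "PASS"
```

Note that a substring check like this can produce false failures if a legitimate response happens to contain a word such as "none", which is one reason to keep the indicator list short.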
## Artifacts

Test results are uploaded as artifacts with:
- **Name**: `azure-openai-test-results`
- **Retention**: 30 days
- **Contents**: Both JSON and human-readable results

## Troubleshooting

If the workflow fails:
1. Check that all required environment secrets are set in the `responses` environment
2. Verify that the Azure OpenAI service is accessible
3. Review the workflow logs for specific error messages
4. Check the uploaded artifacts for detailed test results