Commit 12679e9

Merge pull request #5 from guygregory/copilot/fix-3-2
Create GitHub Action to test Azure OpenAI Responses API script
2 parents 4cde6a8 + d25a3d2 commit 12679e9

3 files changed: 222 additions & 101 deletions

.github/workflows/README.md

Lines changed: 72 additions & 0 deletions
# Azure OpenAI Responses API Test Workflow

This GitHub Action tests the `responses-basic-aoai-v1.py` script to ensure it returns a valid response from Azure OpenAI.
## How to Run

1. Go to the **Actions** tab in the GitHub repository
2. Select the **"Test Azure OpenAI Responses API"** workflow
3. Click the **"Run workflow"** button
4. Click **"Run workflow"** to confirm
## Required Environment Secrets

The workflow uses the following secrets from the `responses` environment:

- `AZURE_OPENAI_API_KEY` - Your Azure OpenAI API key
- `AZURE_OPENAI_V1_API_ENDPOINT` - Your Azure OpenAI v1 API endpoint (e.g., `https://your-resource.openai.azure.com/openai/v1/`)
- `AZURE_OPENAI_API_MODEL` - The model name to use (e.g., `gpt-4o`)
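Before wiring the secrets up, the same precondition the workflow enforces can be checked locally. A minimal Python sketch (the `missing_settings` helper and the stubbed values are illustrative, not part of the repository):

```python
import os

# Secret names the workflow reads from the 'responses' environment.
REQUIRED = (
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_V1_API_ENDPOINT",
    "AZURE_OPENAI_API_MODEL",
)

def missing_settings(env=None):
    """Return the required settings that are absent or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]

# Stubbed environment with one secret left out:
stub = {"AZURE_OPENAI_API_KEY": "...", "AZURE_OPENAI_API_MODEL": "gpt-4o"}
print(missing_settings(stub))  # ['AZURE_OPENAI_V1_API_ENDPOINT']
```

Checking `os.environ` the same way before a local run reproduces the workflow's fail-fast behaviour.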
## Test Results

The workflow generates test artifacts that include:

### JSON Results (`test-results.json`)

```json
{
  "test_last_run_date": "2025-07-20T23:01:29Z",
  "output": "Why don't scientists trust atoms? Because they make up everything!",
  "pass_fail": "PASS",
  "error_code": ""
}
```
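Downstream tooling can consume this artifact with any JSON parser. A short Python check against the sample payload above (the artifact itself would be downloaded from the workflow run; the inline string here stands in for the file):

```python
import json

# Stand-in for the downloaded test-results.json, matching the sample above.
raw = """{
  "test_last_run_date": "2025-07-20T23:01:29Z",
  "output": "Why don't scientists trust atoms? Because they make up everything!",
  "pass_fail": "PASS",
  "error_code": ""
}"""

results = json.loads(raw)
# The artifact always carries exactly these four fields.
assert set(results) == {"test_last_run_date", "output", "pass_fail", "error_code"}
print(results["pass_fail"], results["test_last_run_date"])  # PASS 2025-07-20T23:01:29Z
```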
### Human-Readable Summary (`test-summary.txt`)

```
Azure OpenAI Responses API Test Results
========================================
Test Run Date: 2025-07-20T23:01:29Z
Result: PASS
Error Code:

Output:
Why don't scientists trust atoms? Because they make up everything!
```
## Test Criteria

The test passes if:

- The Python script executes without errors
- The script produces output
- The output contains valid string content (not empty, no error indicators)

The test fails if:

- Environment variables are missing
- The script fails to execute
- No output is produced
- Output contains error indicators (error, exception, traceback, failed, none, null)
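The validity rule above is a case-insensitive substring scan. A Python sketch of the equivalent check (`output_is_valid` is a hypothetical helper mirroring the workflow's `grep`):

```python
# Error indicators the workflow greps for (case-insensitive).
ERROR_INDICATORS = ("error", "exception", "traceback", "failed", "none", "null")

def output_is_valid(text: str) -> bool:
    """Non-empty output containing no error-indicator substring passes."""
    stripped = text.strip()
    if not stripped:
        return False
    lowered = stripped.lower()
    return not any(word in lowered for word in ERROR_INDICATORS)

print(output_is_valid("Why don't scientists trust atoms?"))   # True
print(output_is_valid("Traceback (most recent call last):"))  # False
print(output_is_valid(""))                                    # False
```

Because the match is on substrings, a legitimate response that happens to contain a word like "none" would also be flagged; the check trades precision for simplicity.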
## Artifacts

Test results are uploaded as artifacts with:

- **Name**: `azure-openai-test-results`
- **Retention**: 30 days
- **Contents**: Both JSON and human-readable results
## Troubleshooting

If the workflow fails:

1. Check that all required environment secrets are set in the `responses` environment
2. Verify that the Azure OpenAI service is accessible
3. Review the workflow logs for specific error messages
4. Check the uploaded artifacts for detailed test results
Lines changed: 130 additions & 101 deletions
```diff
@@ -1,3 +1,4 @@
+---
 name: Test Azure OpenAI Responses API
 
 on:
@@ -7,111 +8,139 @@ jobs:
   test-responses-api:
     runs-on: ubuntu-latest
     environment: responses # Use the 'responses' environment for secrets
-
+
     steps:
-    - name: Checkout repository
-      uses: actions/checkout@v4
-
-    - name: Set up Python
-      uses: actions/setup-python@v4
-      with:
-        python-version: '3.11'
-
-    - name: Install dependencies
-      run: |
-        python -m pip install --upgrade pip
-        pip install -r requirements.txt
-
-    - name: Test Azure OpenAI Responses API
-      env:
-        AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
-        AZURE_OPENAI_V1_API_ENDPOINT: ${{ secrets.AZURE_OPENAI_V1_API_ENDPOINT }}
-        AZURE_OPENAI_API_MODEL: ${{ secrets.AZURE_OPENAI_API_MODEL }}
-      run: |
-        echo "Testing responses-basic-aoai-v1.py script..."
-
-        # Create test results directory
-        mkdir -p test-results
-
-        # Get current timestamp
-        timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
-        echo "Test run timestamp: $timestamp"
-
-        # Run the script and capture output
-        python responses-basic-aoai-v1.py > output.txt 2>&1
-        exit_code=$?
-
-        # Initialize test result variables
-        pass_fail="PASS"
-        error_code=""
-        output_text=""
-
-        # Check if script executed successfully
-        if [ $exit_code -eq 0 ]; then
-          echo "✅ Script executed successfully"
-
-          # Check if output was generated and capture it
-          if [ -s output.txt ]; then
-            output_text=$(cat output.txt)
-            echo "✅ Script produced output:"
-            echo "$output_text"
-
-            # Test whether response.output_text contains a valid string
-            # Valid means: non-empty, no error indicators, and actual content
-            if [ -n "$output_text" ] && ! echo "$output_text" | grep -qi "error\|exception\|traceback\|failed\|none\|null"; then
-              echo "✅ Output contains valid string content"
-              pass_fail="PASS"
+      - name: Checkout repository
+        uses: actions/checkout@v4
+
+      - name: Set up Python
+        uses: actions/setup-python@v5
+        with:
+          python-version: '3.11'
+
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip
+          pip install -r requirements.txt
+
+      - name: Test Azure OpenAI Responses API
+        env:
+          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
+          AZURE_OPENAI_V1_API_ENDPOINT: ${{ secrets.AZURE_OPENAI_V1_API_ENDPOINT }}
+          AZURE_OPENAI_API_MODEL: ${{ secrets.AZURE_OPENAI_API_MODEL }}
+        run: |
+          echo "Testing responses-basic-aoai-v1.py script..."
+
+          # Verify required environment variables are set
+          if [ -z "$AZURE_OPENAI_API_KEY" ] || [ -z "$AZURE_OPENAI_V1_API_ENDPOINT" ] || [ -z "$AZURE_OPENAI_API_MODEL" ]; then
+            echo "❌ Error: Required environment variables are not set"
+            echo "AZURE_OPENAI_API_KEY: ${AZURE_OPENAI_API_KEY:+set}"
+            echo "AZURE_OPENAI_V1_API_ENDPOINT: ${AZURE_OPENAI_V1_API_ENDPOINT:+set}"
+            echo "AZURE_OPENAI_API_MODEL: ${AZURE_OPENAI_API_MODEL:+set}"
+            exit 1
+          fi
+
+          # Verify jq is available
+          if ! command -v jq > /dev/null; then
+            echo "❌ Error: jq is not available"
+            exit 1
+          fi
+
+          echo "✅ Environment check passed"
+
+          # Create test results directory
+          mkdir -p test-results
+
+          # Get current timestamp
+          timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
+          echo "Test run timestamp: $timestamp"
+
+          # Run the script and capture output
+          echo "Running Python script..."
+          python responses-basic-aoai-v1.py > output.txt 2>&1
+          exit_code=$?
+
+          # Initialize test result variables
+          pass_fail="PASS"
+          error_code=""
+          output_text=""
+
+          # Check if script executed successfully
+          if [ $exit_code -eq 0 ]; then
+            echo "✅ Script executed successfully"
+
+            # Check if output was generated and capture it
+            if [ -s output.txt ]; then
+              output_text=$(cat output.txt)
+              echo "✅ Script produced output:"
+              echo "$output_text"
+
+              # Test whether response.output_text contains a valid string
+              # Valid means: non-empty, no error indicators, and actual content
+              if [ -n "$output_text" ] && ! echo "$output_text" | grep -qi "error\|exception\|traceback\|failed\|none\|null"; then
+                echo "✅ Output contains valid string content"
+                pass_fail="PASS"
+              else
+                echo "❌ Output does not contain valid string content"
+                pass_fail="FAIL"
+                error_code="INVALID_OUTPUT"
+              fi
             else
-              echo "❌ Output does not contain valid string content"
+              echo "❌ Script produced no output"
               pass_fail="FAIL"
-              error_code="INVALID_OUTPUT"
+              error_code="NO_OUTPUT"
             fi
           else
-            echo "❌ Script produced no output"
+            echo "❌ Script failed with exit code: $exit_code"
+            echo "Error output:"
+            cat output.txt
+            output_text=$(cat output.txt)
             pass_fail="FAIL"
-            error_code="NO_OUTPUT"
+            error_code="SCRIPT_ERROR_$exit_code"
+          fi
+
+          # Create test results JSON (using jq for proper JSON formatting)
+          jq -n \
+            --arg timestamp "$timestamp" \
+            --arg output "$output_text" \
+            --arg pass_fail "$pass_fail" \
+            --arg error_code "$error_code" \
+            '{
+              test_last_run_date: $timestamp,
+              output: $output,
+              pass_fail: $pass_fail,
+              error_code: $error_code
+            }' > test-results/test-results.json
+
+          # Also create a human-readable summary
+          echo "Azure OpenAI Responses API Test Results" > test-results/test-summary.txt
+          echo "========================================" >> test-results/test-summary.txt
+          echo "Test Run Date: $timestamp" >> test-results/test-summary.txt
+          echo "Result: $pass_fail" >> test-results/test-summary.txt
+          echo "Error Code: $error_code" >> test-results/test-summary.txt
+          echo "" >> test-results/test-summary.txt
+          echo "Output:" >> test-results/test-summary.txt
+          echo "$output_text" >> test-results/test-summary.txt
+
+          # Display final results
+          echo "=== Test Results ==="
+          echo "Timestamp: $timestamp"
+          echo "Pass/Fail: $pass_fail"
+          echo "Error Code: $error_code"
+          echo "Output: $output_text"
+
+          # Exit with error if test failed
+          if [ "$pass_fail" = "FAIL" ]; then
+            echo "❌ Test failed"
+            exit 1
+          else
+            echo "🎉 Test completed successfully!"
           fi
-        else
-          echo "❌ Script failed with exit code: $exit_code"
-          echo "Error output:"
-          cat output.txt
-          output_text=$(cat output.txt)
-          pass_fail="FAIL"
-          error_code="SCRIPT_ERROR_$exit_code"
-        fi
-
-        # Create test results JSON (using jq for proper JSON formatting)
-        jq -n \
-          --arg timestamp "$timestamp" \
-          --arg output "$output_text" \
-          --arg pass_fail "$pass_fail" \
-          --arg error_code "$error_code" \
-          '{
-            test_last_run_date: $timestamp,
-            output: $output,
-            pass_fail: $pass_fail,
-            error_code: $error_code
-          }' > test-results/test-results.json
-
-        # Display final results
-        echo "=== Test Results ==="
-        echo "Timestamp: $timestamp"
-        echo "Pass/Fail: $pass_fail"
-        echo "Error Code: $error_code"
-        echo "Output: $output_text"
-
-        # Exit with error if test failed
-        if [ "$pass_fail" = "FAIL" ]; then
-          echo "❌ Test failed"
-          exit 1
-        else
-          echo "🎉 Test completed successfully!"
-        fi
-
-    - name: Upload test results artifact
-      uses: actions/upload-artifact@v4
-      if: always() # Upload artifact even if the test fails
-      with:
-        name: azure-openai-test-results
-        path: test-results/
-        retention-days: 30
+
+      - name: Upload test results artifact
+        uses: actions/upload-artifact@v4
+        if: always() # Upload artifact even if the test fails
+        with:
+          name: azure-openai-test-results
+          path: test-results/
+          retention-days: 30
```
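The run step's pass/fail classification can be exercised locally without Azure access. A Python sketch under the assumption that only the exit code and combined output matter (`run_and_classify` is a hypothetical helper, and the stand-in commands replace `responses-basic-aoai-v1.py`):

```python
import subprocess
import sys

# Error indicators the workflow greps for (case-insensitive).
ERROR_INDICATORS = ("error", "exception", "traceback", "failed", "none", "null")

def run_and_classify(cmd):
    """Run a command and classify it the way the workflow's run step does."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    output = (proc.stdout + proc.stderr).strip()
    if proc.returncode != 0:
        return "FAIL", f"SCRIPT_ERROR_{proc.returncode}", output
    if not output:
        return "FAIL", "NO_OUTPUT", output
    if any(word in output.lower() for word in ERROR_INDICATORS):
        return "FAIL", "INVALID_OUTPUT", output
    return "PASS", "", output

# Stand-in commands instead of the real script:
print(run_and_classify([sys.executable, "-c", "print('a joke')"])[:2])      # ('PASS', '')
print(run_and_classify([sys.executable, "-c", "raise SystemExit(3)"])[:2])  # ('FAIL', 'SCRIPT_ERROR_3')
```

The three error codes mirror the workflow's `SCRIPT_ERROR_<n>`, `NO_OUTPUT`, and `INVALID_OUTPUT` branches, checked in the same order.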

.gitignore

Lines changed: 20 additions & 0 deletions
```
# Python cache files
__pycache__/
*.py[cod]
*$py.class

# Environment files
.env

# Virtual environments
venv/
env/
ENV/

# IDE files
.vscode/
.idea/

# OS files
.DS_Store
Thumbs.db
```
