# A2A Python SDK Timeout Reproduction

A minimal reproducible example demonstrating timeout behavior in the A2A Python SDK when handling long-running agent responses.

Client version used: `0.3.22`
**Problem:** The SDK exhibits different timeout behaviors depending on:

- Server type (JSONRPC vs HTTP+JSON)
- Communication mode (streaming vs polling)
- Response timing (delays before/after the first chunk)

This repo helps isolate and reproduce these behaviors for debugging.
## Prerequisites

- Python 3.11 or higher
- uv
## Installation

```sh
uv sync
```

## Running the Server

| Server Type | Command |
|---|---|
| JSONRPC (Starlette) | `uv run server/main.py --port 10001 --type jsonrpc_starlette` |
| JSONRPC (FastAPI) | `uv run server/main.py --port 10001 --type jsonrpc_fastapi` |
| HTTP+JSON | `uv run server/main.py --port 10001 --type http_json` |
## Running the Client

```sh
# Streaming mode (default)
uv run client/main.py --port 10001

# Polling mode
uv run client/main.py --port 10001 --force-polling
```

## Quick Start

Run the most common timeout scenario:
```sh
# Terminal 1 - Start server
uv run server/main.py --port 10001 --type jsonrpc_starlette

# Terminal 2 - Start client
uv run client/main.py --port 10001

# When prompted, enter: long_wait_before_first_response
```

## Test Scenarios

### JSONRPC server, streaming mode

```sh
uv run server/main.py --port 10001 --type jsonrpc_starlette  # or jsonrpc_fastapi
uv run client/main.py --port 10001
```

| Input | Expected behavior |
|---|---|
| `long_wait_before_first_response` | A timeout error should occur after several seconds |
| `long_wait_after_first_response` | The first 2 chunks (task and status update) should be received, then a timeout error should occur after several seconds |
| `fast_response` | Works fine |
### JSONRPC server, polling mode

```sh
uv run server/main.py --port 10001 --type jsonrpc_starlette  # or jsonrpc_fastapi
uv run client/main.py --port 10001 --force-polling
```

| Input | Expected behavior |
|---|---|
| `long_wait_before_first_response` | A timeout error should occur after several seconds |
| `long_wait_after_first_response` | Works fine |
| `fast_response` | Works fine |
### HTTP+JSON server, streaming mode

```sh
uv run server/main.py --port 10001 --type http_json
uv run client/main.py --port 10001
```

| Input | Expected behavior |
|---|---|
| `long_wait_before_first_response` | Works fine |
| `long_wait_after_first_response` | Works fine |
| `fast_response` | Works fine |
### HTTP+JSON server, polling mode

```sh
uv run server/main.py --port 10001 --type http_json
uv run client/main.py --port 10001 --force-polling
```

| Input | Expected behavior |
|---|---|
| `long_wait_before_first_response` | A timeout error should occur after several seconds |
| `long_wait_after_first_response` | Works fine |
| `fast_response` | Works fine |
## Expected vs. Actual Behavior

**Expected:** All test inputs should complete successfully regardless of server type or communication mode.

**Actual:**

- JSONRPC servers experience timeouts with long waits
- HTTP+JSON servers handle all scenarios correctly
- Polling mode only times out on initial response delays
- Streaming mode times out on any significant delay
## Test Inputs

- `long_wait_before_first_response`: 10s delay before any response
- `long_wait_after_first_response`: 2 quick chunks, then a 10s delay
- `fast_response`: all chunks sent immediately
## Architecture

The demo consists of three main components:

- `server/agent_executor.py`: simulates different response timing patterns
- `server/main.py`: configures server types (JSONRPC Starlette/FastAPI, HTTP+JSON)
- `client/main.py`: client with streaming and polling modes
The client enumerates valid test inputs when started, allowing you to reproduce specific timeout scenarios.