benedict-khoo-sap/a2a-python-long-running-response-demo

A2A SDK Long-running Response Demo

A minimal reproducible example demonstrating timeout behavior in the A2A Python SDK when handling long-running agent responses.

Client version used: 0.3.22

Problem: The SDK exhibits different timeout behaviors depending on:

  • Server type (JSONRPC vs HTTP+JSON)
  • Communication mode (streaming vs polling)
  • Response timing (delays before/after first chunk)

This repo helps isolate and reproduce these behaviors for debugging.


Prerequisites

  • Python 3.11 or higher
  • uv

Setup

Install Dependencies

uv sync

Available Server Types

| Server Type | Command |
|---|---|
| JSONRPC (Starlette) | `uv run server/main.py --port 10001 --type jsonrpc_starlette` |
| JSONRPC (FastAPI) | `uv run server/main.py --port 10001 --type jsonrpc_fastapi` |
| HTTP+JSON | `uv run server/main.py --port 10001 --type http_json` |

Client Modes

# Streaming mode (default)
uv run client/main.py --port 10001

# Polling mode
uv run client/main.py --port 10001 --force-polling

Quick Start

Run the most common timeout scenario:

# Terminal 1 - Start server
uv run server/main.py --port 10001 --type jsonrpc_starlette

# Terminal 2 - Start client
uv run client/main.py --port 10001

# When prompted, enter: long_wait_before_first_response

Observed Behavior

JSONRPC Streaming - Timeout with Long Response Chunks

Setup

uv run server/main.py --port 10001 --type jsonrpc_starlette # or jsonrpc_fastapi
uv run client/main.py --port 10001

Test Cases

| Input | Expected behavior |
|---|---|
| `long_wait_before_first_response` | a timeout error should occur after several seconds |
| `long_wait_after_first_response` | the first 2 chunks (task and status update) should be received, then a timeout error should occur after several seconds |
| `fast_response` | works fine |

JSONRPC Polling - Timeout with Long First Response

Setup

uv run server/main.py --port 10001 --type jsonrpc_starlette # or jsonrpc_fastapi
uv run client/main.py --port 10001 --force-polling

Test Cases

| Input | Expected behavior |
|---|---|
| `long_wait_before_first_response` | a timeout error should occur after several seconds |
| `long_wait_after_first_response` | works fine |
| `fast_response` | works fine |

HTTP+JSON Streaming - No Timeouts

Setup

uv run server/main.py --port 10001 --type http_json
uv run client/main.py --port 10001

Test Cases

| Input | Expected behavior |
|---|---|
| `long_wait_before_first_response` | works fine |
| `long_wait_after_first_response` | works fine |
| `fast_response` | works fine |

HTTP+JSON Polling - Timeout with Long First Response

Setup

uv run server/main.py --port 10001 --type http_json
uv run client/main.py --port 10001 --force-polling

Test Cases

| Input | Expected behavior |
|---|---|
| `long_wait_before_first_response` | a timeout error should occur after several seconds |
| `long_wait_after_first_response` | works fine |
| `fast_response` | works fine |

Understanding the Results

Expected Behavior

All test inputs should complete successfully regardless of server type or communication mode.

Actual Behavior Summary

  • JSONRPC servers experience timeouts with long waits
  • HTTP+JSON servers handle all scenarios correctly
  • Polling mode only times out on initial response delays
  • Streaming mode times out on any significant delay

Test Input Descriptions

  • long_wait_before_first_response: 10s delay before any response
  • long_wait_after_first_response: 2 quick chunks, then 10s delay
  • fast_response: All chunks sent immediately
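Server-side, the three scenarios reduce to where a long `asyncio.sleep` sits relative to the emitted chunks. A hypothetical sketch of that delay logic (the names mirror the documented inputs, not the repo's actual code):

```python
import asyncio

# (delay_before_first_chunk, delay_after_second_chunk) in seconds,
# mirroring the documented test inputs.
DELAYS = {
    "long_wait_before_first_response": (10.0, 0.0),
    "long_wait_after_first_response": (0.0, 10.0),
    "fast_response": (0.0, 0.0),
}

async def respond(scenario):
    before, after = DELAYS[scenario]
    await asyncio.sleep(before)   # delay before any response
    yield "task"                  # first chunk
    yield "status-update"         # second chunk
    await asyncio.sleep(after)    # delay mid-stream
    yield "final-result"
```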

How It Works

The demo consists of a server with three interchangeable implementations (JSONRPC via Starlette, JSONRPC via FastAPI, and HTTP+JSON) and a client that supports streaming and polling modes. The client enumerates valid test inputs when started, allowing you to reproduce specific timeout scenarios.
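In polling mode, only the initial send waits on the server; each subsequent task poll is a short request that returns the current status immediately, which is consistent with mid-stream delays not tripping the timeout. A minimal, hypothetical polling loop (not the repo's actual client code):

```python
import asyncio

TERMINAL_STATES = {"completed", "failed", "canceled"}

async def poll_until_done(get_task, task_id, interval=0.01):
    """Poll a task until it reaches a terminal state. Each get_task call
    is a short request, so no single request waits on the long response."""
    while True:
        task = await get_task(task_id)
        if task["status"] in TERMINAL_STATES:
            return task
        await asyncio.sleep(interval)

async def demo():
    # Fake task store: reports 'working' twice, then 'completed'.
    states = iter(["working", "working", "completed"])
    async def get_task(task_id):
        return {"id": task_id, "status": next(states)}
    return await poll_until_done(get_task, "task-1")

if __name__ == "__main__":
    print(asyncio.run(demo()))
```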
