Merged
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
```diff
@@ -32,7 +32,7 @@ jobs:

       - name: Install dependencies
        run: |
-          for dir in examples/*/; do
+          for dir in getting_started/completion_config/*/ getting_started/agent_config/*/ features/*/; do
            echo "Installing $dir"
            poetry -C "$dir" install
          done
```
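The updated globs can be exercised outside CI. The sketch below mirrors the workflow's loop against a scratch copy of the repository layout (the three directories created are stand-ins taken from this PR's README tables) and echoes instead of running `poetry install`, so it is safe to run anywhere:

```shell
# Mirror of the workflow's install loop, using the same globs as the
# updated CI step. We build a scratch layout and echo rather than install,
# so this runs without Poetry or the real repository checked out.
tmp=$(mktemp -d)
mkdir -p "$tmp/getting_started/completion_config/openai" \
         "$tmp/getting_started/agent_config/langgraph_agent" \
         "$tmp/features/judge"
cd "$tmp" || exit 1
installed=""
for dir in getting_started/completion_config/*/ getting_started/agent_config/*/ features/*/; do
  echo "Installing $dir"
  installed="$installed $dir"
done
```

In the real workflow the `echo` is followed by `poetry -C "$dir" install`, which runs each example's install in its own directory.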
56 changes: 27 additions & 29 deletions README.md
```diff
@@ -1,38 +1,36 @@
-# LaunchDarkly sample Python application
+# LaunchDarkly AI SDK for Python - Examples

-We've built a simple console application that demonstrates how LaunchDarkly's SDK works.
-
-Below, you'll find the build procedure. For more comprehensive instructions, you can visit your [Quickstart page](https://docs.launchdarkly.com/home/ai-configs/quickstart) or the [Python reference guide](https://docs.launchdarkly.com/sdk/ai/python).
-
-This demo requires Python 3.10 or higher.
-
-## Build Instructions
-
-This repository includes examples for `OpenAI`, `Bedrock`, `Gemini`, `LangChain`, `LangGraph`, `Judge`, and `Observability`. Depending on your preferred provider, you may have to take some additional steps.
+| Package | PyPI | Docs |
+| --- | --- | --- |
+| [launchdarkly-server-sdk-ai](https://github.com/launchdarkly/python-server-sdk-ai/tree/main/packages/sdk/server-ai) | [![PyPI](https://img.shields.io/pypi/v/launchdarkly-server-sdk-ai)](https://pypi.org/project/launchdarkly-server-sdk-ai/) | [Reference](https://docs.launchdarkly.com/sdk/ai/python) |
+| [launchdarkly-server-sdk-ai-openai](https://github.com/launchdarkly/python-server-sdk-ai/tree/main/packages/ai-providers/server-ai-openai) | [![PyPI](https://img.shields.io/pypi/v/launchdarkly-server-sdk-ai-openai)](https://pypi.org/project/launchdarkly-server-sdk-ai-openai/) | [Reference](https://docs.launchdarkly.com/sdk/ai/python) |
+| [launchdarkly-server-sdk-ai-langchain](https://github.com/launchdarkly/python-server-sdk-ai/tree/main/packages/ai-providers/server-ai-langchain) | [![PyPI](https://img.shields.io/pypi/v/launchdarkly-server-sdk-ai-langchain)](https://pypi.org/project/launchdarkly-server-sdk-ai-langchain/) | [Reference](https://docs.launchdarkly.com/sdk/ai/python) |
+| [launchdarkly-observability](https://github.com/launchdarkly/observability-sdk/tree/main/sdk/%40launchdarkly/observability-python) | [![PyPI](https://img.shields.io/pypi/v/launchdarkly-observability)](https://pypi.org/project/launchdarkly-observability/) | [Reference](https://docs.launchdarkly.com/sdk/observability/python) |

-### General setup
+Each example is a self-contained application you can run independently to explore LaunchDarkly's AI APIs hands-on. Pick one that matches your provider or use case, follow the README, and you'll be up and running in minutes.

-1. [Create an AI Config](https://launchdarkly.com/docs/home/ai-configs/create) using the key specified in each example, or copy the key of existing AI Config in your LaunchDarkly project that you want to evaluate.
+For more comprehensive instructions, visit the [Quickstart page](https://docs.launchdarkly.com/home/ai-configs/quickstart) or the [Python reference guide](https://docs.launchdarkly.com/sdk/ai/python).

-1. Ensure you have [Poetry](https://python-poetry.org/) installed.
+## Getting Started

-1. Create a `.env` file in the repository root with at least your LaunchDarkly SDK key:
+These examples show how to integrate LaunchDarkly AI with different providers using `completion_config` and `agent_config`.

-   ```
-   LAUNCHDARKLY_SDK_KEY=your-launchdarkly-sdk-key
-   ```
+| Example | Description |
+| --- | --- |
+| [Bedrock](getting_started/completion_config/bedrock/) | `completion_config` with AWS Bedrock, metrics tracking |
+| [Gemini](getting_started/completion_config/gemini/) | `completion_config` with Google Gemini, metrics tracking |
+| [LangChain](getting_started/completion_config/langchain/) | `completion_config` with LangChain, async metrics tracking |
+| [LangGraph Agent](getting_started/agent_config/langgraph_agent/) | `agent_config` with a single LangGraph ReAct agent, tool calling, metrics tracking |
+| [LangGraph Multi-Agent](getting_started/agent_config/langgraph_multi_agent/) | `agent_config` with multiple LangGraph agents, custom StateGraph workflow, per-node metrics |
+| [OpenAI](getting_started/completion_config/openai/) | `completion_config` with OpenAI, automatic metrics tracking |

-Each example README describes the full set of environment variables needed. The `.env` file is loaded automatically when running any example.
+## Features

-### Examples
+These examples demonstrate LaunchDarkly's managed APIs and standalone capabilities.

-| Example | Description | README |
-| --- | --- | --- |
-| **OpenAI** | Single provider using OpenAI | [examples/openai](examples/openai/README.md) |
-| **Bedrock** | Single provider using AWS Bedrock | [examples/bedrock](examples/bedrock/README.md) |
-| **Gemini** | Single provider using Google Gemini | [examples/gemini](examples/gemini/README.md) |
-| **LangChain** | Multiple providers via LangChain | [examples/langchain](examples/langchain/README.md) |
-| **LangGraph Agent** | Single agent using LangGraph | [examples/langgraph_agent](examples/langgraph_agent/README.md) |
-| **LangGraph Multi-Agent** | Multiple agents using LangGraph | [examples/langgraph_multi_agent](examples/langgraph_multi_agent/README.md) |
-| **Judge** | Judge evaluation of AI responses | [examples/judge](examples/judge/README.md) |
-| **Chat with Observability** | Observability plugin for AI chat monitoring | [examples/chat_observability](examples/chat_observability/README.md) |
+| Example | Description |
+| --- | --- |
+| [Judge](features/judge/) | `create_judge` for standalone evaluation of AI responses |
+| [Managed Agent](features/managed_agent/) | `create_agent` with tool calling, automatic metrics tracking, and judge evaluation |
+| [Managed Agent Graph](features/managed_agent_graph/) | `create_agent_graph` with multi-node workflows, tool calling, per-node metrics, and judge evaluation |
+| [Managed Model](features/managed_model/) | `create_model` with managed chat, automatic metrics tracking, and judge evaluation |
```
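Each example's README lists the environment variables it needs before the SDK is initialized. A small, hypothetical helper illustrates the pattern of reporting every missing variable up front (the function name and the `OPENAI_API_KEY` entry are illustrative; only `LAUNCHDARKLY_SDK_KEY` and the error-message style come from the code in this PR):

```python
import os


def missing_env(required, environ=None):
    """Return the names in `required` that are unset or empty.

    `environ` defaults to os.environ; passing a dict in makes the
    function easy to test without touching the real environment.
    """
    env = os.environ if environ is None else environ
    return [name for name in required if not env.get(name)]


# Each example would declare its own list of required variables.
REQUIRED = ["LAUNCHDARKLY_SDK_KEY", "OPENAI_API_KEY"]

for name in missing_env(REQUIRED):
    print(f"*** Please set the {name} env first")
```

The actual examples check `LAUNCHDARKLY_SDK_KEY` individually and call `exit()` on failure; collecting all missing names first just saves a run-fix-rerun cycle when several keys are absent.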
50 changes: 0 additions & 50 deletions examples/chat_observability/README.md

This file was deleted.

128 changes: 0 additions & 128 deletions examples/chat_observability/chat_observability_example.py

This file was deleted.

55 changes: 0 additions & 55 deletions examples/judge/README.md

This file was deleted.

36 changes: 36 additions & 0 deletions features/judge/README.md
```diff
@@ -0,0 +1,36 @@
+# Judge Example (Direct Evaluation)
+
+This example demonstrates how to use LaunchDarkly's judge functionality to evaluate specific input/output pairs directly, without an associated chat session.
+
+## Prerequisites
+
+- Python 3.10 or higher
+- [Poetry](https://python-poetry.org/) installed
+- A [LaunchDarkly](https://launchdarkly.com/) account and SDK key
+- API keys for the provider you want to use (OpenAI, Bedrock, or Gemini)
+
+## Setup
+
+1. Create the following config in your LaunchDarkly project. You can use a different key by setting the environment variable in your `.env`.
+
+   - [Create a Judge Config](https://launchdarkly.com/docs/home/ai-configs/judges) for evaluation. Default key: `sample-ai-judge`.
+
+1. Create a `.env` file in this directory with the following variables:
+
+   ```
+   LAUNCHDARKLY_SDK_KEY=your-launchdarkly-sdk-key
+   LAUNCHDARKLY_AI_JUDGE_KEY=sample-ai-judge
+   OPENAI_API_KEY=your-openai-api-key
+   ```
+
+1. Install the required dependencies:
+
+   ```bash
+   poetry install
+   ```
+
+## Run
+
+```bash
+poetry run judge
+```
```
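The `.env` file in that README is plain `KEY=value` lines; the examples load it with `python-dotenv` (`load_dotenv()`), but the core of the format is simple enough to sketch as a toy parser. This is illustrative only and not what the SDK or `python-dotenv` actually does:

```python
def parse_dotenv(text):
    """Toy parser for the KEY=value format used by the examples' .env files.

    Skips blank lines and #-comments and strips one layer of quotes.
    python-dotenv handles many more cases (export prefixes, multiline
    values, escapes), so use it in real code.
    """
    env = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip("'\"")
    return env
```

The one behavior worth copying from `python-dotenv` here is that existing process environment variables win by default, so a key exported in your shell overrides the `.env` value.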
```diff
@@ -6,6 +6,7 @@
 from ldclient import Context
 from ldclient.config import Config
 from ldai import LDAIClient, AIJudgeConfigDefault
+from ldobserve import ObservabilityConfig, ObservabilityPlugin

 load_dotenv()

@@ -24,7 +25,11 @@ async def async_main():
         print("*** Please set the LAUNCHDARKLY_SDK_KEY env first")
         exit()

-    ldclient.set_config(Config(sdk_key))
+    ldclient.set_config(Config(sdk_key, plugins=[
+        ObservabilityPlugin(ObservabilityConfig(
+            service_name='hello-python-ai-judge',
+        ))
+    ]))

     if not ldclient.get().is_initialized():
         print("*** SDK failed to initialize. Please check your internet connection and SDK credential for any typo.")
```
```diff
@@ -5,19 +5,16 @@ description = "Hello LaunchDarkly for Python AI - Judge"
 authors = ["LaunchDarkly <dev@launchdarkly.com>"]
 license = "Apache-2.0"
 readme = "README.md"
-packages = [
-    {include = "chat_judge_example.py"},
-    {include = "direct_judge_example.py"},
-]
+packages = [{include = "judge_example.py"}]

 [tool.poetry.scripts]
-chat-judge = "chat_judge_example:main"
-direct-judge = "direct_judge_example:main"
+judge = "judge_example:main"

 [tool.poetry.dependencies]
 python = "^3.10"
 python-dotenv = ">=1.0.0"
 launchdarkly-server-sdk-ai = ">=0.19.0"
+launchdarkly-observability = ">=0.1.0"
 launchdarkly-server-sdk-ai-openai = ">=0.5.0"
 launchdarkly-server-sdk-ai-langchain = ">=0.6.0"
 openai = ">=1.0.0"
```
File renamed without changes.