Merged
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -32,7 +32,7 @@ jobs:

- name: Install dependencies
run: |
-for dir in getting_started/completion_config/*/ getting_started/agent_config/*/ features/*/; do
+for dir in getting_started/*/**/ features/*/; do
echo "Installing $dir"
poetry -C "$dir" install
done
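A note on the widened glob above: in the non-interactive bash that CI runners use, the `globstar` option is off by default, so `**` degrades to a plain `*`, and `getting_started/*/**/` matches exactly the two-level provider/example directories without enumerating group folders. A minimal sketch of the matching behavior, using hypothetical stand-in directory names:

```shell
# Recreate the expected layout in a scratch directory and show which
# directories the new glob actually matches (names are stand-ins).
scratch=$(mktemp -d)
mkdir -p "$scratch/getting_started/bedrock/converse" \
         "$scratch/getting_started/openai/chat_completions" \
         "$scratch/features/create_judge"
cd "$scratch"
matched=""
for dir in getting_started/*/**/ features/*/; do
  matched="$matched $dir"
done
# Lists the two-level example dirs plus each features/ subdirectory.
echo "$matched"
```

With `shopt -s globstar` the same pattern would also match more deeply nested directories, so the loop tolerates further reorganization.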
26 changes: 13 additions & 13 deletions README.md
@@ -13,24 +13,24 @@ For more comprehensive instructions, visit the [Quickstart page](https://docs.la

## Getting Started

-These examples show how to integrate LaunchDarkly AI with different providers using `completion_config` and `agent_config`.
+These examples show how to integrate LaunchDarkly AI with different providers.

-| Example | Description |
-| --- | --- |
-| [Bedrock](getting_started/completion_config/bedrock/) | `completion_config` with AWS Bedrock, metrics tracking |
-| [Gemini](getting_started/completion_config/gemini/) | `completion_config` with Google Gemini, metrics tracking |
-| [LangChain](getting_started/completion_config/langchain/) | `completion_config` with LangChain, async metrics tracking |
-| [LangGraph Agent](getting_started/agent_config/langgraph_agent/) | `agent_config` with a single LangGraph ReAct agent, tool calling, metrics tracking |
-| [LangGraph Multi-Agent](getting_started/agent_config/langgraph_multi_agent/) | `agent_config` with multiple LangGraph agents, custom StateGraph workflow, per-node metrics |
-| [OpenAI](getting_started/completion_config/openai/) | `completion_config` with OpenAI, automatic metrics tracking |
+| Provider | Example | Description |
+| --- | --- | --- |
+| Bedrock | [Converse](getting_started/bedrock/converse/) | `completion_config` with AWS Bedrock Converse API, metrics tracking |
+| Gemini | [Generate Content](getting_started/gemini/generate_content/) | `completion_config` with Google GenAI, metrics tracking |
+| LangChain | [Invoke](getting_started/langchain/invoke/) | `completion_config` with LangChain, async metrics tracking |
+| LangGraph | [ReAct Agent](getting_started/langgraph/react_agent/) | `agent_config` with a single LangGraph ReAct agent, tool calling, metrics tracking |
+| LangGraph | [StateGraph](getting_started/langgraph/state_graph/) | `agent_config` with multiple LangGraph agents, custom StateGraph workflow, per-node metrics |
+| OpenAI | [Chat Completions](getting_started/openai/chat_completions/) | `completion_config` with OpenAI, automatic metrics tracking |

## Features

These examples demonstrate LaunchDarkly's managed APIs and standalone capabilities.

| Example | Description |
| --- | --- |
-| [Judge](features/judge/) | `create_judge` for standalone evaluation of AI responses |
-| [Managed Agent](features/managed_agent/) | `create_agent` with tool calling, automatic metrics tracking, and judge evaluation |
-| [Managed Agent Graph](features/managed_agent_graph/) | `create_agent_graph` with multi-node workflows, tool calling, per-node metrics, and judge evaluation |
-| [Managed Model](features/managed_model/) | `create_model` with managed chat, automatic metrics tracking, and judge evaluation |
+| [create_judge](features/create_judge/) | Standalone evaluation of AI responses |
+| [create_agent](features/create_agent/) | Tool calling, automatic metrics tracking, and judge evaluation |
+| [create_agent_graph](features/create_agent_graph/) | Multi-node workflows, tool calling, per-node metrics, and judge evaluation |
+| [create_model](features/create_model/) | Managed chat, automatic metrics tracking, and judge evaluation |
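The four `create_*` helpers renamed above share one usage shape, visible in the example hunks in this PR: the helper returns a falsy value when the underlying AI config is disabled, and the caller guards on that before doing any work. A minimal runnable sketch of that pattern — the stub client is hypothetical; the real one comes from the LaunchDarkly AI SDK:

```python
class StubAIClient:
    """Hypothetical stand-in for the SDK client; create_judge returns
    None here to simulate a disabled AI config."""

    def create_judge(self, key, context):
        return None  # disabled config -> falsy result


def run_judge(aiclient, judge_key, context):
    # Guard pattern used across the examples in this PR.
    judge = aiclient.create_judge(judge_key, context)
    if not judge:
        return (f"AI config '{judge_key}' is disabled. Verify the config key exists "
                f"in your LaunchDarkly project and is not targeting a disabled variation.")
    return "judge ready"


message = run_judge(StubAIClient(), "sample-judge", context={})
print(message)
```

The same guard applies verbatim to `create_agent`, `create_agent_graph`, and `create_model`; only the helper name and config key change.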
@@ -1,6 +1,6 @@
-# Managed Agent Example
+# Create Agent Example

-This example demonstrates how to use LaunchDarkly's managed agent functionality, which handles model creation, metric tracking, and judge evaluation dispatch automatically.
+This example demonstrates how to use LaunchDarkly's `create_agent` method, which handles model creation, metric tracking, and judge evaluation dispatch automatically.

## Prerequisites

@@ -71,7 +71,7 @@ async def async_main():
)

if not agent:
-print(f"*** Failed to create agent for key: {agent_config_key}")
+print(f"AI config '{agent_config_key}' is disabled. Verify the config key exists in your LaunchDarkly project and is not targeting a disabled variation.")
return

sample_question = 'What is the weather in Tokyo?'
@@ -5,10 +5,10 @@ description = "Hello LaunchDarkly for Python AI - Managed Agent"
authors = ["LaunchDarkly <dev@launchdarkly.com>"]
license = "Apache-2.0"
readme = "README.md"
-packages = [{include = "managed_agent_example.py"}]
+packages = [{include = "create_agent_example.py"}]

[tool.poetry.scripts]
-agent = "managed_agent_example:main"
+agent = "create_agent_example:main"

[tool.poetry.dependencies]
python = "^3.10"
@@ -1,6 +1,6 @@
-# Managed Agent Graph Example
+# Create Agent Graph Example

-This example demonstrates how to use LaunchDarkly's managed agent graph functionality, which orchestrates multi-node agent workflows with automatic metric tracking at both the graph and per-node level.
+This example demonstrates how to use LaunchDarkly's `create_agent_graph` method, which orchestrates multi-node agent workflows with automatic metric tracking at both the graph and per-node level.

## Prerequisites

@@ -74,7 +74,7 @@ async def async_main():
)

if not graph:
-print(f"*** Failed to create agent graph for key: {graph_key}")
+print(f"AI config '{graph_key}' is disabled. Verify the config key exists in your LaunchDarkly project and is not targeting a disabled variation.")
return

sample_question = 'Plan a trip to Tokyo next week. Find flights, hotels, and check the weather.'
@@ -5,10 +5,10 @@ description = "Hello LaunchDarkly for Python AI - Managed Agent Graph"
authors = ["LaunchDarkly <dev@launchdarkly.com>"]
license = "Apache-2.0"
readme = "README.md"
-packages = [{include = "managed_agent_graph_example.py"}]
+packages = [{include = "create_agent_graph_example.py"}]

[tool.poetry.scripts]
-agent-graph = "managed_agent_graph_example:main"
+agent-graph = "create_agent_graph_example:main"

[tool.poetry.dependencies]
python = "^3.10"
@@ -1,6 +1,6 @@
-# Judge Example (Direct Evaluation)
+# Create Judge Example

-This example demonstrates how to use LaunchDarkly's judge functionality to evaluate specific input/output pairs directly, without an associated chat session.
+This example demonstrates how to use LaunchDarkly's `create_judge` method to evaluate specific input/output pairs directly, without an associated chat session.

## Prerequisites

@@ -66,7 +66,7 @@ async def async_main():
judge = aiclient.create_judge(judge_key, context)

if not judge:
-print(f"*** Failed to create judge for key: {judge_key}")
+print(f"AI config '{judge_key}' is disabled. Verify the config key exists in your LaunchDarkly project and is not targeting a disabled variation.")
return

input_text = 'You are a helpful assistant for the company LaunchDarkly. How can you help me?'
@@ -5,10 +5,10 @@ description = "Hello LaunchDarkly for Python AI - Judge"
authors = ["LaunchDarkly <dev@launchdarkly.com>"]
license = "Apache-2.0"
readme = "README.md"
-packages = [{include = "judge_example.py"}]
+packages = [{include = "create_judge_example.py"}]

[tool.poetry.scripts]
-judge = "judge_example:main"
+judge = "create_judge_example:main"

[tool.poetry.dependencies]
python = "^3.10"
@@ -1,6 +1,6 @@
-# Managed Model Example
+# Create Model Example

-This example demonstrates how to use LaunchDarkly's managed model functionality (`create_model`), which handles model creation, chat execution, and optional judge evaluation dispatch automatically.
+This example demonstrates how to use LaunchDarkly's `create_model` method, which handles model creation, chat execution, and optional judge evaluation dispatch automatically.

## Prerequisites

@@ -32,5 +32,5 @@ This example demonstrates how to use LaunchDarkly's managed model functionality
## Run

```bash
-poetry run managed-model
+poetry run model
```
@@ -64,7 +64,7 @@ async def async_main():
})

if not chat:
-print(f"*** Failed to create chat for key: {ai_config_key}")
+print(f"AI config '{ai_config_key}' is disabled. Verify the config key exists in your LaunchDarkly project and is not targeting a disabled variation.")
return

sample_question = 'How can LaunchDarkly help me?'
@@ -5,10 +5,10 @@ description = "Hello LaunchDarkly for Python AI - Managed Model"
authors = ["LaunchDarkly <dev@launchdarkly.com>"]
license = "Apache-2.0"
readme = "README.md"
-packages = [{include = "managed_model_example.py"}]
+packages = [{include = "create_model_example.py"}]

[tool.poetry.scripts]
-managed-model = "managed_model_example:main"
+model = "create_model_example:main"

[tool.poetry.dependencies]
python = "^3.10"
@@ -90,7 +90,7 @@ def main():
)

if not config_value.enabled:
-print("AI Config is disabled")
+print(f"AI config '{ai_config_key}' is disabled. Verify the config key exists in your LaunchDarkly project and is not targeting a disabled variation.")
return

tracker = config_value.create_tracker()
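The non-managed examples in this PR use a slightly different shape from the `create_*` helpers: the config object itself is returned, and the caller checks its `enabled` flag before creating a tracker. A runnable sketch of that flow with hypothetical stubs standing in for the SDK objects:

```python
class StubTracker:
    """Hypothetical stand-in for the SDK's metrics tracker."""


class StubConfig:
    """Hypothetical stand-in for an evaluated AI config value."""

    def __init__(self, enabled):
        self.enabled = enabled

    def create_tracker(self):
        return StubTracker()


def guard_and_track(config_value, ai_config_key):
    # Same guard the examples in this PR use before any model work.
    if not config_value.enabled:
        print(f"AI config '{ai_config_key}' is disabled. Verify the config key exists "
              "in your LaunchDarkly project and is not targeting a disabled variation.")
        return None
    return config_value.create_tracker()


tracker = guard_and_track(StubConfig(enabled=True), "sample-ai-config")
skipped = guard_and_track(StubConfig(enabled=False), "sample-ai-config")
```

Here `tracker` is a live tracker and `skipped` is `None`, mirroring the early `return` in the diff above.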
@@ -139,7 +139,7 @@ def main():
)

if not config_value.enabled:
-print("AI Config is disabled")
+print(f"AI config '{ai_config_key}' is disabled. Verify the config key exists in your LaunchDarkly project and is not targeting a disabled variation.")
return

tracker = config_value.create_tracker()
@@ -77,7 +77,7 @@ async def async_main():
)

if not config_value.enabled:
-print("AI Config is disabled")
+print(f"AI config '{ai_config_key}' is disabled. Verify the config key exists in your LaunchDarkly project and is not targeting a disabled variation.")
return

tracker = config_value.create_tracker()
@@ -78,7 +78,7 @@ def main():
agent_config = aiclient.agent_config(agent_config_key, context)

if not agent_config.enabled:
-print("AI Agent Config is disabled")
+print(f"AI config '{agent_config_key}' is disabled. Verify the config key exists in your LaunchDarkly project and is not targeting a disabled variation.")
return

langchain_provider = map_provider_to_langchain(agent_config.provider.name)
@@ -123,7 +123,7 @@ def ai_node(
goto=END,
update={
"messages": state["messages"],
-state_key: f"AI Config {config_key} is disabled. Node for {config_key} skipped."
+state_key: f"AI config '{config_key}' is disabled. Verify the config key exists in your LaunchDarkly project and is not targeting a disabled variation."
}
)

@@ -71,7 +71,7 @@ def main():
)

if not config_value.enabled:
-print("AI Config is disabled")
+print(f"AI config '{ai_config_key}' is disabled. Verify the config key exists in your LaunchDarkly project and is not targeting a disabled variation.")
return

tracker = config_value.create_tracker()