LombdaAgentSDK #58
Replies: 8 comments 11 replies
-
Okayyyyyy... Made conversion a breeze!

```csharp
ConvertStringToIntState inputState = new();
ConvertStringToIntState resultState = new();
inputState.AddTransition(_ => true, (result) => result.ToString(), resultState); // the middle parameter is the conversion delegate
resultState.AddTransition(_ => true, new ExitState());
```

It also handles the type safety even with the converted results, which is nice.
-
Wouldn't it be better to use generics here, e.g. `ConvertState<TIn, TOut>`?
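To make the suggestion concrete, a generic conversion state might look roughly like this (a minimal, self-contained sketch; `BaseState<TIn, TOut>` and the `Invoke` signature are illustrative assumptions, not the SDK's actual API):

```csharp
using System;

// Hypothetical sketch of a generic conversion state.
// BaseState<TIn, TOut> is an assumed base type for illustration only.
public abstract class BaseState<TIn, TOut>
{
    public abstract TOut Invoke(TIn input);
}

// One reusable state covers any TIn -> TOut conversion, instead of
// hand-writing ConvertStringToIntState, ConvertIntToStringState, etc.
public sealed class ConvertState<TIn, TOut> : BaseState<TIn, TOut>
{
    private readonly Func<TIn, TOut> _converter;

    public ConvertState(Func<TIn, TOut> converter) => _converter = converter;

    public override TOut Invoke(TIn input) => _converter(input);
}
```

For example, `new ConvertState<int, string>(i => i.ToString())` would stand in for a dedicated one-off conversion state.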
-
Massive update pushed to master. I finally made my way to building a basic UI to see the implementation, which required LombdaAgent, the master controller for the whole verbose and streaming system, as well as a master agent on top of that to control the conversation. Obsoleted the OpenAI C# lib in favor of yours, added a very rough UI window form for BabyAGI, and most importantly I had the AI make me an API for communicating with LombdaAgent, which completes most of the workflow. I want to have the API client stand alone, with something (possibly another server) that controls the custom agents people create, without having to fork my API project and inject theirs. With the structured outputs I found I need to figure out two things..
New
Fixed
Known Issues
-
New interface for interacting with agents using MAUI and the API (I created a new repo for the MAUI project).
New updates to my LLMTornado fork.
Challenges
-
@BoHomola implemented smart upcasting of requests from /chat/completions to the /responses endpoint. This allows us to use models that are exclusive to /responses. This also works with streaming and native Responses events. So basically, you can either continue doing this, or you can just use:

```csharp
ResponseRequestParameters = new ResponseRequest
{
    // config
}
```

For example:

```csharp
[TornadoTest]
public static async Task StreamResponseSimpleTextUsingChat()
{
    Conversation chat = Program.Connect().Chat.CreateConversation(new ChatRequest
    {
        Model = ChatModel.OpenAi.O4.V4Mini,
        MaxTokens = 4000,
        // using the Responses endpoint is preferred, if the model supports it
        ResponseRequestParameters = new ResponseRequest
        {
            Reasoning = new ReasoningConfiguration
            {
                Effort = ResponseReasoningEfforts.Medium,
                Summary = ResponseReasoningSummaries.Auto
            }
        }
    })
    .AppendSystemMessage("You are a helpful assistant")
    .AppendUserInput([
        new ChatMessagePart("How to explain theory of relativity to a 15 years old student?")
    ]);

    // try calling Serialize() on "chat" with and without "ResponseRequestParameters"
    // and observe the url and body of the request
    await chat.StreamResponseRich(new ChatStreamEventHandler
    {
        OnResponseEvent = (evt) =>
        {
            // "evt" is IResponseEvent
            return ValueTask.CompletedTask;
        },
        MessageTokenHandler = (delta) =>
        {
            Console.Write(delta);
            return ValueTask.CompletedTask;
        },
        ReasoningTokenHandler = (reasoningDelta) =>
        {
            Console.Write(reasoningDelta.Content);
            return ValueTask.CompletedTask;
        }
    });
}
```

I've also made good progress on the delegates - we now support almost any type, allow conditionally excluding certain parameters from the request, and allow overriding/adding custom parameters to the request (e.g., in case you have an enum with many options but want the model to choose only from a subset):

```csharp
[TornadoTest]
public static async Task TornadoFunction()
{
    Conversation conversation = Program.Connect().Chat.CreateConversation(new ChatRequest
    {
        Model = ChatModel.OpenAi.Gpt41.V41,
        Tools =
        [
            new Tool((
                string location,
                Continents continent,
                ComplexClass cls,
                List<string> names,
                List<Person> people,
                string[] popularGames,
                string[,] wonGameOfCheckers3x3useXOchars,
                Continents[] allContinents,
                string[][] rpgInventoryItemsUseXForEmpty,
                HashSet<int> setOfUniqueInts,
                object someDataAboutGames,
                DateTime dateBattleOfVerdunStarted,
                ToolArguments args) => // ToolArguments is a special type that allows accessing the inferred arguments via the IDictionary interface
            {
                // manual decoding example, normally not needed
                if (args.TryGetArgument("people", out List<Person>? fetchedPeople))
                {
                    foreach (Person person in fetchedPeople)
                    {
                        Console.WriteLine(person.Name);
                    }
                }

                return "";
            }, new ToolMetadata
            {
                Params = [
                    new ToolParamDefinition("allContinents", new ToolParamListEnum("continents", true, [ nameof(Continents.Africa), nameof(Continents.Antarctica) ]))
                ],
                Ignore = [ "wonGameOfCheckers3x3useXOchars" ]
            })
        ],
        ToolChoice = OutboundToolChoice.Required
    });

    conversation.AddUserMessage("Fill the provided JSON structure with mock data");

    TornadoRequestContent serialized = conversation.Serialize(new ChatRequestSerializeOptions
    {
        Pretty = true
    });

    Console.Write(serialized);

    var data = await conversation.GetResponseRich();
    int z = 0;
}
```

It should be ready to use in the next version, 3.8 - I guess in one or two days.
-
@Johnny2x2 FYI, we now have our own page: https://llmtornado.ai
-
LlmTornado.Agents

@lofcz I have completed the LLMTornado Agent Chat Integration from my lib to yours!! LlmTornado.Agents. I'm still working out the Response lib testing. Check it out and let me know what you think! I added demos as well so you can try it out, and also added a research agent demo for the state machine.

Lombda.StateMachine

The State Machine now has its own repo and NuGet package.

Demo Code

Basic

```csharp
[TornadoTest]
public static async Task BasicTornadoRun()
{
    TornadoAgent agent = new(
        Program.Connect(),
        ChatModel.OpenAi.Gpt41.V41Mini,
        "You are a useful assistant.");

    var result = await TornadoRunner.RunAsync(agent, "What is 2+2?");

    Console.WriteLine(result.Messages.Last().Content);
}
```

Streaming

```csharp
[TornadoTest]
public static async Task RunHelloWorldStreaming()
{
    TornadoAgent agent = new(
        Program.Connect(),
        ChatModel.OpenAi.Gpt41.V41Mini,
        "Have fun");

    // Enhanced streaming callback to handle the new ModelStreamingEvents system
    void StreamingHandler(ModelStreamingEvents streamingEvent)
    {
        switch (streamingEvent.EventType)
        {
            case ModelStreamingEventType.OutputTextDelta:
                if (streamingEvent is ModelStreamingOutputTextDeltaEvent deltaEvent)
                {
                    Console.Write(deltaEvent.DeltaText); // write the text delta directly
                }
                break;
            default:
                break;
        }
    }

    var result = await TornadoRunner.RunAsync(agent, "Hello Streaming World!", streaming: true, streamingCallback: StreamingHandler);
}
```

Tools and Text Formatting

```csharp
[TornadoTest]
public async Task RunBasicTornadoToolUse()
{
    TornadoAgent agent = new TornadoAgent(Program.Connect(),
        ChatModel.OpenAi.Gpt41.V41Mini,
        "You are a useful assistant.",
        tools: [GetCurrentWeather],
        outputSchema: typeof(WeatherReport));

    var result = await TornadoRunner.RunAsync(agent, "What is the weather in boston?");

    Console.WriteLine(result.Messages.Last().Content.ParseJson<WeatherReport>());
}
```
-
Hello Again! Figured I'd keep you up to date on my SDK LombdaAgentSDK here instead of on the issue thread. I have pushed a beta BabyAGI project with the self-build principle. Just finished wiring it up tonight; still playing around with it, but it is "working".

Because of the type-safe nature of the state transitions, switching between states can sometimes be a pain, because you need to transition to a state just to convert the data to the correct type. I came up with a new primitive to convert the data a bit more easily without having to create a whole new state. I guess it is more flexible than just conversion.
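The idea can be sketched in isolation like this (a minimal, self-contained sketch; `Transition`, `TryFire`, and the delegate shapes are illustrative assumptions, not the SDK's real types): the transition itself carries a conversion delegate, so the payload is re-typed on its way to the next state instead of inside a dedicated converter state.

```csharp
using System;

// Hypothetical sketch: a transition that converts its payload in-flight.
// All names here are illustrative, not the SDK's actual API.
public sealed class Transition<TOut, TNextIn>
{
    public Func<TOut, bool> Condition { get; }
    public Func<TOut, TNextIn> Convert { get; }  // bridges the type gap between states
    public Action<TNextIn> NextState { get; }    // entry point of the target state

    public Transition(Func<TOut, bool> condition, Func<TOut, TNextIn> convert, Action<TNextIn> nextState)
    {
        Condition = condition;
        Convert = convert;
        NextState = nextState;
    }

    // If the condition holds, convert the result and hand it to the next state.
    public bool TryFire(TOut result)
    {
        if (!Condition(result)) return false;
        NextState(Convert(result));
        return true;
    }
}
```

With an identity delegate (`x => x`) this degenerates to a plain typed transition, so conversion stays an opt-in extra rather than a separate state.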
Added a couple of new things to get BabyAGI to work.