I feel strongly that ChatCompletionClient and the LLMMessage definitions do not belong in autogen_core. It may be the most convenient place to put them, but autogen_core is supposed to be "AI-agnostic", in the sense that it is all about the implementation of the actor-model architecture for agentic systems, and nothing else.
The only places where ChatCompletionClient, LLMMessage and ModelContext are actually used are in autogen_agentchat and autogen_magentic_one.
When building their own agents, developers might not want to use ChatCompletionClient, and may instead opt to use the OpenAI SDK directly, or perhaps use their own client.
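To illustrate the point, here is a minimal sketch of what "bring your own client" could look like without depending on autogen_core's ChatCompletionClient. The `CompletionClient` protocol, `UserMessage` dataclass, and `EchoClient` below are all hypothetical names invented for this example; a real implementation would wrap the OpenAI SDK (or any other provider) behind the same small interface.

```python
from dataclasses import dataclass
from typing import Protocol


# Hypothetical minimal message type an agent author might define themselves,
# instead of importing LLMMessage from autogen_core.
@dataclass
class UserMessage:
    content: str


# Hypothetical structural interface: anything with a matching `create` method
# satisfies it, so no shared base class (and no autogen_core import) is needed.
class CompletionClient(Protocol):
    def create(self, messages: list[UserMessage]) -> str: ...


class EchoClient:
    """Toy stand-in for demonstration; a real client would call a model API,
    e.g. the OpenAI SDK's chat.completions.create, inside `create`."""

    def create(self, messages: list[UserMessage]) -> str:
        # Just echo the last message back.
        return messages[-1].content


# The agent only sees the protocol, so any client implementation can be swapped in.
client: CompletionClient = EchoClient()
print(client.create([UserMessage("hello")]))  # prints "hello"
```

The point of the sketch is that a structural protocol keeps the agent code decoupled from any particular model-client library, which is exactly the flexibility developers lose if the client abstraction is baked into the core package.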
autogen_core also has a dependency on openai, which doesn't make sense, again because it is supposed to be unaware of that layer.
There is no better time than now to mitigate these mistakes early on. I strongly suggest moving ChatCompletionClient, all LLM-related definitions, and autogen_core.tools to an autogen_models package before the official 0.4 release.
autogen_agentchat can have a dependency on autogen_models, and autogen_models can have a dependency on autogen_core.
It's possible that autogen_models may not even need a dependency on autogen_core, in which case autogen_agentchat can depend on both autogen_models and autogen_core.
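The layering described above could be expressed in each package's `pyproject.toml`. This is only a sketch of the proposal, assuming the hypothetical autogen_models package exists; the real packages may declare their metadata differently:

```toml
# autogen_models/pyproject.toml (hypothetical package)
[project]
name = "autogen_models"
# Either no core dependency at all, or ["autogen_core"] if the
# model definitions turn out to need the core runtime types.
dependencies = []

# autogen_agentchat/pyproject.toml (sketch of the proposed layering)
[project]
name = "autogen_agentchat"
dependencies = [
    "autogen_core",
    "autogen_models",  # hypothetical, per this proposal
]
```

Under this layout, autogen_core stays free of any model-provider dependency, and only packages that actually use LLM clients pull in autogen_models.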
Thanks for opening the discussion. When we started down the path of splitting up the packages, we did consider more fine-grained packages than what we've landed on here. However, we opted for simply core, agentchat, and ext to reduce the number of places to search and to reduce the development and maintenance burden. It's already a bit confusing to know which package you should be importing from for some things, and we didn't want to make that worse.
The intention of the core package is to provide the foundational interfaces (and a few limited implementations) that all other packages and extensions implement and rely on. It includes the actor framework, but it is not meant to be an "AI-agnostic" package. We might need to update the descriptions to make this clearer.
You're right that core is meant to be lean, and the OpenAI dependency was a leftover from an earlier time. I actually removed that dependency only hours before you opened this issue (#4919).