diff --git a/.github/workflows/lfs-check.yml b/.github/workflows/lfs-check.yml index e2bcfb5668e8..4baae925de3c 100644 --- a/.github/workflows/lfs-check.yml +++ b/.github/workflows/lfs-check.yml @@ -10,6 +10,6 @@ jobs: uses: actions/checkout@v4 with: lfs: true - - name: Check Git LFS files for consistency + - name: "Check Git LFS files for consistency. If you see an error like 'pointer: unexpectedGitObject ... should have been a pointer but was not', please install Git LFS locally, delete the problematic file, and then add it back again. This ensures it's properly tracked." run: | git lfs fsck diff --git a/autogen/agentchat/conversable_agent.py b/autogen/agentchat/conversable_agent.py index 35af969673b8..99b89ddf69b4 100644 --- a/autogen/agentchat/conversable_agent.py +++ b/autogen/agentchat/conversable_agent.py @@ -149,7 +149,13 @@ def __init__( ) # Take a copy to avoid modifying the given dict if isinstance(llm_config, dict): - llm_config = copy.deepcopy(llm_config) + try: + llm_config = copy.deepcopy(llm_config) + except TypeError as e: + raise TypeError( + "Please implement the __deepcopy__ method for each value class in llm_config to support deepcopy."
+ " Refer to the docs for more details: https://microsoft.github.io/autogen/docs/topics/llm_configuration#adding-http-client-in-llm_config-for-proxy" + ) from e self._validate_llm_config(llm_config) diff --git a/notebook/agentchat_oai_assistant_function_call.ipynb b/notebook/agentchat_oai_assistant_function_call.ipynb index 878175420c6c..bc78819fb198 100644 --- a/notebook/agentchat_oai_assistant_function_call.ipynb +++ b/notebook/agentchat_oai_assistant_function_call.ipynb @@ -4,7 +4,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Chat with OpenAI Assistant using function call in AutoGen: OSS Insights for Advanced GitHub Data Analysis\n", + "# Chat with OpenAI Assistant using function call in AutoGen: OSS Insights for Advanced GitHub Data Analysis\n", "\n", "This Jupyter Notebook demonstrates how to leverage OSS Insight (Open Source Software Insight) for advanced GitHub data analysis by defining `Function calls` in AutoGen for the OpenAI Assistant. \n", "\n", @@ -14,12 +14,19 @@ "2. Defining an OpenAI Assistant Agent in AutoGen\n", "3. Fetching GitHub Insight Data using Function Call\n", "\n", - "### Requirements\n", + "## Requirements\n", "\n", "AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n", + "````{=mdx}\n", + ":::info Requirements\n", + "Install `pyautogen`:\n", "```bash\n", "pip install pyautogen\n", - "```" + "```\n", + "\n", + "For more information, please refer to the [installation guide](/docs/installation/).\n", + ":::\n", + "````" ] }, { @@ -36,7 +43,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Function Schema and Implementation\n", + "## Function Schema and Implementation\n", "\n", "This section provides the function schema definition and their implementation details. These functions are tailored to fetch and process data from GitHub, utilizing OSS Insight's capabilities." 
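The new `TypeError` branch above points users at the docs for values in `llm_config` that cannot be deep-copied, such as an HTTP client injected for a proxy. As a minimal sketch of that workaround — with a hypothetical `ProxyClient` standing in for a real client object — the value's class can implement `__deepcopy__` so that `copy.deepcopy(llm_config)` reuses the instance instead of raising:

```python
import copy


class ProxyClient:
    """Hypothetical stand-in for a value (e.g. an HTTP client) that cannot be deep-copied."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def __deepcopy__(self, memo):
        # Reuse the same instance instead of attempting a field-by-field copy.
        return self


llm_config = {
    "model": "gpt-4",
    "http_client": ProxyClient("http://localhost:8080"),
}

# The dict itself is still copied, but the client instance is shared.
copied = copy.deepcopy(llm_config)
```

This is the pattern the linked `llm_configuration` docs describe for proxy clients; returning `self` is safe here because the client is intended to be shared, not duplicated.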
] @@ -101,7 +108,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Defining an OpenAI Assistant Agent in AutoGen\n", + "## Defining an OpenAI Assistant Agent in AutoGen\n", "\n", "Here, we explore how to define an OpenAI Assistant Agent within the AutoGen. This includes setting up the agent to make use of the previously defined function calls for data retrieval and analysis." ] @@ -159,7 +166,18 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Fetching GitHub Insight Data using Function Call\n", + "````{=mdx}\n", + ":::tip\n", + "Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n", + ":::\n", + "````\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Fetching GitHub Insight Data using Function Call\n", "\n", "This part of the notebook demonstrates the practical application of the defined functions and the OpenAI Assistant Agent in fetching and interpreting GitHub Insight data." ] @@ -256,6 +274,13 @@ } ], "metadata": { + "front_matter": { + "description": "This Jupyter Notebook demonstrates how to leverage OSS Insight (Open Source Software Insight) for advanced GitHub data analysis by defining `Function calls` in AutoGen for the OpenAI Assistant.", + "tags": [ + "OpenAI Assistant", + "function call" + ] + }, "kernelspec": { "display_name": "autogen", "language": "python", diff --git a/notebook/agentchat_oai_assistant_groupchat.ipynb b/notebook/agentchat_oai_assistant_groupchat.ipynb index 603d2cf71d99..d38fed4cdaee 100644 --- a/notebook/agentchat_oai_assistant_groupchat.ipynb +++ b/notebook/agentchat_oai_assistant_groupchat.ipynb @@ -14,9 +14,16 @@ "## Requirements\n", "\n", "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install:\n", + "````{=mdx}\n", + ":::info Requirements\n", + "Install `pyautogen`:\n", "```bash\n", - "pip install \"pyautogen>=0.2.3\"\n", - "```" + "pip install pyautogen\n", + "```\n", + "\n", + "For more information, please refer to the [installation guide](/docs/installation/).\n", + ":::\n", + "````" ] }, { @@ -50,19 +57,11 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well).\n", - "\n", - "The config list looks like the following:\n", - "```python\n", - "config_list = [\n", - " {\n", - " \"model\": \"gpt-4\",\n", - " \"api_key\": \"\",\n", - " }, # OpenAI API endpoint for gpt-4\n", - "]\n", - "```\n", - "\n", - "Currently Azure OpenAI does not support assistant api. You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." 
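The inline example removed above still documents the structure that `config_list_from_json` expects. As a small illustrative sketch (the empty `api_key` is a placeholder), the same list can be supplied as a JSON string through the `OAI_CONFIG_LIST` environment variable:

```python
import json
import os

# Minimal config list, mirroring the inline example removed in this diff.
# The empty api_key is a placeholder for a real OpenAI key.
config_list = [
    {
        "model": "gpt-4",
        "api_key": "",
    },  # OpenAI API endpoint for gpt-4
]

# config_list_from_json reads this JSON from the OAI_CONFIG_LIST environment
# variable, or from a file named OAI_CONFIG_LIST if the variable is not set.
os.environ["OAI_CONFIG_LIST"] = json.dumps(config_list)
```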
+ "````{=mdx}\n", + ":::tip\n", + "Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n", + ":::\n", + "````" ] }, { @@ -482,6 +481,13 @@ } ], "metadata": { + "front_matter": { + "description": "This Jupyter Notebook demonstrates how to use the GPTAssistantAgent in AutoGen's group chat mode, enabling collaborative task performance through automated chat with agents powered by LLMs, tools, or humans.", + "tags": [ + "OpenAI Assistant", + "group chat" + ] + }, "kernelspec": { "display_name": "Python 3", "language": "python", diff --git a/notebook/agentchat_oai_code_interpreter.ipynb b/notebook/agentchat_oai_code_interpreter.ipynb index 921165fdd6b3..a8aeb6147896 100644 --- a/notebook/agentchat_oai_code_interpreter.ipynb +++ b/notebook/agentchat_oai_code_interpreter.ipynb @@ -10,9 +10,16 @@ "## Requirements\n", "\n", "AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n", + "````{=mdx}\n", + ":::info Requirements\n", + "Install `pyautogen`:\n", "```bash\n", "pip install pyautogen\n", - "```" + "```\n", + "\n", + "For more information, please refer to the [installation guide](/docs/installation/).\n", + ":::\n", + "````" ] }, { @@ -52,19 +59,11 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well).\n", - "\n", - "The config list looks like the following:\n", - "```python\n", - "config_list = [\n", - " {\n", - " \"model\": \"gpt-4\",\n", - " \"api_key\": \"\",\n", - " }, # OpenAI API endpoint for gpt-4\n", - "]\n", - "```\n", - "\n", - "Currently Azure OpenAi does not support assistant api. You can set the value of config_list in any way you prefer. 
Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/llm_endpoint_configuration.ipynb) for full code examples of the different methods." + "````{=mdx}\n", + ":::tip\n", + "Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n", + ":::\n", + "````" ] }, { @@ -297,6 +296,13 @@ } ], "metadata": { + "front_matter": { + "description": "This Jupyter Notebook showcases the integration of the Code Interpreter tool which executes Python code dynamically within applications.", + "tags": [ + "OpenAI Assistant", + "code interpreter" + ] + }, "kernelspec": { "display_name": "Python 3", "language": "python", diff --git a/notebook/gpt_assistant_agent_function_call.ipynb b/notebook/gpt_assistant_agent_function_call.ipynb new file mode 100644 index 000000000000..6febb89cc9b4 --- /dev/null +++ b/notebook/gpt_assistant_agent_function_call.ipynb @@ -0,0 +1,566 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "hLnLsw8SaMa0" + }, + "source": [ + "# From Dad Jokes To Sad Jokes: Function Calling with GPTAssistantAgent\n", + "\n", + "Autogen allows `GPTAssistantAgent` to be augmented with \"tools\" — pre-defined functions or capabilities — that extend its ability to handle specific tasks, similar to how one might natively utilize tools in the [OpenAI Assistant's API](https://platform.openai.com/docs/assistants/tools).\n", + "\n", + "In this notebook, we create a basic Multi-Agent System using Autogen's `GPTAssistantAgent` to convert Dad jokes on a specific topic into Sad jokes. It consists of a \"Dad\" agent which has the ability to search the [Dad Joke API](https://icanhazdadjoke.com/api) and a \"Sad Joker\" agent which converts the Dad jokes into Sad jokes. The Sad Joker then writes the sad jokes into a txt file.\n", + "\n", + "In this process we demonstrate how to call tools and perform function calling for `GPTAssistantAgent`." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "9E3_0867da8p" + }, + "source": [ + "## Requirements\n", + "AutoGen requires Python 3.8 or newer. For this notebook, please install `pyautogen`:" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "id": "pWFw6-8lMleD" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Requirement already satisfied: pyautogen in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (0.2.8)\n", + "Requirement already satisfied: openai>=1.3 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (1.6.1)\n", + "Requirement already satisfied: diskcache in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (5.6.3)\n", + "Requirement already satisfied: termcolor in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (2.4.0)\n", + "Requirement already satisfied: flaml in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (2.1.1)\n", + "Requirement already satisfied: python-dotenv in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (1.0.0)\n", + "Requirement already satisfied: tiktoken in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (0.5.2)\n", + "Requirement already satisfied: pydantic<3,>=1.10 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (2.5.3)\n", + "Requirement already satisfied: docker in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (7.0.0)\n", + "Requirement already satisfied: anyio<5,>=3.5.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (4.2.0)\n", + "Requirement already satisfied: distro<2,>=1.7.0 in 
/Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (1.8.0)\n", + "Requirement already satisfied: httpx<1,>=0.23.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (0.26.0)\n", + "Requirement already satisfied: sniffio in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (1.3.0)\n", + "Requirement already satisfied: tqdm>4 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (4.66.1)\n", + "Requirement already satisfied: typing-extensions<5,>=4.7 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (4.9.0)\n", + "Requirement already satisfied: annotated-types>=0.4.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pydantic<3,>=1.10->pyautogen) (0.6.0)\n", + "Requirement already satisfied: pydantic-core==2.14.6 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pydantic<3,>=1.10->pyautogen) (2.14.6)\n", + "Requirement already satisfied: packaging>=14.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from docker->pyautogen) (23.2)\n", + "Requirement already satisfied: requests>=2.26.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from docker->pyautogen) (2.31.0)\n", + "Requirement already satisfied: urllib3>=1.26.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from docker->pyautogen) (2.1.0)\n", + "Requirement already satisfied: NumPy>=1.17.0rc1 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from flaml->pyautogen) (1.26.2)\n", + "Requirement already satisfied: regex>=2022.1.18 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from tiktoken->pyautogen) (2023.10.3)\n", + "Requirement already 
satisfied: idna>=2.8 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from anyio<5,>=3.5.0->openai>=1.3->pyautogen) (3.6)\n", + "Requirement already satisfied: certifi in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from httpx<1,>=0.23.0->openai>=1.3->pyautogen) (2023.11.17)\n", + "Requirement already satisfied: httpcore==1.* in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from httpx<1,>=0.23.0->openai>=1.3->pyautogen) (1.0.2)\n", + "Requirement already satisfied: h11<0.15,>=0.13 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from httpcore==1.*->httpx<1,>=0.23.0->openai>=1.3->pyautogen) (0.14.0)\n", + "Requirement already satisfied: charset-normalizer<4,>=2 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from requests>=2.26.0->docker->pyautogen) (3.3.2)\n", + "\n", + "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.3.2\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n", + "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n", + "Note: you may need to restart the kernel to use updated packages.\n" + ] + } + ], + "source": [ + "pip install pyautogen" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "jnH9U6MIdwUl" + }, + "source": [ + "Import Dependencies" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": { + "id": "Ga-yZeoBMzHs" + }, + "outputs": [], + "source": [ + "from typing import Annotated, Literal\n", + "\n", + "import requests\n", + "\n", + "import autogen\n", + "from autogen import UserProxyAgent\n", + "from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent\n", + "from autogen.function_utils import get_function_schema\n", + "\n", + 
"config_list = autogen.config_list_from_json(\n", + " env_or_file=\"OAI_CONFIG_LIST\",\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "02lZOEAQd1qi" + }, + "source": [ + "## Creating the Functions\n", + "We need to create functions for our Agents to call.\n", + "\n", + "This function calls the Dad Joke API with a search term that the agent creates and returns a list of dad jokes." + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": { + "id": "jcti0u08NJ2g" + }, + "outputs": [], + "source": [ + "def get_dad_jokes(search_term: str, page: int = 1, limit: int = 10) -> str:\n", + " \"\"\"\n", + " Fetches a list of dad jokes based on a search term.\n", + "\n", + " Parameters:\n", + " - search_term: The search term to find jokes about.\n", + " - page: The page number of results to fetch (default is 1).\n", + " - limit: The number of results to return per page (default is 10, max is 30).\n", + "\n", + " Returns:\n", + " A list of dad jokes.\n", + " \"\"\"\n", + " url = \"https://icanhazdadjoke.com/search\"\n", + " headers = {\"Accept\": \"application/json\"}\n", + " params = {\"term\": search_term, \"page\": page, \"limit\": limit}\n", + "\n", + " response = requests.get(url, headers=headers, params=params)\n", + "\n", + " if response.status_code == 200:\n", + " data = response.json()\n", + " jokes = [joke[\"joke\"] for joke in data[\"results\"]]\n", + " return jokes\n", + " else:\n", + " return f\"Failed to fetch jokes, status code: {response.status_code}\"" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "id": "2FgsfBK1NsPj" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "['Where do cats write notes?\\r\\nScratch Paper!', 'It was raining cats and dogs the other day. I almost stepped in a poodle.', 'What do you call a group of disorganized cats? A cat-tastrophe.', 'I accidentally took my cats meds last night.
Don’t ask meow.', 'What do you call a pile of cats? A Meowtain.', 'Animal Fact #25: Most bobcats are not named bob.']\n" + ] + } + ], + "source": [ + "# Example Dad Jokes Function Usage:\n", + "jokes = get_dad_jokes(\"cats\")\n", + "print(jokes)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "DC9D5bKEeoKP" + }, + "source": [ + "This function allows the Agents to write to a txt file." + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": { + "id": "wXAA2MtoOS_w" + }, + "outputs": [], + "source": [ + "def write_to_txt(content: str, filename: str = \"dad_jokes.txt\"):\n", + " \"\"\"\n", + " Writes a formatted string to a text file.\n", + "\n", + " Parameters:\n", + " - content: The formatted string to write.\n", + " - filename: The name of the file to write to. Defaults to \"dad_jokes.txt\".\n", + " \"\"\"\n", + " with open(filename, \"w\") as file:\n", + " file.write(content)" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": { + "id": "xAgcFXEHOfcl" + }, + "outputs": [], + "source": [ + "# Example Write to TXT Function Usage:\n", + "content = \"\\n\".join(jokes) # Format the jokes from the above example\n", + "write_to_txt(content)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Create Function Schemas\n", + "In order to use the functions within our GPTAssistantAgents, we need to generate function schemas. This can be done by using `get_function_schema`." + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [], + "source": [ + "# Assistant API Tool Schema for get_dad_jokes\n", + "get_dad_jokes_schema = get_function_schema(\n", + " get_dad_jokes,\n", + " name=\"get_dad_jokes\",\n", + " description=\"Fetches a list of dad jokes based on a search term.
Allows pagination with page and limit parameters.\",\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "The return type of the function 'write_to_txt' is not annotated. Although annotating it is optional, the function should return either a string, a subclass of 'pydantic.BaseModel'.\n" + ] + } + ], + "source": [ + "# Assistant API Tool Schema for write_to_txt\n", + "write_to_txt_schema = get_function_schema(\n", + " "write_to_txt,\n", + " name=\"write_to_txt\",\n", + " description=\"Writes a formatted string to a text file. If the file does not exist, it will be created. If the file does exist, it will be overwritten.\",\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "sgpx2JQme2kv" + }, + "source": [ + "## Creating the Agents\n", + "In this section, we create and configure our Dad and Sad Joker agents." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "6X40-Sk6Pcs8" + }, + "source": [ + "### Set up the User Proxy" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": { + "id": "mEpxEaPdPSDp" + }, + "outputs": [], + "source": [ + "user_proxy = UserProxyAgent(\n", + " name=\"user_proxy\",\n", + " is_termination_msg=lambda msg: \"TERMINATE\" in msg[\"content\"],\n", + " human_input_mode=\"NEVER\",\n", + " max_consecutive_auto_reply=1,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "q4ym9KlMPenf" + }, + "source": [ + "### The Dad Agent\n", + "We create the Dad agent using `GPTAssistantAgent`. To enable the Dad to use the `get_dad_jokes` function, we need to provide it with the function's specification in our `llm_config`.\n", + "\n", + "We format the `tools` within our `llm_config` in the same format as provided in the [OpenAI Assistant tools docs](https://platform.openai.com/docs/assistants/tools/function-calling)."
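The schemas generated by `get_function_schema` follow the OpenAI Assistant `tools` format. For reference, here is a hand-written sketch of the `get_dad_jokes` entry, reconstructed from the assistant configuration the notebook prints when the agent is created (illustrative, not necessarily the exact library output):

```python
# Hand-written sketch of an Assistant function-tool schema, matching the shape
# that get_function_schema produces for get_dad_jokes (reconstructed from the
# assistant dump printed later in this notebook).
get_dad_jokes_schema = {
    "type": "function",
    "function": {
        "name": "get_dad_jokes",
        "description": "Fetches a list of dad jokes based on a search term. "
        "Allows pagination with page and limit parameters.",
        "parameters": {
            "type": "object",
            "properties": {
                "search_term": {"type": "string", "description": "search_term"},
                "page": {"type": "integer", "default": 1, "description": "page"},
                "limit": {"type": "integer", "default": 10, "description": "limit"},
            },
            "required": ["search_term"],
        },
    },
}
```

Only `search_term` is required; `page` and `limit` carry defaults, so the assistant may omit them when calling the tool.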
+ ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": { + "id": "kz0c_tVIPgi6" + }, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "OpenAI client config of GPTAssistantAgent(the_dad) - model: gpt-4-1106-preview\n", + "Matching assistant found, using the first matching assistant: {'id': 'asst_BLBUwYPugb1UR2jQMGAA7RtU', 'created_at': 1714660644, 'description': None, 'file_ids': [], 'instructions': \"\\n As 'The Dad', your primary role is to entertain by fetching dad jokes which the sad joker will transform into 'sad jokes' based on a given theme. When provided with a theme, such as 'plants' or 'animals', your task is as follows:\\n\\n 1. Use the 'get_dad_jokes' function to search for dad jokes related to the provided theme by providing a search term related to the theme. Fetch a list of jokes that are relevant to the theme.\\n 2. Present these jokes to the sad joker in a format that is clear and easy to read, preparing them for transformation.\\n\\n Remember, the team's goal is to creatively adapt the essence of each dad joke to fit the 'sad joke' format, all while staying true to the theme provided by the user.\\n \", 'metadata': {}, 'model': 'gpt-4-1106-preview', 'name': 'the_dad', 'object': 'assistant', 'tools': [ToolFunction(function=FunctionDefinition(name='get_dad_jokes', description='Fetches a list of dad jokes based on a search term. 
Allows pagination with page and limit parameters.', parameters={'type': 'object', 'properties': {'search_term': {'type': 'string', 'description': 'search_term'}, 'page': {'type': 'integer', 'default': 1, 'description': 'page'}, 'limit': {'type': 'integer', 'default': 10, 'description': 'limit'}}, 'required': ['search_term']}), type='function')]}\n" + ] + } + ], + "source": [ + "the_dad = GPTAssistantAgent(\n", + " name=\"the_dad\",\n", + " instructions=\"\"\"\n", + " As 'The Dad', your primary role is to entertain by fetching dad jokes which the sad joker will transform into 'sad jokes' based on a given theme. When provided with a theme, such as 'plants' or 'animals', your task is as follows:\n", + "\n", + " 1. Use the 'get_dad_jokes' function to search for dad jokes related to the provided theme by providing a search term related to the theme. Fetch a list of jokes that are relevant to the theme.\n", + " 2. Present these jokes to the sad joker in a format that is clear and easy to read, preparing them for transformation.\n", + "\n", + " Remember, the team's goal is to creatively adapt the essence of each dad joke to fit the 'sad joke' format, all while staying true to the theme provided by the user.\n", + " \"\"\",\n", + " overwrite_instructions=True, # overwrite any existing instructions with the ones provided\n", + " overwrite_tools=True, # overwrite any existing tools with the ones provided\n", + " llm_config={\n", + " \"config_list\": config_list,\n", + " \"tools\": [get_dad_jokes_schema],\n", + " },\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next, we register the `get_dad_jokes` function with the Dad `GPTAssistantAgent`" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": {}, + "outputs": [], + "source": [ + "# Register get_dad_jokes with the_dad GPTAssistantAgent\n", + "the_dad.register_function(\n", + " function_map={\n", + " \"get_dad_jokes\": get_dad_jokes,\n", + " },\n", + ")" + ] + }, + { 
+ "cell_type": "markdown", + "metadata": { + "id": "cpv2yiyqRWl2" + }, + "source": [ + "### The Sad Joker Agent\n", + "We then create and configure the Sad Joker agent in a similar manner to the Dad agent above." + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": { + "id": "vghN1WwLRXtW" + }, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "OpenAI client config of GPTAssistantAgent(the_sad_joker) - model: gpt-4-1106-preview\n", + "Matching assistant found, using the first matching assistant: {'id': 'asst_HzB75gkobafXZhkuIAmiBiai', 'created_at': 1714660668, 'description': None, 'file_ids': [], 'instructions': \"\\n As 'The Sad Joker', your unique role is to take dad jokes and creatively transform them into 'sad jokes'. When you receive a list of dad jokes, themed around topics like 'plants' or 'animals', you should:\\n\\n 1. Read through each dad joke carefully, understanding its theme and punchline.\\n 2. Creatively alter the joke to change its mood from humorous to somber or melancholic. This may involve tweaking the punchline, modifying the setup, or even completely reimagining the joke while keeping it relevant to the original theme.\\n 3. Ensure your transformations maintain a clear connection to the original theme and are understandable as adaptations of the dad jokes provided.\\n 4. Write your transformed sad jokes to a text file using the 'write_to_txt' function. Use meaningful file names that reflect the theme or the nature of the jokes within, unless a specific filename is requested.\\n\\n Your goal is not just to alter the mood of the jokes but to do so in a way that is creative, thoughtful, and respects the essence of the original humor. 
Remember, while the themes might be light-hearted, your transformations should offer a melancholic twist that makes them uniquely 'sad jokes'.\\n \", 'metadata': {}, 'model': 'gpt-4-1106-preview', 'name': 'the_sad_joker', 'object': 'assistant', 'tools': [ToolFunction(function=FunctionDefinition(name='write_to_txt', description='Writes a formatted string to a text file. If the file does not exist, it will be created. If the file does exist, it will be overwritten.', parameters={'type': 'object', 'properties': {'content': {'type': 'string', 'description': 'content'}, 'filename': {'type': 'string', 'default': 'dad_jokes.txt', 'description': 'filename'}}, 'required': ['content']}), type='function')]}\n" + ] + } + ], + "source": [ + "the_sad_joker = GPTAssistantAgent(\n", + " name=\"the_sad_joker\",\n", + " instructions=\"\"\"\n", + " As 'The Sad Joker', your unique role is to take dad jokes and creatively transform them into 'sad jokes'. When you receive a list of dad jokes, themed around topics like 'plants' or 'animals', you should:\n", + "\n", + " 1. Read through each dad joke carefully, understanding its theme and punchline.\n", + " 2. Creatively alter the joke to change its mood from humorous to somber or melancholic. This may involve tweaking the punchline, modifying the setup, or even completely reimagining the joke while keeping it relevant to the original theme.\n", + " 3. Ensure your transformations maintain a clear connection to the original theme and are understandable as adaptations of the dad jokes provided.\n", + " 4. Write your transformed sad jokes to a text file using the 'write_to_txt' function. Use meaningful file names that reflect the theme or the nature of the jokes within, unless a specific filename is requested.\n", + "\n", + " Your goal is not just to alter the mood of the jokes but to do so in a way that is creative, thoughtful, and respects the essence of the original humor. 
Remember, while the themes might be light-hearted, your transformations should offer a melancholic twist that makes them uniquely 'sad jokes'.\n", + " \"\"\",\n", + " overwrite_instructions=True, # overwrite any existing instructions with the ones provided\n", + " overwrite_tools=True, # overwrite any existing tools with the ones provided\n", + " llm_config={\n", + " \"config_list\": config_list,\n", + " \"tools\": [write_to_txt_schema],\n", + " },\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Register the `write_to_txt` function with the Sad Joker `GPTAssistantAgent`" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": {}, + "outputs": [], + "source": [ + "# Register write_to_txt with the_sad_joker GPTAssistantAgent\n", + "the_sad_joker.register_function(\n", + " function_map={\n", + " \"write_to_txt\": write_to_txt,\n", + " },\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "9GBELjFBgjju" + }, + "source": [ + "## Creating the Groupchat and Starting the Conversation" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "9mT3c0k8SX8i" + }, + "source": [ + "Create the groupchat" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": { + "id": "A3LG3TsNSZmO" + }, + "outputs": [], + "source": [ + "groupchat = autogen.GroupChat(agents=[user_proxy, the_dad, the_sad_joker], messages=[], max_round=15)\n", + "group_chat_manager = autogen.GroupChatManager(groupchat=groupchat, llm_config={\"config_list\": config_list})" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "MT7GbnB9Spji" + }, + "source": [ + "Start the Conversation" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": { + "id": "1m6pe5RNSmEy" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[33muser_proxy\u001b[0m (to chat_manager):\n", + "\n", + "Jokes about cats\n", + "\n", +
"--------------------------------------------------------------------------------\n", + "\u001b[35m\n", + ">>>>>>>> EXECUTING FUNCTION get_dad_jokes...\u001b[0m\n", + "\u001b[33mthe_dad\u001b[0m (to chat_manager):\n", + "\n", + "Here are some cat-themed dad jokes for the sad joker to transform:\n", + "\n", + "1. Where do cats write notes? Scratch Paper!\n", + "2. It was raining cats and dogs the other day. I almost stepped in a poodle.\n", + "3. What do you call a group of disorganized cats? A cat-tastrophe.\n", + "4. I accidentally took my cat's meds last night. Don’t ask meow.\n", + "5. What do you call a pile of cats? A Meowtain.\n", + "6. Animal Fact #25: Most bobcats are not named Bob.\n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[35m\n", + ">>>>>>>> EXECUTING FUNCTION write_to_txt...\u001b[0m\n", + "\u001b[33mthe_sad_joker\u001b[0m (to chat_manager):\n", + "\n", + "The cat-themed sad jokes have been transformed and saved to a text file named \"sad_cat_jokes.txt\".\n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33muser_proxy\u001b[0m (to chat_manager):\n", + "\n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n" + ] + }, + { + "data": { + "text/plain": [ + "ChatResult(chat_id=None, chat_history=[{'content': 'Jokes about cats', 'role': 'assistant'}, {'content': \"Here are some cat-themed dad jokes for the sad joker to transform:\\n\\n1. Where do cats write notes? Scratch Paper!\\n2. It was raining cats and dogs the other day. I almost stepped in a poodle.\\n3. What do you call a group of disorganized cats? A cat-tastrophe.\\n4. I accidentally took my cat's meds last night. Don’t ask meow.\\n5. What do you call a pile of cats? A Meowtain.\\n6. 
Animal Fact #25: Most bobcats are not named Bob.\\n\", 'name': 'the_dad', 'role': 'user'}, {'content': 'The cat-themed sad jokes have been transformed and saved to a text file named \"sad_cat_jokes.txt\".\\n', 'name': 'the_sad_joker', 'role': 'user'}, {'content': '', 'role': 'assistant'}], summary='', cost=({'total_cost': 0.0278, 'gpt-4-1106-preview': {'cost': 0.0278, 'prompt_tokens': 2744, 'completion_tokens': 12, 'total_tokens': 2756}}, {'total_cost': 0.02194, 'gpt-4-1106-preview': {'cost': 0.02194, 'prompt_tokens': 2167, 'completion_tokens': 9, 'total_tokens': 2176}}), human_input=[])" + ] + }, + "execution_count": 15, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "user_proxy.initiate_chat(group_chat_manager, message=\"Jokes about cats\")" + ] + } + ], + "metadata": { + "colab": { + "provenance": [] + }, + "front_matter": { + "description": "This comprehensive example demonstrates the use of tools in a GPTAssistantAgent Multi-Agent System by utilizing functions such as calling an API and writing to a file.", + "tags": [ + "open ai assistant", + "gpt assistant", + "tool use" + ] + }, + "kernelspec": { + "display_name": "Python 3", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} diff --git a/test/agentchat/test_conversable_agent.py b/test/agentchat/test_conversable_agent.py index b57dcf1b597b..b81a897b47cf 100755 --- a/test/agentchat/test_conversable_agent.py +++ b/test/agentchat/test_conversable_agent.py @@ -1382,6 +1382,27 @@ def bob_initiate_chat(agent: ConversableAgent, text: Literal["past", "future"]): assert bob.chat_messages[charlie][-2]["content"] == "This is bob from the future speaking." 
+def test_http_client(): + + import httpx + + with pytest.raises(TypeError): + config_list = [ + { + "model": "my-gpt-4-deployment", + "api_key": "", + "http_client": httpx.Client(), + } + ] + + autogen.ConversableAgent( + "test_agent", + human_input_mode="NEVER", + llm_config={"config_list": config_list}, + default_auto_reply="This is alice speaking.", + ) + + if __name__ == "__main__": # test_trigger() # test_context() diff --git a/website/docs/Examples.md b/website/docs/Examples.md index 7c2a18a553a2..45c16de45715 100644 --- a/website/docs/Examples.md +++ b/website/docs/Examples.md @@ -78,6 +78,7 @@ Links to notebook examples: - Chat with OpenAI Assistant with Code Interpreter - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_code_interpreter.ipynb) - Chat with OpenAI Assistant with Retrieval Augmentation - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_retrieval.ipynb) - OpenAI Assistant in a Group Chat - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_groupchat.ipynb) +- GPTAssistantAgent based Multi-Agent Tool Use - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/gpt_assistant_agent_function_call.ipynb) ### Multimodal Agent diff --git a/website/docs/FAQ.mdx b/website/docs/FAQ.mdx index ee5331f4594b..6baa09768a63 100644 --- a/website/docs/FAQ.mdx +++ b/website/docs/FAQ.mdx @@ -268,3 +268,12 @@ Migrating enhances flexibility, modularity, and customization in handling chat m ### How to migrate? To ensure a smooth migration process, simply follow the detailed guide provided in [Handling Long Context Conversations with Transform Messages](/docs/topics/long_contexts.md). + +### What should I do if I get the error "TypeError: Assistants.create() got an unexpected keyword argument 'file_ids'"? 
+ +This error typically occurs when using an AutoGen version earlier than 0.2.27 in combination with OpenAI library version 1.21 or later. It arises because older versions of AutoGen do not support the `file_ids` parameter used by newer versions of the OpenAI API. +To resolve this issue, upgrade AutoGen to version 0.2.27 or higher, which ensures compatibility with the OpenAI library: + +```bash +pip install --upgrade pyautogen +``` diff --git a/website/docs/topics/llm_configuration.ipynb b/website/docs/topics/llm_configuration.ipynb index 073ec686b2cb..51abf1f46225 100644 --- a/website/docs/topics/llm_configuration.ipynb +++ b/website/docs/topics/llm_configuration.ipynb @@ -254,6 +254,44 @@ "assert len(config_list) == 1" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Adding http client in llm_config for proxy\n", + "\n", + "In AutoGen, a deepcopy is performed on `llm_config` to ensure that the `llm_config` passed by the user is not modified internally. You may get an error if `llm_config` contains objects of a class that does not support deepcopy. To fix this, implement a `__deepcopy__` method for that class.\n", + "\n", + "The example below shows how to implement a `__deepcopy__` method for an http client and add a proxy."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#!pip install httpx\n", + "import httpx\n", + "\n", + "\n", + "class MyHttpClient(httpx.Client):\n", + " def __deepcopy__(self, memo):\n", + " return self\n", + "\n", + "config_list = [\n", + " {\n", + " \"model\": \"my-gpt-4-deployment\",\n", + " \"api_key\": \"\",\n", + " \"http_client\": MyHttpClient(proxy=\"http://localhost:8030\"),\n", + " }\n", + "]\n", + "\n", + "llm_config = {\n", + " \"config_list\": config_list,\n", + "}" + ] + }, { "cell_type": "markdown", "metadata": {}, diff --git a/website/docs/topics/openai-assistant/_category_.json b/website/docs/topics/openai-assistant/_category_.json new file mode 100644 index 000000000000..146faedf94c3 --- /dev/null +++ b/website/docs/topics/openai-assistant/_category_.json @@ -0,0 +1,5 @@ +{ + "position": 2, + "label": "OpenAI Assistant", + "collapsible": true +} diff --git a/website/docs/topics/openai-assistant/gpt_assistant_agent.md b/website/docs/topics/openai-assistant/gpt_assistant_agent.md new file mode 100644 index 000000000000..4e358fcab16f --- /dev/null +++ b/website/docs/topics/openai-assistant/gpt_assistant_agent.md @@ -0,0 +1,181 @@ +# Agent Backed by OpenAI Assistant API + +The GPTAssistantAgent is a powerful component of the AutoGen framework, utilizing OpenAI's Assistant API to enhance agents with advanced capabilities. This agent enables the integration of multiple tools such as the Code Interpreter, File Search, and Function Calling, allowing for a highly customizable and dynamic interaction model. + +Version Requirements: + +- AutoGen: Version 0.2.27 or higher. +- OpenAI: Version 1.21 or higher. 
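The version requirements above can be checked programmatically before running the examples; a minimal sketch (assuming the distribution names are `pyautogen` and `openai` — adjust if your environment differs):

```python
from importlib.metadata import version

# Parse a dotted version string into a comparable tuple, e.g. "0.2.27" -> (0, 2, 27).
def vtuple(v: str) -> tuple:
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

# Minimum versions stated above.
required = {"pyautogen": (0, 2, 27), "openai": (1, 21, 0)}

for pkg, minimum in required.items():
    try:
        installed = vtuple(version(pkg))
        print(pkg, "OK" if installed >= minimum else f"needs >= {minimum}")
    except Exception:
        print(pkg, "not installed")
```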
+ +Key Features of the GPTAssistantAgent: + +- Multi-Tool Mastery: Agents can leverage a combination of OpenAI's built-in tools, like [Code Interpreter](https://platform.openai.com/docs/assistants/tools/code-interpreter) and [File Search](https://platform.openai.com/docs/assistants/tools/file-search), alongside custom tools you create or integrate via [Function Calling](https://platform.openai.com/docs/assistants/tools/function-calling). + +- Streamlined Conversation Management: Benefit from persistent threads that automatically store message history and adjust based on the model's context length. This simplifies development by allowing you to focus on adding new messages rather than managing conversation flow. + +- File Access and Integration: Enable agents to access and utilize files in various formats. Files can be incorporated during agent creation or throughout conversations via threads. Additionally, agents can generate files (e.g., images, spreadsheets) and cite referenced files within their responses. + +For a practical illustration, here are some examples: + +- [Chat with OpenAI Assistant using function call](/docs/notebooks/agentchat_oai_assistant_function_call) demonstrates how to leverage function calling to enable intelligent function selection. +- [GPTAssistant with Code Interpreter](/docs/notebooks/agentchat_oai_code_interpreter) showcases the integration of the Code Interpreter tool which executes Python code dynamically within applications. +- [Group Chat with GPTAssistantAgent](/docs/notebooks/agentchat_oai_assistant_groupchat) demonstrates how to use the GPTAssistantAgent in AutoGen's group chat mode, enabling collaborative task performance through automated chat with agents powered by LLMs, tools, or humans. 
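Conceptually, the tool use shown in these examples reduces to dispatching a model-chosen function name plus JSON-encoded arguments to a local callable — this is what registering functions with the agent wires up. A dependency-free sketch of that dispatch step, with a hypothetical stand-in function map:

```python
import json

# Local implementations, keyed by the name the Assistant knows them by.
# The function here is a hypothetical stand-in for illustration only.
function_map = {
    "get_dad_jokes": lambda term: f"jokes about {term}",
}

# A tool call as the model would emit it: a name plus JSON-encoded arguments.
tool_call = {"name": "get_dad_jokes", "arguments": json.dumps({"term": "cats"})}

# Dispatch: look up the callable and apply the decoded arguments.
func = function_map[tool_call["name"]]
result = func(**json.loads(tool_call["arguments"]))
print(result)  # -> jokes about cats
```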
+ +## Create an OpenAI Assistant in AutoGen + +```python +import os + +from autogen import config_list_from_json +from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent + +assistant_id = os.environ.get("ASSISTANT_ID", None) +config_list = config_list_from_json("OAI_CONFIG_LIST") +llm_config = { + "config_list": config_list, +} +assistant_config = { + # define the openai assistant behavior as you need +} +oai_agent = GPTAssistantAgent( + name="oai_agent", + instructions="I'm an openai assistant running in autogen", + llm_config=llm_config, + assistant_config=assistant_config, +) +``` + +## Use OpenAI Assistant Built-in Tools and Function Calling + +### Code Interpreter + +The [Code Interpreter](https://platform.openai.com/docs/assistants/tools/code-interpreter) empowers your agents to write and execute Python code in a secure environment provided by OpenAI. This unlocks several capabilities, including but not limited to: + +- Process data: Handle various data formats and manipulate data on the fly. +- Generate outputs: Create new data files or even visualizations like graphs. +- ... + +Use the Code Interpreter with the following configuration: +```python +assistant_config = { + "tools": [ + {"type": "code_interpreter"}, + ], + "tool_resources": { + "code_interpreter": { + "file_ids": ["$file.id"] # optional. Files that are passed at the Assistant level are accessible by all Runs with this Assistant. + } + } +} +``` + +To get the `file.id`, you can employ two methods: + +1. OpenAI Playground: Leverage the OpenAI Playground, an interactive platform accessible at https://platform.openai.com/playground, to upload your files and obtain the corresponding file IDs. + +2. 
Code-Based Uploading: Alternatively, you can upload files and retrieve their file IDs programmatically using the following code snippet: + + ```python + from openai import OpenAI + client = OpenAI( + # Defaults to os.environ.get("OPENAI_API_KEY") + ) + # Upload a file with an "assistants" purpose + file = client.files.create( + file=open("mydata.csv", "rb"), + purpose='assistants' + ) + ``` + +### File Search + +The [File Search](https://platform.openai.com/docs/assistants/tools/file-search) tool empowers your agents to tap into knowledge beyond their pre-trained models. This allows you to incorporate your own documents and data, such as product information or code files, into your agent's capabilities. + +Use File Search with the following configuration: + +```python +assistant_config = { + "tools": [ + {"type": "file_search"}, + ], + "tool_resources": { + "file_search": { + "vector_store_ids": ["$vector_store.id"] + } + } +} +``` + +Here's how to obtain the `vector_store.id` using two methods: + +1. OpenAI Playground: Leverage the OpenAI Playground, an interactive platform accessible at https://platform.openai.com/playground, to create a vector store, upload your files, and add them to the vector store. Once complete, you'll be able to retrieve the associated `vector_store.id`. + +2. 
Code-Based Uploading: Alternatively, you can create a vector store and upload files programmatically using the following code snippet: + + ```python + from openai import OpenAI + client = OpenAI( + # Defaults to os.environ.get("OPENAI_API_KEY") + ) + + # Step 1: Create a Vector Store + vector_store = client.beta.vector_stores.create(name="Financial Statements") + print("Vector Store created:", vector_store.id) # This is your vector_store.id + + # Step 2: Prepare Files for Upload + file_paths = ["edgar/goog-10k.pdf", "edgar/brka-10k.txt"] + file_streams = [open(path, "rb") for path in file_paths] + + # Step 3: Upload Files and Add to Vector Store (with status polling) + file_batch = client.beta.vector_stores.file_batches.upload_and_poll( + vector_store_id=vector_store.id, files=file_streams + ) + + # Step 4: Verify Completion (Optional) + print("File batch status:", file_batch.status) + print("Uploaded file count:", file_batch.file_counts.completed) + ``` + +### Function calling + +Function Calling empowers you to extend your agents with pre-defined functions: you describe custom functions to the Assistant, enabling intelligent function selection and argument generation. + +Use Function Calling with the following configuration: + +```python +# learn more from https://platform.openai.com/docs/guides/function-calling/function-calling +from autogen.function_utils import get_function_schema + +def get_current_weather(location: str) -> dict: + """ + Retrieves the current weather for a specified location. + + Args: + location (str): The location to get the weather for. + + Returns: + dict: A dictionary with weather details. 
+ """ + + # Simulated response + return { + "location": location, + "temperature": 22.5, + "description": "Partly cloudy" + } + +api_schema = get_function_schema( + get_current_weather, + name=get_current_weather.__name__, + description="Returns the current weather data for a specified location." +) + +assistant_config = { + "tools": [ + { + "type": "function", + "function": api_schema, + } + ], +} +```
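For reference, the inner schema that `get_function_schema` produces follows OpenAI's function-calling format; a hand-written equivalent for the weather function above (descriptions here are illustrative, not the exact generated output) looks like:

```python
import json

# Hand-written schema in OpenAI's function-calling format; the exact
# descriptions get_function_schema emits may differ from these.
api_schema = {
    "name": "get_current_weather",
    "description": "Returns the current weather data for a specified location.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The location to get the weather for.",
            },
        },
        "required": ["location"],
    },
}

assistant_config = {"tools": [{"type": "function", "function": api_schema}]}
print(json.dumps(assistant_config, indent=2))
```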