diff --git a/cookbook/integration_langgraph.ipynb b/cookbook/integration_langgraph.ipynb index 0d77623d3..0e98f0acc 100644 --- a/cookbook/integration_langgraph.ipynb +++ b/cookbook/integration_langgraph.ipynb @@ -1,973 +1,1026 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "id": "MPJsVa1O4TuL" - }, - "source": [ - "---\n", - "title: Open Source Observability for LangGraph\n", - "description: Learn how to use Langfuse for open source observability/tracing in your LangGraph application (Python).\n", - "category: Integrations\n", - "---\n", - "\n", - "# Cookbook: LangGraph Integration\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "YlCI9KeX4Zn4" - }, - "source": [ - "## What is LangGraph?\n", - "\n", - "[LangGraph](https://langchain-ai.github.io/langgraph/) is an open-source framework by the LangChain team for building complex, stateful, multi-agent applications using large language models (LLMs). LangGraph includes built-in persistence to save and resume state, which enables error recovery and human-in-the-loop workflows." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "3o8L1qPcaZeC" - }, - "source": [ - "## Goal of this Cookbook\n", - "\n", - "This cookbook demonstrates how [Langfuse](https://langfuse.com/docs) helps to debug, analyze, and iterate on your LangGraph application using the [LangChain integration](https://langfuse.com/docs/integrations/langchain/tracing)." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "gPTaMtxH4eHV" - }, - "source": [ - "**By the end of this cookbook, you will be able to:**\n", - "\n", - "\n", - "* Automatically trace LangGraph application via the Langfuse integration\n", - "* Monitor advanced multi-agent setups\n", - "* Add scores (like user feedback)\n", - "* Manage your prompts used in LangGraph with Langfuse\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "0sSIS88y9Ewm" - }, - "source": [ - "## Initialize Langfuse\n", - "\n", - "**Note:** You need to run at least Python 3.11 ([GitHub Issue](https://github.com/langfuse/langfuse/issues/1926)).\n", - "\n", - "Initialize the Langfuse client with your [API keys](https://langfuse.com/faq/all/where-are-langfuse-api-keys) from the project settings in the Langfuse UI and add them to your environment." 
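+ { + "metadata": {}, + "cell_type": "markdown", + "source": [ + "As an optional sanity check, the following minimal snippet (a small convenience sketch, not required for the integration) verifies that the runtime meets the Python 3.11 requirement mentioned above:" + ] + },
+ { + "metadata": {}, + "cell_type": "code", + "outputs": [], + "execution_count": null, + "source": [ + "import sys\n", + "\n", + "# The integration requires Python 3.11 or newer (see the note above)\n", + "assert sys.version_info >= (3, 11), f\"Python 3.11+ is required, found {sys.version.split()[0]}\"" + ] + },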
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "collapsed": true, - "id": "C85BK1vJ5yD3", - "outputId": "73f44b09-ae33-4bd0-8e92-9c1bfc8a1e7c" - }, - "outputs": [], - "source": [ - "%pip install langfuse\n", - "%pip install langchain langgraph langchain_openai langchain_community" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": { - "id": "S1yglQ464VD-" - }, - "outputs": [], - "source": [ - "import os\n", - "\n", - "# get keys for your project from https://cloud.langfuse.com\n", - "os.environ[\"LANGFUSE_PUBLIC_KEY\"] = \"pk-lf-***\"\n", - "os.environ[\"LANGFUSE_SECRET_KEY\"] = \"sk-lf-***\"\n", - "os.environ[\"LANGFUSE_HOST\"] = \"https://cloud.langfuse.com\" # for EU data region\n", - "# os.environ[\"LANGFUSE_HOST\"] = \"https://us.cloud.langfuse.com\" # for US data region\n", - "\n", - "# your openai key\n", - "os.environ[\"OPENAI_API_KEY\"] = \"***\"" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "kqYMmi6n9Nh1" - }, - "source": [ - "## Example 1: Simple chat app with LangGraph\n", - "\n", - "**What we will do in this section:**\n", - "\n", - "* Build a support chatbot in LangGraph that can answer common questions\n", - "* Tracing the chatbot's input and output using Langfuse\n", - "\n", - "We will start with a basic chatbot and build a more advanced multi agent setup in the next section, introducing key LangGraph concepts along the way.\n", - "\n", - "### Create Agent\n", - "\n", - "Start by creating a `StateGraph`. A `StateGraph` object defines our chatbot's structure as a state machine. We will add nodes to represent the LLM and functions the chatbot can call, and edges to specify how the bot transitions between these functions." - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "metadata": { - "id": "aGIxgPww6VX6" - }, - "outputs": [], - "source": [ - "from typing import Annotated\n", - "\n", - "from langchain_openai import ChatOpenAI\n", - "from langchain_core.messages import HumanMessage\n", - "from typing_extensions import TypedDict\n", - "\n", - "from langgraph.graph import StateGraph\n", - "from langgraph.graph.message import add_messages\n", - "\n", - "class State(TypedDict):\n", - " # Messages have the type \"list\". The `add_messages` function in the annotation defines how this state key should be updated\n", - " # (in this case, it appends messages to the list, rather than overwriting them)\n", - " messages: Annotated[list, add_messages]\n", - "\n", - "graph_builder = StateGraph(State)\n", - "\n", - "llm = ChatOpenAI(model = \"gpt-4o\", temperature = 0.2)\n", - "\n", - "# The chatbot node function takes the current State as input and returns an updated messages list. This is the basic pattern for all LangGraph node functions.\n", - "def chatbot(state: State):\n", - " return {\"messages\": [llm.invoke(state[\"messages\"])]}\n", - "\n", - "# Add a \"chatbot\" node. Nodes represent units of work. They are typically regular python functions.\n", - "graph_builder.add_node(\"chatbot\", chatbot)\n", - "\n", - "# Add an entry point. This tells our graph where to start its work each time we run it.\n", - "graph_builder.set_entry_point(\"chatbot\")\n", - "\n", - "# Set a finish point. This instructs the graph \"any time this node is run, you can exit.\"\n", - "graph_builder.set_finish_point(\"chatbot\")\n", - "\n", - "# To be able to run our graph, call \"compile()\" on the graph builder. 
This creates a \"CompiledGraph\" we can use invoke on our state.\n", - "graph = graph_builder.compile()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "IW2SJcRgh7Xo" - }, - "source": [ - "### Add Langfuse as callback to the invocation\n", - "\n", - "Now, we will add then [Langfuse callback handler for LangChain](https://langfuse.com/docs/integrations/langchain/tracing) to trace the steps of our application: `config={\"callbacks\": [langfuse_handler]}`" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "8PxEc455-KYM", - "outputId": "0d1a6a04-a024-47b8-d320-72cd25b7aefd" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "{'chatbot': {'messages': [AIMessage(content='Langfuse is a tool designed to help developers monitor and observe the performance of their Large Language Model (LLM) applications. It provides detailed insights into how these applications are functioning, allowing for better debugging, optimization, and overall management. Langfuse offers features such as tracking key metrics, visualizing data, and identifying potential issues in real-time, making it easier for developers to maintain and improve their LLM-based solutions.', response_metadata={'token_usage': {'completion_tokens': 86, 'prompt_tokens': 13, 'total_tokens': 99}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_400f27fa1f', 'finish_reason': 'stop', 'logprobs': None}, id='run-9a0c97cb-ccfe-463e-902c-5a5900b796b4-0', usage_metadata={'input_tokens': 13, 'output_tokens': 86, 'total_tokens': 99})]}}\n" - ] - } - ], - "source": [ - "from langfuse.callback import CallbackHandler\n", - "\n", - "# Initialize Langfuse CallbackHandler for Langchain (tracing)\n", - "langfuse_handler = CallbackHandler()\n", - "\n", - "for s in graph.stream({\"messages\": [HumanMessage(content = \"What is Langfuse?\")]},\n", - " config={\"callbacks\": [langfuse_handler]}):\n", - " print(s)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Fdf3ZRnWGZ0N" - }, - "source": [ - "### View traces in Langfuse\n", - "\n", - "Example trace in Langfuse: https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/d109e148-d188-4d6e-823f-aac0864afbab\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "17Aq7u6_LBR6" - }, - "source": [ - "![Trace view of chat app in Langfuse](https://langfuse.com/images/cookbook/integration-langgraph/integration_langgraph_chatapp_trace.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "v3yyVtGKhMPU" - }, - "source": [ - "### Visualize the chat app\n", - "\n", - "You can visualize the graph using the `get_graph` method along with a \"draw\" method" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/", - "height": 236 - }, - "id": "MKkM6mw47kIy", - "outputId": "9cf8a453-05e0-4193-fc77-81cb176d9ef4" - }, - "outputs": [], - "source": [ - "from IPython.display import Image, display\n", - "display(Image(graph.get_graph().draw_mermaid_png()))" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "yY0HW5xISntw" - }, - "source": [ - "```mermaid\n", - "graph TD;\n", - "\t__start__([__start__]):::first\n", - "\tchatbot(chatbot)\n", - "\t__end__([__end__]):::last\n", - "\t__start__ --> chatbot;\n", - "\tchatbot --> __end__;\n", - "\tclassDef default fill:#f2f0ff,line-height:1.2\n", - "\tclassDef first fill-opacity:0\n", - "\tclassDef last 
fill:#bfb6fc\n", - "```" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "n2W94eY19TR1" - }, - "source": [ - "## Example 2: Multi agent application with LangGraph\n", - "\n", - "**What we will do in this section**:\n", - "\n", - "* Build 2 executing agents: One research agent using the LangChain WikipediaAPIWrapper to search Wikipedia and one that uses a custom tool to get the current time.\n", - "* Build an agent supervisor to help delegate the user questions to one of the two agents\n", - "* Add Langfuse handler as callback to trace the steps of the supervisor and executing agents" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "collapsed": true, - "id": "WfnrswDdjYTV", - "outputId": "0d938cb1-9fd2-4ed3-cfdd-c84a9ad3ed82" - }, - "outputs": [], - "source": [ - "%pip install langgraph langchain langchain_openai langchain_experimental pandas wikipedia" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "tciUQ62IEVec" - }, - "source": [ - "### Create tools\n", - "\n", - "For this example, you build an agent to do wikipedia research, and one agent to tell you the current time. Define the tools they will use below:" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "metadata": { - "id": "Cet0loyp9p-T" - }, - "outputs": [], - "source": [ - "from typing import Annotated\n", - "\n", - "from langchain_community.tools import WikipediaQueryRun\n", - "from langchain_community.utilities import WikipediaAPIWrapper\n", - "from datetime import datetime\n", - "from langchain.tools import Tool\n", - "\n", - "# Define a tools that searches Wikipedia\n", - "wikipedia_tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())\n", - "\n", - "# Define a new tool that returns the current datetime\n", - "datetime_tool = Tool(\n", - " name=\"Datetime\",\n", - " func = lambda x: datetime.now().isoformat(),\n", - " description=\"Returns the current datetime\",\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "31uhDy_mEqr6" - }, - "source": [ - "### Helper utilities\n", - "\n", - "Define a helper function below to simplify adding new agent worker nodes." - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "metadata": { - "id": "75atiExdqd4P" - }, - "outputs": [], - "source": [ - "from langchain.agents import AgentExecutor, create_openai_tools_agent\n", - "from langchain_core.messages import BaseMessage, HumanMessage\n", - "from langchain_openai import ChatOpenAI\n", - "\n", - "def create_agent(llm: ChatOpenAI, system_prompt: str, tools: list):\n", - " # Each worker node will be given a name and some tools.\n", - " prompt = ChatPromptTemplate.from_messages(\n", - " [\n", - " (\n", - " \"system\",\n", - " system_prompt,\n", - " ),\n", - " MessagesPlaceholder(variable_name=\"messages\"),\n", - " MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n", - " ]\n", - " )\n", - " agent = create_openai_tools_agent(llm, tools, prompt)\n", - " executor = AgentExecutor(agent=agent, tools=tools)\n", - " return executor\n", - "\n", - "def agent_node(state, agent, name):\n", - " result = agent.invoke(state)\n", - " return {\"messages\": [HumanMessage(content=result[\"output\"], name=name)]}" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "74bZqwU6FCOa" - }, - "source": [ - "### Create agent supervisor\n", - "\n", - "It will use function calling to choose the next worker node OR finish processing." 
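+ { + "metadata": {}, + "cell_type": "markdown", + "source": [ + "To make the routing concrete: the `route` function definition below constrains the model's reply to one of the allowed options, and `JsonOutputFunctionsParser` turns that reply into a plain dict. A minimal sketch of the kind of value the supervisor chain produces (illustrative values only):" + ] + },
+ { + "metadata": {}, + "cell_type": "code", + "outputs": [], + "execution_count": null, + "source": [ + "# Illustrative only: the supervisor chain returns a dict like this,\n", + "# which the graph's conditional edge later uses to pick the next node.\n", + "routing_decision = {\"next\": \"Researcher\"}  # or {\"next\": \"CurrentTime\"} / {\"next\": \"FINISH\"}\n", + "print(routing_decision[\"next\"])" + ] + },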
- ] - }, - { - "cell_type": "code", - "execution_count": 9, - "metadata": { - "id": "Hu8MzgihrHdF" - }, - "outputs": [], - "source": [ - "from langchain_core.output_parsers.openai_functions import JsonOutputFunctionsParser\n", - "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n", - "\n", - "members = [\"Researcher\", \"CurrentTime\"]\n", - "system_prompt = (\n", - " \"You are a supervisor tasked with managing a conversation between the\"\n", - " \" following workers: {members}. Given the following user request,\"\n", - " \" respond with the worker to act next. Each worker will perform a\"\n", - " \" task and respond with their results and status. When finished,\"\n", - " \" respond with FINISH.\"\n", - ")\n", - "# Our team supervisor is an LLM node. It just picks the next agent to process and decides when the work is completed\n", - "options = [\"FINISH\"] + members\n", - "\n", - "# Using openai function calling can make output parsing easier for us\n", - "function_def = {\n", - " \"name\": \"route\",\n", - " \"description\": \"Select the next role.\",\n", - " \"parameters\": {\n", - " \"title\": \"routeSchema\",\n", - " \"type\": \"object\",\n", - " \"properties\": {\n", - " \"next\": {\n", - " \"title\": \"Next\",\n", - " \"anyOf\": [\n", - " {\"enum\": options},\n", - " ],\n", - " }\n", - " },\n", - " \"required\": [\"next\"],\n", - " },\n", - "}\n", - "\n", - "# Create the prompt using ChatPromptTemplate\n", - "prompt = ChatPromptTemplate.from_messages(\n", - " [\n", - " (\"system\", system_prompt),\n", - " MessagesPlaceholder(variable_name=\"messages\"),\n", - " (\n", - " \"system\",\n", - " \"Given the conversation above, who should act next?\"\n", - " \" Or should we FINISH? Select one of: {options}\",\n", - " ),\n", - " ]\n", - ").partial(options=str(options), members=\", \".join(members))\n", - "\n", - "llm = ChatOpenAI(model=\"gpt-4o\")\n", - "\n", - "# Construction of the chain for the supervisor agent\n", - "supervisor_chain = (\n", - " prompt\n", - " | llm.bind_functions(functions=[function_def], function_call=\"route\")\n", - " | JsonOutputFunctionsParser()\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "ognuMaIeFVh7" - }, - "source": [ - "### Construct graph\n", - "\n", - "Now we are ready to start building the graph. Below, define the state and worker nodes using the function we just defined. Then we connect all the edges in the graph." 
- ] - }, - { - "cell_type": "code", - "execution_count": 10, - "metadata": { - "id": "_LwtCmw_rHVz" - }, - "outputs": [], - "source": [ - "import functools\n", - "import operator\n", - "from typing import Sequence, TypedDict\n", - "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n", - "from langgraph.graph import END, StateGraph, START\n", - "\n", - "# The agent state is the input to each node in the graph\n", - "class AgentState(TypedDict):\n", - " # The annotation tells the graph that new messages will always be added to the current states\n", - " messages: Annotated[Sequence[BaseMessage], operator.add]\n", - " # The 'next' field indicates where to route to next\n", - " next: str\n", - "\n", - "# Add the research agent using the create_agent helper function\n", - "research_agent = create_agent(llm, \"You are a web researcher.\", [wikipedia_tool])\n", - "research_node = functools.partial(agent_node, agent=research_agent, name=\"Researcher\")\n", - "\n", - "# Add the time agent using the create_agent helper function\n", - "currenttime_agent = create_agent(llm, \"You can tell the current time at\", [datetime_tool])\n", - "currenttime_node = functools.partial(agent_node, agent=currenttime_agent, name = \"CurrentTime\")\n", - "\n", - "workflow = StateGraph(AgentState)\n", - "\n", - "# Add a \"chatbot\" node. Nodes represent units of work. They are typically regular python functions.\n", - "workflow.add_node(\"Researcher\", research_node)\n", - "workflow.add_node(\"CurrentTime\", currenttime_node)\n", - "workflow.add_node(\"supervisor\", supervisor_chain)\n", - "\n", - "# We want our workers to ALWAYS \"report back\" to the supervisor when done\n", - "for member in members:\n", - " workflow.add_edge(member, \"supervisor\")\n", - "\n", - "# Conditional edges usually contain \"if\" statements to route to different nodes depending on the current graph state.\n", - "# These functions receive the current graph state and return a string or list of strings indicating which node(s) to call next.\n", - "conditional_map = {k: k for k in members}\n", - "conditional_map[\"FINISH\"] = END\n", - "workflow.add_conditional_edges(\"supervisor\", lambda x: x[\"next\"], conditional_map)\n", - "\n", - "# Add an entry point. This tells our graph where to start its work each time we run it.\n", - "workflow.add_edge(START, \"supervisor\")\n", - "\n", - "# To be able to run our graph, call \"compile()\" on the graph builder. This creates a \"CompiledGraph\" we can use invoke on our state.\n", - "graph_2 = workflow.compile()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "w3xfJLJyFwBG" - }, - "source": [ - "### Add Langfuse as callback to the invocation\n", - "\n", - "Add [Langfuse handler](https://langfuse.com/docs/integrations/langchain/tracing) as callback: `config={\"callbacks\": [langfuse_handler]}`" - ] - }, - { - "cell_type": "code", - "execution_count": 11, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "QsX1gw9kryGP", - "outputId": "65d94f3c-17e7-4ad8-88b5-f837676d206b" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "{'supervisor': {'next': 'Researcher'}}\n", - "----\n", - "{'Researcher': {'messages': [HumanMessage(content=\"Photosynthesis is a biological process by which photosynthetic organisms, such as most plants, algae, and cyanobacteria, convert light energy, usually from sunlight, into chemical energy. 
This energy is stored in the form of organic compounds like sugars, which fuel their metabolism.\\n\\n### Key Points of Photosynthesis:\\n\\n1. **Light Absorption**:\\n - The process begins when light energy is absorbed by reaction centers, which are proteins containing photosynthetic pigments (e.g., chlorophyll in plants).\\n\\n2. **Light-Dependent Reactions**:\\n - In these reactions, light energy is used to strip electrons from substances like water, producing oxygen gas.\\n - The hydrogen from water is used to create NADPH (reduced nicotinamide adenine dinucleotide phosphate) and ATP (adenosine triphosphate).\\n\\n3. **Light-Independent Reactions (Calvin Cycle)**:\\n - These reactions do not require light and occur in the stroma of chloroplasts.\\n - Carbon dioxide is incorporated into organic compounds like ribulose bisphosphate (RuBP).\\n - Using ATP and NADPH from the light-dependent reactions, these compounds are reduced to form carbohydrates such as glucose.\\n\\n### Types of Photosynthesis:\\n- **Oxygenic Photosynthesis**:\\n - Produces oxygen and is performed by plants, algae, and cyanobacteria.\\n- **Anoxygenic Photosynthesis**:\\n - Does not produce oxygen and is performed by some bacteria using substances like hydrogen sulfide instead of water.\\n\\n### Importance of Photosynthesis:\\n- It produces and maintains the oxygen content of the Earth's atmosphere.\\n- Supplies most of the biological energy necessary for complex life.\\n- Captures carbon dioxide from the atmosphere, playing a critical role in climate processes.\\n\\n### Evolution and Discovery:\\n- The first photosynthetic organisms used reducing agents other than water, such as hydrogen or hydrogen sulfide.\\n- Cyanobacteria, which evolved later, contributed to the oxygenation of the Earth.\\n- Photosynthesis was discovered in 1779 by Jan Ingenhousz, who demonstrated that plants need light to perform the process.\\n\\n### Global Impact:\\n- The average rate of energy captured by global photosynthesis is about 130 terawatts.\\n- Photosynthetic organisms convert around 100–115 billion tons of carbon into biomass each year.\\n\\nPhotosynthesis is crucial for life on Earth, providing the oxygen we breathe and the energy base for nearly all ecosystems.\", name='Researcher')]}}\n", - "----\n", - "{'supervisor': {'next': 'FINISH'}}\n", - "----\n" - ] - } - ], - "source": [ - "from langfuse.callback import CallbackHandler\n", - "\n", - "# Initialize Langfuse CallbackHandler for Langchain (tracing)\n", - "langfuse_handler = CallbackHandler()\n", - "\n", - "# Add Langfuse handler as callback: config={\"callbacks\": [langfuse_handler]}\n", - "for s in graph_2.stream({\"messages\": [HumanMessage(content = \"How does photosynthesis work?\")]},\n", - " config={\"callbacks\": [langfuse_handler]}):\n", - " print(s)\n", - " print(\"----\")" - ] - }, - { - "cell_type": "code", - "execution_count": 12, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "AqJnMtP5HDql", - "outputId": "69c3d5d6-d44c-4784-a484-c66e4748b522" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "{'supervisor': {'next': 'CurrentTime'}}\n", - "----\n", - "{'CurrentTime': {'messages': [HumanMessage(content='The current time is 9:34 AM on July 25, 2024.', name='CurrentTime')]}}\n", - "----\n", - "{'supervisor': {'next': 'FINISH'}}\n", - "----\n" - ] - } - ], - "source": [ - "# Add Langfuse handler as callback: config={\"callbacks\": [langfuse_handler]}\n", - "for s in graph_2.stream({\"messages\": 
[HumanMessage(content = \"What time is it?\")]},\n", - " config={\"callbacks\": [langfuse_handler]}):\n", - " print(s)\n", - " print(\"----\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "o4XjtNenH9GF" - }, - "source": [ - "### See traces in Langfuse\n", - "\n", - "Example traces in Langfuse:\n", - "\n", - "1. [How does photosynthesis work?](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/a8b0cc9e-da3b-485f-a642-35431a6f9289)\n", - "2. [What time is it?](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/ee5d5828-e983-4372-8e7f-04dfbe3e19d4)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "_-5EEBZAIbwc" - }, - "source": [ - "![Trace view of multi agent in Langfuse](https://langfuse.com/images/cookbook/integration-langgraph/integration_langgraph_multiagent_traces.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "hCEzabn_jhbf" - }, - "source": [ - "### Visualize the agent\n", - "\n", - "You can visualize the graph using the `get_graph` method along with a \"draw\" method" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/", - "height": 255 - }, - "id": "notlPjnl-HXV", - "outputId": "17d6c6db-92af-4a6e-b1af-61b68e9cc87a" - }, - "outputs": [], - "source": [ - "from IPython.display import Image, display\n", - "display(Image(graph_2.get_graph().draw_mermaid_png()))" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "mESkG2IJS8OY" - }, - "source": [ - "```mermaid\n", - "graph TD;\n", - "\t__start__([__start__]):::first\n", - "\tResearcher(Researcher)\n", - "\tCurrentTime(CurrentTime)\n", - "\tsupervisor(supervisor)\n", - "\t__end__([__end__]):::last\n", - "\tCurrentTime --> supervisor;\n", - "\tResearcher --> supervisor;\n", - "\t__start__ --> supervisor;\n", - "\tsupervisor -.-> Researcher;\n", - "\tsupervisor -.-> CurrentTime;\n", - "\tsupervisor -.  FINISH  .-> __end__;\n", - "\tclassDef default fill:#f2f0ff,line-height:1.2\n", - "\tclassDef first fill-opacity:0\n", - "\tclassDef last fill:#bfb6fc\n", - "```" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "uybP4h8wGvWw" - }, - "source": [ - "## Adding scores to traces as scores\n", - "\n", - "[Scores](https://langfuse.com/docs/scores/overview) are used to evaluate single observations or entire traces. You can create them via our annotation workflow in the Langfuse UI, run model-based evaluation or ingest via the SDK as we do it in this example.\n", - "\n", - "You can attach a score to the current observation context by calling `langfuse_context.score_current_observation`. You can also score the entire trace from anywhere inside the nesting hierarchy by calling `langfuse_context.score_current_trace`.\n", - "\n", - "To get the context of the current observation, we use the [`observe()` decorator](https://langfuse.com/docs/sdk/python/decorators) and apply it to the `main()` function. 
By default it captures:\n", - "\n", - "* nesting via context vars\n", - "* timings/durations\n", - "* function name\n", - "* args and kwargs as input dict\n", - "* returned values as output\n", - "\n", - "The decorator will automatically create a trace for the top-level function and spans for any nested functions.\n" - ] - }, - { - "cell_type": "code", - "execution_count": 28, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "pgAqYnQuGwCL", - "outputId": "11e14766-b25b-44b4-c3d4-d980f3d111cc" - }, - "outputs": [ - { - "data": { - "text/plain": [ - "{'messages': [HumanMessage(content='What time is it?'),\n", - " HumanMessage(content='The current date and time is 2024-07-25T09:54:57.', name='CurrentTime')],\n", - " 'next': 'FINISH'}" - ] - }, - "execution_count": 28, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "from langfuse.decorators import langfuse_context, observe\n", - "\n", - "# Langfuse observe() decorator to automatically create a trace for the top-level function and spans for any nested functions.\n", - "@observe()\n", - "def research_agent(user_message):\n", - " # Get callback handler scoped to this observed function\n", - " lf_handler = langfuse_context.get_current_langchain_handler()\n", - "\n", - " # Trace langchain run via the Langfuse CallbackHandler\n", - " response = graph_2.invoke({\"messages\": [HumanMessage(content=user_message)]},\n", - " config={\"callbacks\": [lf_handler]})\n", - "\n", - " # Score the entire trace e.g. to add user feedback\n", - " langfuse_context.score_current_trace(\n", - " name = \"user-explicit-feedback\",\n", - " value = 1,\n", - " comment = \"The time is correct!\"\n", - " )\n", - "\n", - " return response\n", - "research_agent(\"What time is it?\")" - ] + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "MPJsVa1O4TuL" + }, + "source": [ + "---\n", + "title: Open Source Observability for LangGraph\n", + "description: Learn how to use Langfuse for open source observability/tracing in your LangGraph application (Python).\n", + "category: Integrations\n", + "---\n", + "\n", + "# Cookbook: LangGraph Integration\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "YlCI9KeX4Zn4" + }, + "source": [ + "## What is LangGraph?\n", + "\n", + "[LangGraph](https://langchain-ai.github.io/langgraph/) is an open-source framework by the LangChain team for building complex, stateful, multi-agent applications using large language models (LLMs). LangGraph includes built-in persistence to save and resume state, which enables error recovery and human-in-the-loop workflows." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "3o8L1qPcaZeC" + }, + "source": [ + "## Goal of this Cookbook\n", + "\n", + "This cookbook demonstrates how [Langfuse](https://langfuse.com/docs) helps to debug, analyze, and iterate on your LangGraph application using the [LangChain integration](https://langfuse.com/docs/integrations/langchain/tracing)." 
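+ ] + }, + { + "metadata": {}, + "cell_type": "markdown", + "source": [ + "At its core, the integration used throughout this cookbook is a single pattern: create a Langfuse `CallbackHandler` and pass it via `config={\"callbacks\": [...]}` to any LangGraph invocation. A quick preview sketch (the runnable setup, including API keys, follows in the next sections):\n", + "\n", + "```python\n", + "from langfuse.callback import CallbackHandler\n", + "\n", + "langfuse_handler = CallbackHandler()  # reads the LANGFUSE_* environment variables\n", + "graph.invoke({\"messages\": [...]}, config={\"callbacks\": [langfuse_handler]})\n", + "```"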
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "gPTaMtxH4eHV" + }, + "source": [ + "**By the end of this cookbook, you will be able to:**\n", + "\n", + "\n", + "* Automatically trace LangGraph application via the Langfuse integration\n", + "* Monitor advanced multi-agent setups\n", + "* Add scores (like user feedback)\n", + "* Manage your prompts used in LangGraph with Langfuse\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "0sSIS88y9Ewm" + }, + "source": [ + "## Initialize Langfuse\n", + "\n", + "**Note:** You need to run at least Python 3.11 ([GitHub Issue](https://github.com/langfuse/langfuse/issues/1926)).\n", + "\n", + "Initialize the Langfuse client with your [API keys](https://langfuse.com/faq/all/where-are-langfuse-api-keys) from the project settings in the Langfuse UI and add them to your environment." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" }, - { - "cell_type": "markdown", - "metadata": { - "id": "cq_DeCcXSxwq" - }, - "source": [ - "### View trace with score in Langfuse\n", - "\n", - "Example trace: https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/23338c52-350d-4efb-89ca-82d759828b1d\n", - "\n", - "![Trace view including added score](https://langfuse.com/images/cookbook/integration-langgraph/integration_langgraph_score.png)" - ] + "collapsed": true, + "id": "C85BK1vJ5yD3", + "outputId": "73f44b09-ae33-4bd0-8e92-9c1bfc8a1e7c" + }, + "outputs": [], + "source": [ + "%pip install langfuse\n", + "%pip install langchain langgraph langchain_openai langchain_community" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": { + "id": "S1yglQ464VD-" + }, + "outputs": [], + "source": [ + "import os\n", + "\n", + "# get keys for your project from https://cloud.langfuse.com\n", + "os.environ[\"LANGFUSE_PUBLIC_KEY\"] = \"pk-lf-***\"\n", + "os.environ[\"LANGFUSE_SECRET_KEY\"] = \"sk-lf-***\"\n", + "os.environ[\"LANGFUSE_HOST\"] = \"https://cloud.langfuse.com\" # for EU data region\n", + "# os.environ[\"LANGFUSE_HOST\"] = \"https://us.cloud.langfuse.com\" # for US data region\n", + "\n", + "# your openai key\n", + "os.environ[\"OPENAI_API_KEY\"] = \"***\"" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "kqYMmi6n9Nh1" + }, + "source": [ + "## Example 1: Simple chat app with LangGraph\n", + "\n", + "**What we will do in this section:**\n", + "\n", + "* Build a support chatbot in LangGraph that can answer common questions\n", + "* Tracing the chatbot's input and output using Langfuse\n", + "\n", + "We will start with a basic chatbot and build a more advanced multi agent setup in the next section, introducing key LangGraph concepts along the way.\n", + "\n", + "### Create Agent\n", + "\n", + "Start by creating a `StateGraph`. A `StateGraph` object defines our chatbot's structure as a state machine. We will add nodes to represent the LLM and functions the chatbot can call, and edges to specify how the bot transitions between these functions." 
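+ ] + }, + { + "metadata": {}, + "cell_type": "markdown", + "source": [ + "Before assembling the graph, it may help to see what the `add_messages` reducer in the `State` annotation does: it appends a node's message update to the existing list instead of overwriting it. A minimal, self-contained sketch with illustrative values:" + ] + },
+ { + "metadata": {}, + "cell_type": "code", + "outputs": [], + "execution_count": null, + "source": [ + "from langchain_core.messages import AIMessage, HumanMessage\n", + "from langgraph.graph.message import add_messages\n", + "\n", + "# Existing state value and an update as a node would return it\n", + "existing = [HumanMessage(content=\"What is Langfuse?\")]\n", + "update = [AIMessage(content=\"Langfuse is an open source LLM engineering platform.\")]\n", + "\n", + "# add_messages appends the update; messages with the same id would replace existing ones\n", + "merged = add_messages(existing, update)\n", + "print(len(merged))  # 2 -> the update was appended, not overwritten"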
+ ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": { + "id": "aGIxgPww6VX6" + }, + "outputs": [], + "source": [ + "from typing import Annotated\n", + "\n", + "from langchain_openai import ChatOpenAI\n", + "from langchain_core.messages import HumanMessage\n", + "from typing_extensions import TypedDict\n", + "\n", + "from langgraph.graph import StateGraph\n", + "from langgraph.graph.message import add_messages\n", + "\n", + "class State(TypedDict):\n", + " # Messages have the type \"list\". The `add_messages` function in the annotation defines how this state key should be updated\n", + " # (in this case, it appends messages to the list, rather than overwriting them)\n", + " messages: Annotated[list, add_messages]\n", + "\n", + "graph_builder = StateGraph(State)\n", + "\n", + "llm = ChatOpenAI(model = \"gpt-4o\", temperature = 0.2)\n", + "\n", + "# The chatbot node function takes the current State as input and returns an updated messages list. This is the basic pattern for all LangGraph node functions.\n", + "def chatbot(state: State):\n", + " return {\"messages\": [llm.invoke(state[\"messages\"])]}\n", + "\n", + "# Add a \"chatbot\" node. Nodes represent units of work. They are typically regular python functions.\n", + "graph_builder.add_node(\"chatbot\", chatbot)\n", + "\n", + "# Add an entry point. This tells our graph where to start its work each time we run it.\n", + "graph_builder.set_entry_point(\"chatbot\")\n", + "\n", + "# Set a finish point. This instructs the graph \"any time this node is run, you can exit.\"\n", + "graph_builder.set_finish_point(\"chatbot\")\n", + "\n", + "# To be able to run our graph, call \"compile()\" on the graph builder. This creates a \"CompiledGraph\" we can use invoke on our state.\n", + "graph = graph_builder.compile()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "IW2SJcRgh7Xo" + }, + "source": [ + "### Add Langfuse as callback to the invocation\n", + "\n", + "Now, we will add then [Langfuse callback handler for LangChain](https://langfuse.com/docs/integrations/langchain/tracing) to trace the steps of our application: `config={\"callbacks\": [langfuse_handler]}`" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" }, + "id": "8PxEc455-KYM", + "outputId": "0d1a6a04-a024-47b8-d320-72cd25b7aefd" + }, + "outputs": [ { - "cell_type": "markdown", - "metadata": { - "id": "6cIQVrYZJVMO" - }, - "source": [ - "## Manage prompts with Langfuse\n", - "\n", - "Use [Langfuse prompt management](https://langfuse.com/docs/prompts/example-langchain) to effectively manage and version your prompts. We add the prompt used in this example via the SDK. In production, however, users would update and manage the prompts via the Langfuse UI instead of using the SDK.\n", - "\n", - "Langfuse prompt management is basically a Prompt CMS (Content Management System). Alternatively, you can also edit and version the prompt in the Langfuse UI.\n", - "\n", - "* `Name` that identifies the prompt in Langfuse Prompt Management\n", - "* Prompt with prompt template incl. `{{input variables}}`\n", - "* `labels` to include `production` to immediately use prompt as the default\n", - "\n", - "In this example, we create a system prompt for an assistant that translates every user message into Spanish." 
- ] + "name": "stdout", + "output_type": "stream", + "text": [ + "{'chatbot': {'messages': [AIMessage(content='Langfuse is a tool designed to help developers monitor and observe the performance of their Large Language Model (LLM) applications. It provides detailed insights into how these applications are functioning, allowing for better debugging, optimization, and overall management. Langfuse offers features such as tracking key metrics, visualizing data, and identifying potential issues in real-time, making it easier for developers to maintain and improve their LLM-based solutions.', response_metadata={'token_usage': {'completion_tokens': 86, 'prompt_tokens': 13, 'total_tokens': 99}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_400f27fa1f', 'finish_reason': 'stop', 'logprobs': None}, id='run-9a0c97cb-ccfe-463e-902c-5a5900b796b4-0', usage_metadata={'input_tokens': 13, 'output_tokens': 86, 'total_tokens': 99})]}}\n" + ] + } + ], + "source": [ + "from langfuse.callback import CallbackHandler\n", + "\n", + "# Initialize Langfuse CallbackHandler for Langchain (tracing)\n", + "langfuse_handler = CallbackHandler()\n", + "\n", + "for s in graph.stream({\"messages\": [HumanMessage(content = \"What is Langfuse?\")]},\n", + " config={\"callbacks\": [langfuse_handler]}):\n", + " print(s)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Fdf3ZRnWGZ0N" + }, + "source": [ + "### View traces in Langfuse\n", + "\n", + "Example trace in Langfuse: https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/d109e148-d188-4d6e-823f-aac0864afbab\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "17Aq7u6_LBR6" + }, + "source": [ + "![Trace view of chat app in Langfuse](https://langfuse.com/images/cookbook/integration-langgraph/integration_langgraph_chatapp_trace.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "v3yyVtGKhMPU" + }, + "source": [ + "### Visualize the chat app\n", + "\n", + "You can visualize the graph using the `get_graph` method along with a \"draw\" method" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 236 }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "H0J8-nbhUUz6", - "outputId": "ee71e43d-9f77-451d-b71c-f77cd297b065" - }, - "outputs": [ - { - "data": { - "text/plain": [ - "" - ] - }, - "execution_count": 15, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "from langfuse import Langfuse\n", - "\n", - "# Initialize Langfuse client (prompt management)\n", - "langfuse = Langfuse()\n", - "\n", - "langfuse.create_prompt(\n", - " name=\"translator_system-prompt\",\n", - " prompt=\"You are a translator that translates every input text into Spanish.\",\n", - " labels=[\"production\"]\n", - ")" - ] + "id": "MKkM6mw47kIy", + "outputId": "9cf8a453-05e0-4193-fc77-81cb176d9ef4" + }, + "outputs": [], + "source": [ + "from IPython.display import Image, display\n", + "display(Image(graph.get_graph().draw_mermaid_png()))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "yY0HW5xISntw" + }, + "source": [ + "```mermaid\n", + "graph TD;\n", + "\t__start__([__start__]):::first\n", + "\tchatbot(chatbot)\n", + "\t__end__([__end__]):::last\n", + "\t__start__ --> chatbot;\n", + "\tchatbot --> __end__;\n", + "\tclassDef default fill:#f2f0ff,line-height:1.2\n", + "\tclassDef first fill-opacity:0\n", + 
"\tclassDef last fill:#bfb6fc\n", + "```" + ] + }, + { + "metadata": {}, + "cell_type": "markdown", + "source": [ + "### Use Langfuse with LangGraph Server\n", + "\n", + "You can add Langfuse as callback when using [LangGraph Server](https://langchain-ai.github.io/langgraph/concepts/langgraph_server/)" + ] + }, + { + "metadata": {}, + "cell_type": "markdown", + "source": "When using the LangGraph Server, the LangGraph Server handles graph invocation automatically. Therefore, you should add the Langfuse callback when declaring the graph." + }, + { + "metadata": {}, + "cell_type": "code", + "outputs": [], + "execution_count": null, + "source": [ + "from typing import Annotated\n", + "\n", + "from langchain_openai import ChatOpenAI\n", + "from typing_extensions import TypedDict\n", + "\n", + "from langgraph.graph import StateGraph\n", + "from langgraph.graph.message import add_messages\n", + "\n", + "from langfuse.callback import CallbackHandler\n", + "\n", + "class State(TypedDict):\n", + " messages: Annotated[list, add_messages]\n", + "\n", + "graph_builder = StateGraph(State)\n", + "\n", + "llm = ChatOpenAI(model = \"gpt-4o\", temperature = 0.2)\n", + "\n", + "def chatbot(state: State):\n", + " return {\"messages\": [llm.invoke(state[\"messages\"])]}\n", + "\n", + "graph_builder.add_node(\"chatbot\", chatbot)\n", + "graph_builder.set_entry_point(\"chatbot\")\n", + "graph_builder.set_finish_point(\"chatbot\")\n", + "\n", + "# Initialize Langfuse CallbackHandler for Langchain (tracing)\n", + "langfuse_handler = CallbackHandler()\n", + "\n", + "# Call \"with_config\" from the compiled graph.\n", + "# It returns a \"CompiledGraph\", similar to \"compile\", but with callbacks included.\n", + "# This enables automatic graph tracing without needing to add callbacks manually every time.\n", + "graph = graph_builder.compile().with_config({\"callbacks\": [langfuse_handler]})" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "n2W94eY19TR1" + }, + "source": [ + "## Example 2: Multi agent application with LangGraph\n", + "\n", + "**What we will do in this section**:\n", + "\n", + "* Build 2 executing agents: One research agent using the LangChain WikipediaAPIWrapper to search Wikipedia and one that uses a custom tool to get the current time.\n", + "* Build an agent supervisor to help delegate the user questions to one of the two agents\n", + "* Add Langfuse handler as callback to trace the steps of the supervisor and executing agents" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" }, - { - "cell_type": "markdown", - "metadata": { - "id": "Dullp4XDXhzg" - }, - "source": [ - "![View prompt in Langfuse UI](https://langfuse.com/images/cookbook/integration-langgraph/integration_langgraph_prompt_example.png)" - ] + "collapsed": true, + "id": "WfnrswDdjYTV", + "outputId": "0d938cb1-9fd2-4ed3-cfdd-c84a9ad3ed82" + }, + "outputs": [], + "source": [ + "%pip install langgraph langchain langchain_openai langchain_experimental pandas wikipedia" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "tciUQ62IEVec" + }, + "source": [ + "### Create tools\n", + "\n", + "For this example, you build an agent to do wikipedia research, and one agent to tell you the current time. 
Define the tools they will use below:" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": { + "id": "Cet0loyp9p-T" + }, + "outputs": [], + "source": [ + "from typing import Annotated\n", + "\n", + "from langchain_community.tools import WikipediaQueryRun\n", + "from langchain_community.utilities import WikipediaAPIWrapper\n", + "from datetime import datetime\n", + "from langchain.tools import Tool\n", + "\n", + "# Define a tools that searches Wikipedia\n", + "wikipedia_tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())\n", + "\n", + "# Define a new tool that returns the current datetime\n", + "datetime_tool = Tool(\n", + " name=\"Datetime\",\n", + " func = lambda x: datetime.now().isoformat(),\n", + " description=\"Returns the current datetime\",\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "31uhDy_mEqr6" + }, + "source": [ + "### Helper utilities\n", + "\n", + "Define a helper function below to simplify adding new agent worker nodes." + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": { + "id": "75atiExdqd4P" + }, + "outputs": [], + "source": [ + "from langchain.agents import AgentExecutor, create_openai_tools_agent\n", + "from langchain_core.messages import BaseMessage, HumanMessage\n", + "from langchain_openai import ChatOpenAI\n", + "\n", + "def create_agent(llm: ChatOpenAI, system_prompt: str, tools: list):\n", + " # Each worker node will be given a name and some tools.\n", + " prompt = ChatPromptTemplate.from_messages(\n", + " [\n", + " (\n", + " \"system\",\n", + " system_prompt,\n", + " ),\n", + " MessagesPlaceholder(variable_name=\"messages\"),\n", + " MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n", + " ]\n", + " )\n", + " agent = create_openai_tools_agent(llm, tools, prompt)\n", + " executor = AgentExecutor(agent=agent, tools=tools)\n", + " return executor\n", + "\n", + "def agent_node(state, agent, name):\n", + " result = agent.invoke(state)\n", + " return {\"messages\": [HumanMessage(content=result[\"output\"], name=name)]}" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "74bZqwU6FCOa" + }, + "source": [ + "### Create agent supervisor\n", + "\n", + "It will use function calling to choose the next worker node OR finish processing." + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": { + "id": "Hu8MzgihrHdF" + }, + "outputs": [], + "source": [ + "from langchain_core.output_parsers.openai_functions import JsonOutputFunctionsParser\n", + "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n", + "\n", + "members = [\"Researcher\", \"CurrentTime\"]\n", + "system_prompt = (\n", + " \"You are a supervisor tasked with managing a conversation between the\"\n", + " \" following workers: {members}. Given the following user request,\"\n", + " \" respond with the worker to act next. Each worker will perform a\"\n", + " \" task and respond with their results and status. When finished,\"\n", + " \" respond with FINISH.\"\n", + ")\n", + "# Our team supervisor is an LLM node. 
It just picks the next agent to process and decides when the work is completed\n", + "options = [\"FINISH\"] + members\n", + "\n", + "# Using openai function calling can make output parsing easier for us\n", + "function_def = {\n", + " \"name\": \"route\",\n", + " \"description\": \"Select the next role.\",\n", + " \"parameters\": {\n", + " \"title\": \"routeSchema\",\n", + " \"type\": \"object\",\n", + " \"properties\": {\n", + " \"next\": {\n", + " \"title\": \"Next\",\n", + " \"anyOf\": [\n", + " {\"enum\": options},\n", + " ],\n", + " }\n", + " },\n", + " \"required\": [\"next\"],\n", + " },\n", + "}\n", + "\n", + "# Create the prompt using ChatPromptTemplate\n", + "prompt = ChatPromptTemplate.from_messages(\n", + " [\n", + " (\"system\", system_prompt),\n", + " MessagesPlaceholder(variable_name=\"messages\"),\n", + " (\n", + " \"system\",\n", + " \"Given the conversation above, who should act next?\"\n", + " \" Or should we FINISH? Select one of: {options}\",\n", + " ),\n", + " ]\n", + ").partial(options=str(options), members=\", \".join(members))\n", + "\n", + "llm = ChatOpenAI(model=\"gpt-4o\")\n", + "\n", + "# Construction of the chain for the supervisor agent\n", + "supervisor_chain = (\n", + " prompt\n", + " | llm.bind_functions(functions=[function_def], function_call=\"route\")\n", + " | JsonOutputFunctionsParser()\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ognuMaIeFVh7" + }, + "source": [ + "### Construct graph\n", + "\n", + "Now we are ready to start building the graph. Below, define the state and worker nodes using the function we just defined. Then we connect all the edges in the graph." + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": { + "id": "_LwtCmw_rHVz" + }, + "outputs": [], + "source": [ + "import functools\n", + "import operator\n", + "from typing import Sequence, TypedDict\n", + "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n", + "from langgraph.graph import END, StateGraph, START\n", + "\n", + "# The agent state is the input to each node in the graph\n", + "class AgentState(TypedDict):\n", + " # The annotation tells the graph that new messages will always be added to the current states\n", + " messages: Annotated[Sequence[BaseMessage], operator.add]\n", + " # The 'next' field indicates where to route to next\n", + " next: str\n", + "\n", + "# Add the research agent using the create_agent helper function\n", + "research_agent = create_agent(llm, \"You are a web researcher.\", [wikipedia_tool])\n", + "research_node = functools.partial(agent_node, agent=research_agent, name=\"Researcher\")\n", + "\n", + "# Add the time agent using the create_agent helper function\n", + "currenttime_agent = create_agent(llm, \"You can tell the current time at\", [datetime_tool])\n", + "currenttime_node = functools.partial(agent_node, agent=currenttime_agent, name = \"CurrentTime\")\n", + "\n", + "workflow = StateGraph(AgentState)\n", + "\n", + "# Add a \"chatbot\" node. Nodes represent units of work. 
They are typically regular python functions.\n", + "workflow.add_node(\"Researcher\", research_node)\n", + "workflow.add_node(\"CurrentTime\", currenttime_node)\n", + "workflow.add_node(\"supervisor\", supervisor_chain)\n", + "\n", + "# We want our workers to ALWAYS \"report back\" to the supervisor when done\n", + "for member in members:\n", + " workflow.add_edge(member, \"supervisor\")\n", + "\n", + "# Conditional edges usually contain \"if\" statements to route to different nodes depending on the current graph state.\n", + "# These functions receive the current graph state and return a string or list of strings indicating which node(s) to call next.\n", + "conditional_map = {k: k for k in members}\n", + "conditional_map[\"FINISH\"] = END\n", + "workflow.add_conditional_edges(\"supervisor\", lambda x: x[\"next\"], conditional_map)\n", + "\n", + "# Add an entry point. This tells our graph where to start its work each time we run it.\n", + "workflow.add_edge(START, \"supervisor\")\n", + "\n", + "# To be able to run our graph, call \"compile()\" on the graph builder. This creates a \"CompiledGraph\" we can use invoke on our state.\n", + "graph_2 = workflow.compile()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "w3xfJLJyFwBG" + }, + "source": [ + "### Add Langfuse as callback to the invocation\n", + "\n", + "Add [Langfuse handler](https://langfuse.com/docs/integrations/langchain/tracing) as callback: `config={\"callbacks\": [langfuse_handler]}`" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" }, + "id": "QsX1gw9kryGP", + "outputId": "65d94f3c-17e7-4ad8-88b5-f837676d206b" + }, + "outputs": [ { - "cell_type": "markdown", - "metadata": { - "id": "pNboOjf2YQpD" - }, - "source": [ - "Use the utility method `.get_langchain_prompt()` to transform the Langfuse prompt into a string that can be used in Langchain.\n", - "\n", - "\n", - "**Context:** Langfuse declares input variables in prompt templates using double brackets (`{{input variable}}`). Langchain uses single brackets for declaring input variables in PromptTemplates (`{input variable}`). The utility method `.get_langchain_prompt()` replaces the double brackets with single brackets. In this example, however, we don't use any variables in our prompt." - ] + "name": "stdout", + "output_type": "stream", + "text": [ + "{'supervisor': {'next': 'Researcher'}}\n", + "----\n", + "{'Researcher': {'messages': [HumanMessage(content=\"Photosynthesis is a biological process by which photosynthetic organisms, such as most plants, algae, and cyanobacteria, convert light energy, usually from sunlight, into chemical energy. This energy is stored in the form of organic compounds like sugars, which fuel their metabolism.\\n\\n### Key Points of Photosynthesis:\\n\\n1. **Light Absorption**:\\n - The process begins when light energy is absorbed by reaction centers, which are proteins containing photosynthetic pigments (e.g., chlorophyll in plants).\\n\\n2. **Light-Dependent Reactions**:\\n - In these reactions, light energy is used to strip electrons from substances like water, producing oxygen gas.\\n - The hydrogen from water is used to create NADPH (reduced nicotinamide adenine dinucleotide phosphate) and ATP (adenosine triphosphate).\\n\\n3. 
**Light-Independent Reactions (Calvin Cycle)**:\\n - These reactions do not require light and occur in the stroma of chloroplasts.\\n - Carbon dioxide is incorporated into organic compounds like ribulose bisphosphate (RuBP).\\n - Using ATP and NADPH from the light-dependent reactions, these compounds are reduced to form carbohydrates such as glucose.\\n\\n### Types of Photosynthesis:\\n- **Oxygenic Photosynthesis**:\\n - Produces oxygen and is performed by plants, algae, and cyanobacteria.\\n- **Anoxygenic Photosynthesis**:\\n - Does not produce oxygen and is performed by some bacteria using substances like hydrogen sulfide instead of water.\\n\\n### Importance of Photosynthesis:\\n- It produces and maintains the oxygen content of the Earth's atmosphere.\\n- Supplies most of the biological energy necessary for complex life.\\n- Captures carbon dioxide from the atmosphere, playing a critical role in climate processes.\\n\\n### Evolution and Discovery:\\n- The first photosynthetic organisms used reducing agents other than water, such as hydrogen or hydrogen sulfide.\\n- Cyanobacteria, which evolved later, contributed to the oxygenation of the Earth.\\n- Photosynthesis was discovered in 1779 by Jan Ingenhousz, who demonstrated that plants need light to perform the process.\\n\\n### Global Impact:\\n- The average rate of energy captured by global photosynthesis is about 130 terawatts.\\n- Photosynthetic organisms convert around 100–115 billion tons of carbon into biomass each year.\\n\\nPhotosynthesis is crucial for life on Earth, providing the oxygen we breathe and the energy base for nearly all ecosystems.\", name='Researcher')]}}\n", + "----\n", + "{'supervisor': {'next': 'FINISH'}}\n", + "----\n" + ] + } + ], + "source": [ + "from langfuse.callback import CallbackHandler\n", + "\n", + "# Initialize Langfuse CallbackHandler for Langchain (tracing)\n", + "langfuse_handler = CallbackHandler()\n", + "\n", + "# Add Langfuse handler as callback: config={\"callbacks\": [langfuse_handler]}\n", + "for s in graph_2.stream({\"messages\": [HumanMessage(content = \"How does photosynthesis work?\")]},\n", + " config={\"callbacks\": [langfuse_handler]}):\n", + " print(s)\n", + " print(\"----\")" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" }, + "id": "AqJnMtP5HDql", + "outputId": "69c3d5d6-d44c-4784-a484-c66e4748b522" + }, + "outputs": [ { - "cell_type": "code", - "execution_count": 16, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "z49I82blYeXy", - "outputId": "6cf7cd23-6dde-4e7b-ae50-e369db37c2d0" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "You are a translator that translates every input text into Spanish. 
\n" - ] - } - ], - "source": [ - "# Get current production version of prompt and transform the Langfuse prompt into a string that can be used in Langchain\n", - "langfuse_system_prompt = langfuse.get_prompt(\"translator_system-prompt\")\n", - "langchain_system_prompt = langfuse_system_prompt.get_langchain_prompt()\n", - "\n", - "print(langchain_system_prompt)" - ] + "name": "stdout", + "output_type": "stream", + "text": [ + "{'supervisor': {'next': 'CurrentTime'}}\n", + "----\n", + "{'CurrentTime': {'messages': [HumanMessage(content='The current time is 9:34 AM on July 25, 2024.', name='CurrentTime')]}}\n", + "----\n", + "{'supervisor': {'next': 'FINISH'}}\n", + "----\n" + ] + } + ], + "source": [ + "# Add Langfuse handler as callback: config={\"callbacks\": [langfuse_handler]}\n", + "for s in graph_2.stream({\"messages\": [HumanMessage(content = \"What time is it?\")]},\n", + " config={\"callbacks\": [langfuse_handler]}):\n", + " print(s)\n", + " print(\"----\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "o4XjtNenH9GF" + }, + "source": [ + "### See traces in Langfuse\n", + "\n", + "Example traces in Langfuse:\n", + "\n", + "1. [How does photosynthesis work?](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/a8b0cc9e-da3b-485f-a642-35431a6f9289)\n", + "2. [What time is it?](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/ee5d5828-e983-4372-8e7f-04dfbe3e19d4)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "_-5EEBZAIbwc" + }, + "source": [ + "![Trace view of multi agent in Langfuse](https://langfuse.com/images/cookbook/integration-langgraph/integration_langgraph_multiagent_traces.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "hCEzabn_jhbf" + }, + "source": [ + "### Visualize the agent\n", + "\n", + "You can visualize the graph using the `get_graph` method along with a \"draw\" method" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 255 }, - { - "cell_type": "markdown", - "metadata": { - "id": "n3zBULfCt0Wq" - }, - "source": [ - "Now we can use the new system prompt string to update our assistant." - ] + "id": "notlPjnl-HXV", + "outputId": "17d6c6db-92af-4a6e-b1af-61b68e9cc87a" + }, + "outputs": [], + "source": [ + "from IPython.display import Image, display\n", + "display(Image(graph_2.get_graph().draw_mermaid_png()))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "mESkG2IJS8OY" + }, + "source": [ + "```mermaid\n", + "graph TD;\n", + "\t__start__([__start__]):::first\n", + "\tResearcher(Researcher)\n", + "\tCurrentTime(CurrentTime)\n", + "\tsupervisor(supervisor)\n", + "\t__end__([__end__]):::last\n", + "\tCurrentTime --> supervisor;\n", + "\tResearcher --> supervisor;\n", + "\t__start__ --> supervisor;\n", + "\tsupervisor -.-> Researcher;\n", + "\tsupervisor -.-> CurrentTime;\n", + "\tsupervisor -.  FINISH  .-> __end__;\n", + "\tclassDef default fill:#f2f0ff,line-height:1.2\n", + "\tclassDef first fill-opacity:0\n", + "\tclassDef last fill:#bfb6fc\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "uybP4h8wGvWw" + }, + "source": [ + "## Adding scores to traces as scores\n", + "\n", + "[Scores](https://langfuse.com/docs/scores/overview) are used to evaluate single observations or entire traces. 
You can create them via our annotation workflow in the Langfuse UI, run model-based evaluation or ingest via the SDK as we do it in this example.\n", + "\n", + "You can attach a score to the current observation context by calling `langfuse_context.score_current_observation`. You can also score the entire trace from anywhere inside the nesting hierarchy by calling `langfuse_context.score_current_trace`.\n", + "\n", + "To get the context of the current observation, we use the [`observe()` decorator](https://langfuse.com/docs/sdk/python/decorators) and apply it to the `main()` function. By default it captures:\n", + "\n", + "* nesting via context vars\n", + "* timings/durations\n", + "* function name\n", + "* args and kwargs as input dict\n", + "* returned values as output\n", + "\n", + "The decorator will automatically create a trace for the top-level function and spans for any nested functions.\n" + ] + }, + { + "cell_type": "code", + "execution_count": 28, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" }, + "id": "pgAqYnQuGwCL", + "outputId": "11e14766-b25b-44b4-c3d4-d980f3d111cc" + }, + "outputs": [ { - "cell_type": "code", - "execution_count": 17, - "metadata": { - "id": "oGQhulyMmvZD" - }, - "outputs": [], - "source": [ - "from typing import Annotated\n", - "from langchain_openai import ChatOpenAI\n", - "from typing_extensions import TypedDict\n", - "from langgraph.graph import StateGraph\n", - "from langgraph.graph.message import add_messages\n", - "\n", - "class State(TypedDict):\n", - " messages: Annotated[list, add_messages]\n", - "\n", - "graph_builder = StateGraph(State)\n", - "\n", - "llm = ChatOpenAI(model = \"gpt-4o\", temperature = 0.2)\n", - "\n", - "# Add the system prompt for our translator assistent\n", - "system_prompt = {\n", - " \"role\": \"system\",\n", - " \"content\": langchain_system_prompt\n", - "}\n", - "\n", - "def chatbot(state: State):\n", - " messages_with_system_prompt = [system_prompt] + state[\"messages\"]\n", - " response = llm.invoke(messages_with_system_prompt)\n", - " return {\"messages\": [response]}\n", - "\n", - "graph_builder.add_node(\"chatbot\", chatbot)\n", - "graph_builder.set_entry_point(\"chatbot\")\n", - "graph_builder.set_finish_point(\"chatbot\")\n", - "graph = graph_builder.compile()" + "data": { + "text/plain": [ + "{'messages': [HumanMessage(content='What time is it?'),\n", + " HumanMessage(content='The current date and time is 2024-07-25T09:54:57.', name='CurrentTime')],\n", + " 'next': 'FINISH'}" ] + }, + "execution_count": 28, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "from langfuse.decorators import langfuse_context, observe\n", + "\n", + "# Langfuse observe() decorator to automatically create a trace for the top-level function and spans for any nested functions.\n", + "@observe()\n", + "def research_agent(user_message):\n", + " # Get callback handler scoped to this observed function\n", + " lf_handler = langfuse_context.get_current_langchain_handler()\n", + "\n", + " # Trace langchain run via the Langfuse CallbackHandler\n", + " response = graph_2.invoke({\"messages\": [HumanMessage(content=user_message)]},\n", + " config={\"callbacks\": [lf_handler]})\n", + "\n", + " # Score the entire trace e.g. 
to add user feedback\n", + " langfuse_context.score_current_trace(\n", + " name = \"user-explicit-feedback\",\n", + " value = 1,\n", + " comment = \"The time is correct!\"\n", + " )\n", + "\n", + " return response\n", + "research_agent(\"What time is it?\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "cq_DeCcXSxwq" + }, + "source": [ + "### View trace with score in Langfuse\n", + "\n", + "Example trace: https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/23338c52-350d-4efb-89ca-82d759828b1d\n", + "\n", + "![Trace view including added score](https://langfuse.com/images/cookbook/integration-langgraph/integration_langgraph_score.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "6cIQVrYZJVMO" + }, + "source": [ + "## Manage prompts with Langfuse\n", + "\n", + "Use [Langfuse prompt management](https://langfuse.com/docs/prompts/example-langchain) to effectively manage and version your prompts. We add the prompt used in this example via the SDK. In production, however, users would update and manage the prompts via the Langfuse UI instead of using the SDK.\n", + "\n", + "Langfuse prompt management is basically a Prompt CMS (Content Management System). Alternatively, you can also edit and version the prompt in the Langfuse UI.\n", + "\n", + "* `Name` that identifies the prompt in Langfuse Prompt Management\n", + "* Prompt with prompt template incl. `{{input variables}}`\n", + "* `labels` to include `production` to immediately use prompt as the default\n", + "\n", + "In this example, we create a system prompt for an assistant that translates every user message into Spanish." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" }, + "id": "H0J8-nbhUUz6", + "outputId": "ee71e43d-9f77-451d-b71c-f77cd297b065" + }, + "outputs": [ { - "cell_type": "code", - "execution_count": 18, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "YYd7wbttm2ec", - "outputId": "fdc18797-3b3b-4cae-9ebb-8b9d946494c1" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "{'chatbot': {'messages': [AIMessage(content='¿Qué es Langfuse?', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 30, 'total_tokens': 36}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_400f27fa1f', 'finish_reason': 'stop', 'logprobs': None}, id='run-1f419fe3-73e2-4413-aa6c-96560bbd09c8-0', usage_metadata={'input_tokens': 30, 'output_tokens': 6, 'total_tokens': 36})]}}\n" - ] - } - ], - "source": [ - "from langfuse.callback import CallbackHandler\n", - "\n", - "# Initialize Langfuse CallbackHandler for Langchain (tracing)\n", - "langfuse_handler = CallbackHandler()\n", - "\n", - "# Add Langfuse handler as callback: config={\"callbacks\": [langfuse_handler]}\n", - "for s in graph.stream({\"messages\": [HumanMessage(content = \"What is Langfuse?\")]},\n", - " config={\"callbacks\": [langfuse_handler]}):\n", - " print(s)" + "data": { + "text/plain": [ + "" ] + }, + "execution_count": 15, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "from langfuse import Langfuse\n", + "\n", + "# Initialize Langfuse client (prompt management)\n", + "langfuse = Langfuse()\n", + "\n", + "langfuse.create_prompt(\n", + " name=\"translator_system-prompt\",\n", + " prompt=\"You are a translator that translates every input text into Spanish.\",\n", + " labels=[\"production\"]\n", + ")" + ] + }, + { + "cell_type": 
"markdown", + "metadata": { + "id": "Dullp4XDXhzg" + }, + "source": [ + "![View prompt in Langfuse UI](https://langfuse.com/images/cookbook/integration-langgraph/integration_langgraph_prompt_example.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "pNboOjf2YQpD" + }, + "source": [ + "Use the utility method `.get_langchain_prompt()` to transform the Langfuse prompt into a string that can be used in Langchain.\n", + "\n", + "\n", + "**Context:** Langfuse declares input variables in prompt templates using double brackets (`{{input variable}}`). Langchain uses single brackets for declaring input variables in PromptTemplates (`{input variable}`). The utility method `.get_langchain_prompt()` replaces the double brackets with single brackets. In this example, however, we don't use any variables in our prompt." + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" }, + "id": "z49I82blYeXy", + "outputId": "6cf7cd23-6dde-4e7b-ae50-e369db37c2d0" + }, + "outputs": [ { - "cell_type": "markdown", - "metadata": { - "id": "nFDUGZ8qNBfj" - }, - "source": [ - "## Feedback\n", - "\n", - "If you have any feedback or requests, please create a GitHub [Issue](https://langfuse.com/issue) or share your idea with the community on [Discord](https://langfuse.com/discord)." - ] + "name": "stdout", + "output_type": "stream", + "text": [ + "You are a translator that translates every input text into Spanish. \n" + ] } - ], - "metadata": { + ], + "source": [ + "# Get current production version of prompt and transform the Langfuse prompt into a string that can be used in Langchain\n", + "langfuse_system_prompt = langfuse.get_prompt(\"translator_system-prompt\")\n", + "langchain_system_prompt = langfuse_system_prompt.get_langchain_prompt()\n", + "\n", + "print(langchain_system_prompt)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "n3zBULfCt0Wq" + }, + "source": [ + "Now we can use the new system prompt string to update our assistant." 
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "metadata": {
+ "id": "oGQhulyMmvZD"
+ },
+ "outputs": [],
+ "source": [
+ "from typing import Annotated\n",
+ "from langchain_openai import ChatOpenAI\n",
+ "from typing_extensions import TypedDict\n",
+ "from langgraph.graph import StateGraph\n",
+ "from langgraph.graph.message import add_messages\n",
+ "\n",
+ "class State(TypedDict):\n",
+ " messages: Annotated[list, add_messages]\n",
+ "\n",
+ "graph_builder = StateGraph(State)\n",
+ "\n",
+ "llm = ChatOpenAI(model = \"gpt-4o\", temperature = 0.2)\n",
+ "\n",
+ "# Add the system prompt for our translator assistant\n",
+ "system_prompt = {\n",
+ " \"role\": \"system\",\n",
+ " \"content\": langchain_system_prompt\n",
+ "}\n",
+ "\n",
+ "def chatbot(state: State):\n",
+ " messages_with_system_prompt = [system_prompt] + state[\"messages\"]\n",
+ " response = llm.invoke(messages_with_system_prompt)\n",
+ " return {\"messages\": [response]}\n",
+ "\n",
+ "graph_builder.add_node(\"chatbot\", chatbot)\n",
+ "graph_builder.set_entry_point(\"chatbot\")\n",
+ "graph_builder.set_finish_point(\"chatbot\")\n",
+ "graph = graph_builder.compile()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "YYd7wbttm2ec",
+ "outputId": "fdc18797-3b3b-4cae-9ebb-8b9d946494c1"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{'chatbot': {'messages': [AIMessage(content='¿Qué es Langfuse?', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 30, 'total_tokens': 36}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_400f27fa1f', 'finish_reason': 'stop', 'logprobs': None}, id='run-1f419fe3-73e2-4413-aa6c-96560bbd09c8-0', usage_metadata={'input_tokens': 30, 'output_tokens': 6, 'total_tokens': 36})]}}\n"
+ ]
+ }
+ ],
+ "source": [
+ "from langfuse.callback import CallbackHandler\n",
+ "\n",
+ "# Initialize Langfuse CallbackHandler for Langchain (tracing)\n",
+ "langfuse_handler = CallbackHandler()\n",
+ "\n",
+ "# Add Langfuse handler as callback: config={\"callbacks\": [langfuse_handler]}\n",
+ "for s in graph.stream({\"messages\": [HumanMessage(content = \"What is Langfuse?\")]},\n",
+ " config={\"callbacks\": [langfuse_handler]}):\n",
+ " print(s)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "nFDUGZ8qNBfj"
+ },
+ "source": [
+ "## Feedback\n",
+ "\n",
+ "If you have any feedback or requests, please create a GitHub [Issue](https://langfuse.com/issue) or share your idea with the community on [Discord](https://langfuse.com/discord)."
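Related to the scoring section above: `langfuse_context.score_current_observation` works the same way as `score_current_trace`, but attaches the score to the current span instead of the whole trace. A minimal sketch follows (the function and score names here are illustrative, not part of the agent built in this notebook):

```python
from langfuse.decorators import langfuse_context, observe

@observe()
def lookup_time():
    # Scores only the span created for this nested function, not the whole trace
    langfuse_context.score_current_observation(
        name="lookup-accuracy",  # illustrative score name
        value=1,
        comment="Observation-level score, e.g. from an automated check"
    )
    return "2024-07-25T09:54:57"

@observe()
def timed_agent(user_message):
    # Trace-level scores could still be added here via langfuse_context.score_current_trace
    return lookup_time()

timed_agent("What time is it?")
```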
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "name": "python3"
},
+ "language_info": {
+ "name": "python"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
}
diff --git a/pages/docs/integrations/langchain/example-python-langgraph.md b/pages/docs/integrations/langchain/example-python-langgraph.md
index 1de1dbe5d..680f476ce 100644
--- a/pages/docs/integrations/langchain/example-python-langgraph.md
+++ b/pages/docs/integrations/langchain/example-python-langgraph.md
@@ -148,6 +148,47 @@ graph TD;
 classDef last fill:#bfb6fc
 ```
+### Use Langfuse with LangGraph Server
+
+You can add Langfuse as a callback when using [LangGraph Server](https://langchain-ai.github.io/langgraph/concepts/langgraph_server/).
+
+When using LangGraph Server, the server handles graph invocation automatically. Therefore, you should add the Langfuse callback when declaring the graph.
+
+
+```python
+from typing import Annotated
+
+from langchain_openai import ChatOpenAI
+from typing_extensions import TypedDict
+
+from langgraph.graph import StateGraph
+from langgraph.graph.message import add_messages
+
+from langfuse.callback import CallbackHandler
+
+class State(TypedDict):
+ messages: Annotated[list, add_messages]
+
+graph_builder = StateGraph(State)
+
+llm = ChatOpenAI(model = "gpt-4o", temperature = 0.2)
+
+def chatbot(state: State):
+ return {"messages": [llm.invoke(state["messages"])]}
+
+graph_builder.add_node("chatbot", chatbot)
+graph_builder.set_entry_point("chatbot")
+graph_builder.set_finish_point("chatbot")
+
+# Initialize Langfuse CallbackHandler for Langchain (tracing)
+langfuse_handler = CallbackHandler()
+
+# Call "with_config" on the compiled graph.
+# It returns a "CompiledGraph", similar to "compile", but with the callbacks included.
+# This enables automatic graph tracing without needing to add callbacks manually every time.
+graph = graph_builder.compile().with_config({"callbacks": [langfuse_handler]})
+```
+
 ## Example 2: Multi agent application with LangGraph
 
 **What we will do in this section**:
diff --git a/pages/guides/cookbook/integration_langgraph.md b/pages/guides/cookbook/integration_langgraph.md
index 1de1dbe5d..680f476ce 100644
--- a/pages/guides/cookbook/integration_langgraph.md
+++ b/pages/guides/cookbook/integration_langgraph.md
@@ -148,6 +148,47 @@ graph TD;
 classDef last fill:#bfb6fc
 ```
+### Use Langfuse with LangGraph Server
+
+You can add Langfuse as a callback when using [LangGraph Server](https://langchain-ai.github.io/langgraph/concepts/langgraph_server/).
+
+When using LangGraph Server, the server handles graph invocation automatically. Therefore, you should add the Langfuse callback when declaring the graph. 
+ + +```python +from typing import Annotated + +from langchain_openai import ChatOpenAI +from typing_extensions import TypedDict + +from langgraph.graph import StateGraph +from langgraph.graph.message import add_messages + +from langfuse.callback import CallbackHandler + +class State(TypedDict): + messages: Annotated[list, add_messages] + +graph_builder = StateGraph(State) + +llm = ChatOpenAI(model = "gpt-4o", temperature = 0.2) + +def chatbot(state: State): + return {"messages": [llm.invoke(state["messages"])]} + +graph_builder.add_node("chatbot", chatbot) +graph_builder.set_entry_point("chatbot") +graph_builder.set_finish_point("chatbot") + +# Initialize Langfuse CallbackHandler for Langchain (tracing) +langfuse_handler = CallbackHandler() + +# Call "with_config" from the compiled graph. +# It returns a "CompiledGraph", similar to "compile", but with callbacks included. +# This enables automatic graph tracing without needing to add callbacks manually every time. +graph = graph_builder.compile().with_config({"callbacks": [langfuse_handler]}) +``` + ## Example 2: Multi agent application with LangGraph **What we will do in this section**: