Commit
feat(smolagents): updates to latest and makes examples use opentelemetry-instrument (#1277)

Signed-off-by: Adrian Cole <[email protected]>
codefromthecrypt authored Feb 11, 2025
1 parent 2106acf commit b151bc9
Showing 14 changed files with 148 additions and 167 deletions.
3 changes: 3 additions & 0 deletions python/.gitignore
@@ -1,2 +1,5 @@
# vendored virtual environments
.venv

# gradio work directory
.gradio
@@ -14,42 +14,58 @@ pip install openinference-instrumentation-smolagents

## Quickstart

This quickstart shows you how to instrument your LLM agent application.

You've already installed openinference-instrumentation-smolagents. Next, install packages for smolagents,
Phoenix, and `opentelemetry-instrument`, which exports traces to Phoenix.

```shell
pip install smolagents arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp-proto-grpc opentelemetry-distro
```

Start Phoenix in the background as a collector; it listens on `http://localhost:6006` for the web app and on the default gRPC port 4317 for traces. You can visit the app in a browser at the same address.
Note that Phoenix does not send data over the internet. It only operates locally on your machine.

```shell
python -m phoenix.server.main serve
```

Create an example like this:

```python
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=HfApiModel())

agent.run("How many seconds would it take for a leopard at full speed to run through Pont des Arts?")
```

Then, run it like this:

```shell
opentelemetry-instrument python example.py
```

Finally, browse for your trace in Phoenix at `http://localhost:6006`!
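If you want to change where `opentelemetry-instrument` sends traces, or how the service shows up in Phoenix, you can use the standard OpenTelemetry environment variables. The values below are illustrative; the defaults already match Phoenix's gRPC port:

```shell
# Standard OpenTelemetry environment variables (illustrative values).
export OTEL_SERVICE_NAME=smolagents-quickstart
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
echo "exporting traces to $OTEL_EXPORTER_OTLP_ENDPOINT as $OTEL_SERVICE_NAME"
```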

## Manual instrumentation

`opentelemetry-instrument` is the [Zero-code instrumentation](https://opentelemetry.io/docs/zero-code/python) approach
for Python. It avoids explicitly importing and configuring OpenTelemetry code in your main source. Alternatively, you
can copy-paste the following into your main source and run it without `opentelemetry-instrument`.

```python
from opentelemetry.sdk.trace import TracerProvider

from openinference.instrumentation.smolagents import SmolagentsInstrumentor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

otlp_exporter = OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True)
trace_provider = TracerProvider()
trace_provider.add_span_processor(SimpleSpanProcessor(otlp_exporter))

SmolagentsInstrumentor().instrument(tracer_provider=trace_provider)

from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=HfApiModel())

agent.run("How many seconds would it take for a leopard at full speed to run through Pont des Arts?")
```
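One nuance worth knowing: `SimpleSpanProcessor` exports every span synchronously as it ends, while the zero-code path defaults to `BatchSpanProcessor`, which buffers spans and exports them on a schedule (see `OTEL_BSP_SCHEDULE_DELAY`). Here is a toy sketch of the difference — these are simplified stand-ins, not the real SDK classes:

```python
# Toy stand-ins illustrating span-processor behavior (simplified; not the
# real OpenTelemetry SDK classes).

class SimpleProcessor:
    """Exports each span immediately when it ends."""
    def __init__(self, export):
        self.export = export

    def on_end(self, span):
        self.export([span])


class BatchProcessor:
    """Buffers spans and exports them in batches."""
    def __init__(self, export, max_batch=3):
        self.export = export
        self.max_batch = max_batch
        self.buffer = []

    def on_end(self, span):
        self.buffer.append(span)
        if len(self.buffer) >= self.max_batch:
            self.flush()

    def flush(self):
        if self.buffer:
            self.export(self.buffer)
            self.buffer = []


batches = []
processor = BatchProcessor(batches.append)
for name in ["step-1", "step-2", "step-3", "step-4"]:
    processor.on_end(name)
processor.flush()  # export whatever is left
print(batches)  # [['step-1', 'step-2', 'step-3'], ['step-4']]
```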

## More Info
@@ -0,0 +1,49 @@
# OpenInference smolagents Examples

This directory contains numerous examples that show how to use OpenInference to instrument smolagents applications.
Specifically, this uses the [openinference-instrumentation-smolagents](..) package from source in the parent directory.

## Installation

```shell
pip install -r requirements.txt
```

Start Phoenix in the background as a collector, which listens on `http://localhost:6006` and default gRPC port 4317.
Note that Phoenix does not send data over the internet. It only operates locally on your machine.

```shell
python -m phoenix.server.main serve
```

## Running

Copy [env.example](env.example) to `.env` and update variables your example uses, such as `OPENAI_API_KEY`.

Then, run an example like this:

```shell
dotenv run -- opentelemetry-instrument python managed_agent.py
```

Finally, browse for your trace in Phoenix at `http://localhost:6006`!
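`dotenv run` simply loads `KEY=VALUE` pairs from `.env` into the environment before starting the command. A rough sketch of that behavior in plain Python (the real python-dotenv also handles quoting and other edge cases):

```python
import os

def load_env(text):
    """Parse KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

example = """
# API keys for the examples
OPENAI_API_KEY=sk-YOUR_API_KEY
OTEL_BSP_SCHEDULE_DELAY=1000
"""
env = load_env(example)
os.environ.update(env)  # what `dotenv run` does before exec'ing the command
print(sorted(env))  # ['OPENAI_API_KEY', 'OTEL_BSP_SCHEDULE_DELAY']
```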

## Manual instrumentation

`opentelemetry-instrument` is the [Zero-code instrumentation](https://opentelemetry.io/docs/zero-code/python) approach
for Python. It avoids explicitly importing and configuring OpenTelemetry code in your main source. Alternatively, you
can copy-paste the following into your main source and run it without `opentelemetry-instrument`.

```python
from opentelemetry.sdk.trace import TracerProvider

from openinference.instrumentation.smolagents import SmolagentsInstrumentor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

otlp_exporter = OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True)
trace_provider = TracerProvider()
trace_provider.add_span_processor(SimpleSpanProcessor(otlp_exporter))

SmolagentsInstrumentor().instrument(tracer_provider=trace_provider)
```
@@ -1,7 +1,3 @@
from io import BytesIO

import requests
from PIL import Image
from smolagents import CodeAgent, GradioUI, HfApiModel, Tool
from smolagents.default_tools import VisitWebpageTool

@@ -17,6 +13,11 @@ def __init__(self):
        self.url = "https://em-content.zobj.net/source/twitter/53/robot-face_1f916.png"

    def forward(self):
        from io import BytesIO

        import requests
        from PIL import Image

        response = requests.get(self.url)

        return Image.open(BytesIO(response.content))
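Moving the imports inside `forward` is a lazy-import pattern: heavy dependencies like PIL are only loaded if the tool actually runs. A stdlib-only sketch of the same idea, with `colorsys` standing in for the heavy dependency:

```python
def forward():
    # The import happens only when the tool is invoked, not at module load.
    import colorsys  # stand-in for a heavy dependency such as PIL
    return colorsys.rgb_to_hsv(1.0, 0.0, 0.0)

print(forward())  # (0.0, 1.0, 1.0)
```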
@@ -0,0 +1,12 @@
# API Key from https://platform.openai.com/api-keys
OPENAI_API_KEY=sk-YOUR_API_KEY
# API Key from https://e2b.dev/docs/legacy/getting-started/api-key
E2B_API_KEY=e2b_YOUR_API_KEY

# Phoenix listens on the default gRPC port 4317, so you don't need to change
# exporter settings. If you prefer to export via HTTP, uncomment this:
# OTEL_EXPORTER_OTLP_ENDPOINT=http://0.0.0.0:6006
# OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf

# Export traces every second instead of every 5 seconds
OTEL_BSP_SCHEDULE_DELAY=1000
@@ -1,38 +1,16 @@
import os

from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from smolagents import (
    CodeAgent,
    DuckDuckGoSearchTool,
    ManagedAgent,
    OpenAIServerModel,
    ToolCallingAgent,
)

from openinference.instrumentation.smolagents import SmolagentsInstrumentor

endpoint = "http://0.0.0.0:6006/v1/traces"
trace_provider = TracerProvider()
trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))

SmolagentsInstrumentor().instrument(tracer_provider=trace_provider, skip_dep_check=True)


model = OpenAIServerModel(
    model_id="gpt-4o",
    api_base="https://api.openai.com/v1",
    api_key=os.environ["OPENAI_API_KEY"],
)
model = OpenAIServerModel(model_id="gpt-4o")
agent = ToolCallingAgent(
    tools=[DuckDuckGoSearchTool()],
    model=model,
    max_steps=3,
)
managed_agent = ManagedAgent(
    agent=agent,
    name="managed_agent",
    name="search",
    description=(
        "This is an agent that can do web search. "
        "When solving a task, ask him directly first, he gives good answers. "
@@ -42,7 +20,7 @@
manager_agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],
    model=model,
    managed_agents=[managed_agent],
    managed_agents=[agent],
)
manager_agent.run(
    "How many seconds would it take for a leopard at full speed to run through Pont des Arts? "
@@ -1,22 +1,5 @@
import os

from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    SimpleSpanProcessor,
)
from smolagents import OpenAIServerModel

from openinference.instrumentation.smolagents import SmolagentsInstrumentor

endpoint = "http://0.0.0.0:6006/v1/traces"
trace_provider = TracerProvider()
trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))

SmolagentsInstrumentor().instrument(tracer_provider=trace_provider, skip_dep_check=True)

model = OpenAIServerModel(
    model_id="gpt-4o", api_key=os.environ["OPENAI_API_KEY"], api_base="https://api.openai.com/v1"
)
model = OpenAIServerModel(model_id="gpt-4o")
output = model(messages=[{"role": "user", "content": "hello world"}])
print(output)
print(output.content)
@@ -1,21 +1,6 @@
import os

from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    SimpleSpanProcessor,
)
from smolagents import OpenAIServerModel
from smolagents.tools import Tool

from openinference.instrumentation.smolagents import SmolagentsInstrumentor

endpoint = "http://0.0.0.0:6006/v1/traces"
trace_provider = TracerProvider()
trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))

SmolagentsInstrumentor().instrument(tracer_provider=trace_provider, skip_dep_check=True)


class GetWeatherTool(Tool):
    name = "get_weather"
@@ -27,10 +12,8 @@ def forward(self, location: str) -> str:
        return "sunny"


model = OpenAIServerModel(
    model_id="gpt-4o", api_key=os.environ["OPENAI_API_KEY"], api_base="https://api.openai.com/v1"
)
output_message = model(
model = OpenAIServerModel(model_id="gpt-4o")
output = model(
    messages=[
        {
            "role": "user",
@@ -39,4 +22,4 @@ def forward(self, location: str) -> str:
    ],
    tools_to_call_from=[GetWeatherTool()],
)
print(output_message)
print(output.tool_calls[0].function)
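The `output.tool_calls[0].function` printed above carries a tool name and arguments; the runtime's job is to route that payload to the matching tool's `forward` method. A toy dispatcher sketch — the dict shape here is illustrative, not the exact smolagents object:

```python
class GetWeatherTool:
    """Mirrors the tool above, minus the smolagents base class."""
    name = "get_weather"

    def forward(self, location: str) -> str:
        return "sunny"

# Index tools by name, as a tool-calling runtime would.
tools = {tool.name: tool for tool in [GetWeatherTool()]}

def dispatch(function_call):
    # function_call mimics a tool call's name/arguments payload.
    tool = tools[function_call["name"]]
    return tool.forward(**function_call["arguments"])

print(dispatch({"name": "get_weather", "arguments": {"location": "Paris, France"}}))  # sunny
```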
@@ -1,24 +1,9 @@
import os

import datasets
from langchain.docstore.document import Document
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.retrievers import BM25Retriever
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    SimpleSpanProcessor,
)
from smolagents import CodeAgent, OpenAIServerModel, Tool

from openinference.instrumentation.smolagents import SmolagentsInstrumentor

endpoint = "http://0.0.0.0:6006/v1/traces"
trace_provider = TracerProvider()
trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))

SmolagentsInstrumentor().instrument(tracer_provider=trace_provider)

knowledge_base = datasets.load_dataset("m-ric/huggingface_doc", split="train")
knowledge_base = knowledge_base.filter(
    lambda row: row["source"].startswith("huggingface/transformers")
@@ -78,13 +63,9 @@ def forward(self, query: str) -> str:
retriever_tool = RetrieverTool(docs_processed)
agent = CodeAgent(
    tools=[retriever_tool],
    model=OpenAIServerModel(
        "gpt-4o",
        api_base="https://api.openai.com/v1",
        api_key=os.environ["OPENAI_API_KEY"],
    ),
    model=OpenAIServerModel("gpt-4o"),
    max_steps=4,
    verbose=True,
    verbosity_level=2,
)

agent_output = agent.run(
@@ -1,9 +1,19 @@
# Main dependencies of the examples in this directory:
smolagents[e2b,gradio,litellm,openai]
datasets
langchain
langchain-community
opentelemetry-exporter-otlp
opentelemetry-exporter-otlp-proto-http
opentelemetry-sdk
langchain_community
rank_bm25
requests
sqlalchemy

# Install `opentelemetry-instrument` for zero code instrumentation. This
# defaults to export traces to localhost 4317, which Phoenix listens on.
opentelemetry-sdk
opentelemetry-exporter-otlp-proto-grpc
opentelemetry-distro
# Source of openinference-instrumentation-smolagents
-e ../


# Both `opentelemetry-instrument` and main need .env variables. Run like this:
# dotenv run -- opentelemetry-instrument python e2b_example.py
python-dotenv[cli]
@@ -1,10 +1,3 @@
import os

from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    SimpleSpanProcessor,
)
from smolagents import (
    CodeAgent,
    OpenAIServerModel,
@@ -23,14 +16,6 @@
    text,
)

from openinference.instrumentation.smolagents import SmolagentsInstrumentor

endpoint = "http://0.0.0.0:6006/v1/traces"
trace_provider = TracerProvider()
trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))

SmolagentsInstrumentor().instrument(tracer_provider=trace_provider)

engine = create_engine("sqlite:///:memory:")
metadata_obj = MetaData()

@@ -90,10 +75,6 @@ def sql_engine(query: str) -> str:

agent = CodeAgent(
    tools=[sql_engine],
    model=OpenAIServerModel(
        "gpt-4o-mini",
        api_base="https://api.openai.com/v1",
        api_key=os.environ["OPENAI_API_KEY"],
    ),
    model=OpenAIServerModel("gpt-4o-mini"),
)
agent.run("Can you give me the name of the client who got the most expensive receipt?")
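The `sql_engine` tool in this example wraps a SQLAlchemy in-memory database; the same shape can be sketched with the stdlib `sqlite3` module. The table name and rows below are made up for illustration:

```python
import sqlite3

# Illustrative in-memory table (names and values are made up).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE receipts (client TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO receipts VALUES (?, ?)",
    [("Alice", 12.00), ("Bob", 40.50), ("Cara", 7.25)],
)

def sql_engine(query: str) -> str:
    """Run a query and return the rows as text, like the tool above."""
    rows = conn.execute(query).fetchall()
    return "\n".join(str(row) for row in rows)

print(sql_engine("SELECT client FROM receipts ORDER BY amount DESC LIMIT 1"))
# ('Bob',)
```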
