
Fix issue #606: Fixed wrong parameter name. toolcalling_agent.yaml st… #618

Closed
wants to merge 34 commits into from
Changes from 3 commits
34 commits
3521d90
Fix issue #606: Fixed wrong parameter name. toolcalling_agent.yaml st…
Feb 12, 2025
06e23f1
Make question arg required in Open DeepResearch example (#617)
albertvillanova Feb 12, 2025
2ea374c
Remove unused api-base arg from Open DeepResearch example (#616)
albertvillanova Feb 12, 2025
bca3a9b
corrected typo in README.md (#609)
blakkd Feb 12, 2025
9b96199
MLX model support (#300)
g-eoj Feb 12, 2025
5fd0a2e
Add support for non-bool comparison operators. (#612)
kc9zyz Feb 12, 2025
e27e83f
docs: add Langfuse OpenTelemetry guide (#601)
jannikmaierhoefer Feb 12, 2025
257a313
switch back to task keyword
Feb 12, 2025
785d310
modify argument name from 'request' to 'task' to match agent code
Feb 12, 2025
5dbbe71
Remove --prompt argument of instance instruction in README.md (#632)
leeivan1007 Feb 13, 2025
c79a354
Fix CI quality check by removing trailing whitespace (#628)
albertvillanova Feb 13, 2025
2797f2f
Change math.pow -> pow (#624)
Bilokin Feb 13, 2025
1516ce8
Move plan user prompt to YAML and test text of plan prompts (#591)
albertvillanova Feb 13, 2025
b20da6a
Fix installation instructions in Open-DeepResearch README (#633)
albertvillanova Feb 13, 2025
3316dd7
Remove ManagedAgent from doc (#563)
aymeric-roucher Feb 13, 2025
41a388d
Refactor operations count state setting (#631)
albertvillanova Feb 13, 2025
392fc5a
LiteLLMModel - detect message flattening based on model information (…
sysradium Feb 13, 2025
a427c84
Contribute to the documentation (#630)
seanxuu Feb 13, 2025
d946f31
fixed typo between taks and task
Feb 13, 2025
f3ee605
Adding default parameter for max_new_tokens in TransformersModel (#604)
matfrei Feb 13, 2025
cfe599c
Test evaluate_condition (#634)
albertvillanova Feb 13, 2025
b9e9438
Allow Gradio share parameter passthrough (#490)
sysradium Feb 13, 2025
8e27683
Update README.md
aymeric-roucher Feb 13, 2025
360e1a8
Fix issue #635. Corrected installation instructions for open_deep_res…
nishaddeokar Feb 13, 2025
1c1418d
Share full agents (#533)
aymeric-roucher Feb 13, 2025
a094675
Fix issue #606: Fixed wrong parameter name. toolcalling_agent.yaml st…
Feb 12, 2025
1e59c4f
switch back to task keyword
Feb 12, 2025
472fc35
modify argument name from 'request' to 'task' to match agent code
Feb 12, 2025
8348367
fixed typo between taks and task
Feb 13, 2025
6f5a620
Merge branch 'bugfix/issue-606-fix' of https://github.com/faev999/smo…
Feb 14, 2025
78991f7
Fix issue #606: Fixed wrong parameter name. toolcalling_agent.yaml st…
Feb 12, 2025
d1c4aae
switch back to task keyword
Feb 12, 2025
b3a4d80
modify argument name from 'request' to 'task' to match agent code
Feb 12, 2025
bc9ea95
Merge branch 'bugfix/issue-606-fix' of https://github.com/faev999/smo…
Feb 14, 2025
108 changes: 84 additions & 24 deletions src/smolagents/agents.py
@@ -34,7 +34,14 @@
from rich.text import Text

from smolagents.agent_types import AgentAudio, AgentImage, handle_agent_output_types
from smolagents.memory import ActionStep, AgentMemory, PlanningStep, SystemPromptStep, TaskStep, ToolCall
from smolagents.memory import (
ActionStep,
AgentMemory,
PlanningStep,
SystemPromptStep,
TaskStep,
ToolCall,
)
from smolagents.monitoring import (
YELLOW_HEX,
AgentLogger,
@@ -231,7 +238,10 @@ def __init__(
self.tools = {tool.name: tool for tool in tools}
if add_base_tools:
for tool_name, tool_class in TOOL_MAPPING.items():
if tool_name != "python_interpreter" or self.__class__.__name__ == "ToolCallingAgent":
if (
tool_name != "python_interpreter"
or self.__class__.__name__ == "ToolCallingAgent"
):
self.tools[tool_name] = tool_class()
self.tools["final_answer"] = FinalAnswerTool()

@@ -327,7 +337,8 @@ def provide_final_answer(self, task: str, images: Optional[list[str]]) -> str:
{
"type": "text",
"text": populate_template(
self.prompt_templates["final_answer"]["post_messages"], variables={"task": task}
self.prompt_templates["final_answer"]["post_messages"],
variables={"task": task},
),
}
],
@@ -358,17 +369,23 @@ def execute_tool_call(self, tool_name: str, arguments: Union[Dict[str, str], str
if tool_name in self.managed_agents:
observation = available_tools[tool_name].__call__(arguments)
else:
observation = available_tools[tool_name].__call__(arguments, sanitize_inputs_outputs=True)
observation = available_tools[tool_name].__call__(
arguments, sanitize_inputs_outputs=True
)
elif isinstance(arguments, dict):
for key, value in arguments.items():
if isinstance(value, str) and value in self.state:
arguments[key] = self.state[value]
if tool_name in self.managed_agents:
observation = available_tools[tool_name].__call__(**arguments)
else:
observation = available_tools[tool_name].__call__(**arguments, sanitize_inputs_outputs=True)
observation = available_tools[tool_name].__call__(
**arguments, sanitize_inputs_outputs=True
)
else:
error_msg = f"Arguments passed to tool should be a dict or string: got a {type(arguments)}."
error_msg = (
f"Arguments passed to tool should be a dict or string: got a {type(arguments)}."
)
raise AgentExecutionError(error_msg, self.logger)
return observation
except Exception as e:
@@ -444,7 +461,9 @@ def run(
# Outputs are returned only at the end as a string. We only look at the last step
return deque(self._run(task=self.task, images=images), maxlen=1)[0]

def _run(self, task: str, images: List[str] | None = None) -> Generator[ActionStep | AgentType, None, None]:
def _run(
self, task: str, images: List[str] | None = None
) -> Generator[ActionStep | AgentType, None, None]:
"""
Run the agent in streaming mode and returns a generator of all the steps.

@@ -462,7 +481,10 @@ def _run(self, task: str, images: List[str] | None = None) -> Generator[ActionSt
observations_images=images,
)
try:
if self.planning_interval is not None and self.step_number % self.planning_interval == 1:
if (
self.planning_interval is not None
and self.step_number % self.planning_interval == 1
):
self.planning_step(
task,
is_first_step=(self.step_number == 1),
@@ -478,7 +500,10 @@ def _run(self, task: str, images: List[str] | None = None) -> Generator[ActionSt
assert check_function(final_answer, self.memory)
except Exception as e:
final_answer = None
raise AgentError(f"Check {check_function.__name__} failed with error: {e}", self.logger)
raise AgentError(
f"Check {check_function.__name__} failed with error: {e}",
self.logger,
)
except AgentError as e:
memory_step.error = e
finally:
@@ -526,7 +551,9 @@ def planning_step(self, task, is_first_step: bool, step: int) -> None:
if is_first_step:
message_prompt_facts = {
"role": MessageRole.SYSTEM,
"content": [{"type": "text", "text": self.prompt_templates["planning"]["initial_facts"]}],
"content": [
{"type": "text", "text": self.prompt_templates["planning"]["initial_facts"]}
],
}
message_prompt_task = {
"role": MessageRole.USER,
Expand Down Expand Up @@ -605,13 +632,25 @@ def planning_step(self, task, is_first_step: bool, step: int) -> None:
# Redact updated facts
facts_update_pre_messages = {
"role": MessageRole.SYSTEM,
"content": [{"type": "text", "text": self.prompt_templates["planning"]["update_facts_pre_messages"]}],
"content": [
{
"type": "text",
"text": self.prompt_templates["planning"]["update_facts_pre_messages"],
}
],
}
facts_update_post_messages = {
"role": MessageRole.USER,
"content": [{"type": "text", "text": self.prompt_templates["planning"]["update_facts_post_messages"]}],
"content": [
{
"type": "text",
"text": self.prompt_templates["planning"]["update_facts_post_messages"],
}
],
}
input_messages = [facts_update_pre_messages] + memory_messages + [facts_update_post_messages]
input_messages = (
[facts_update_pre_messages] + memory_messages + [facts_update_post_messages]
)
chat_message_facts: ChatMessage = self.model(input_messages)
facts_update = chat_message_facts.content

@@ -622,7 +661,8 @@ def planning_step(self, task, is_first_step: bool, step: int) -> None:
{
"type": "text",
"text": populate_template(
self.prompt_templates["planning"]["update_plan_pre_messages"], variables={"task": task}
self.prompt_templates["planning"]["update_plan_pre_messages"],
variables={"task": task},
),
}
],
@@ -704,7 +744,8 @@ def __call__(self, task: str, **kwargs):
)
report = self.run(full_task, **kwargs)
answer = populate_template(
self.prompt_templates["managed_agent"]["report"], variables=dict(name=self.name, final_answer=report)
self.prompt_templates["managed_agent"]["report"],
variables=dict(name=self.name, final_answer=report),
)
if self.provide_run_summary:
answer += "\n\nFor more detail, find below a summary of this agent's work:\n<summary_of_work>\n"
@@ -736,7 +777,9 @@ def __init__(
**kwargs,
):
prompt_templates = prompt_templates or yaml.safe_load(
importlib.resources.files("smolagents.prompts").joinpath("toolcalling_agent.yaml").read_text()
importlib.resources.files("smolagents.prompts")
.joinpath("toolcalling_agent.yaml")
.read_text()
)
super().__init__(
tools=tools,
@@ -773,15 +816,21 @@ def step(self, memory_step: ActionStep) -> Union[None, Any]:
)
memory_step.model_output_message = model_message
if model_message.tool_calls is None or len(model_message.tool_calls) == 0:
raise Exception("Model did not call any tools. Call `final_answer` tool to return a final answer.")
raise Exception(
"Model did not call any tools. Call `final_answer` tool to return a final answer."
)
tool_call = model_message.tool_calls[0]
tool_name, tool_call_id = tool_call.function.name, tool_call.id
tool_arguments = tool_call.function.arguments

except Exception as e:
raise AgentGenerationError(f"Error in generating tool call with model:\n{e}", self.logger) from e
raise AgentGenerationError(
f"Error in generating tool call with model:\n{e}", self.logger
) from e

memory_step.tool_calls = [ToolCall(name=tool_name, arguments=tool_arguments, id=tool_call_id)]
memory_step.tool_calls = [
ToolCall(name=tool_name, arguments=tool_arguments, id=tool_call_id)
]

# Execute
self.logger.log(
@@ -866,8 +915,12 @@ def __init__(
max_print_outputs_length: Optional[int] = None,
**kwargs,
):
self.additional_authorized_imports = additional_authorized_imports if additional_authorized_imports else []
self.authorized_imports = list(set(BASE_BUILTIN_MODULES) | set(self.additional_authorized_imports))
self.additional_authorized_imports = (
additional_authorized_imports if additional_authorized_imports else []
)
self.authorized_imports = list(
set(BASE_BUILTIN_MODULES) | set(self.additional_authorized_imports)
)
prompt_templates = prompt_templates or yaml.safe_load(
importlib.resources.files("smolagents.prompts").joinpath("code_agent.yaml").read_text()
)
@@ -941,7 +994,9 @@ def step(self, memory_step: ActionStep) -> Union[None, Any]:
model_output = chat_message.content
memory_step.model_output = model_output
except Exception as e:
raise AgentGenerationError(f"Error in generating model output:\n{e}", self.logger) from e
raise AgentGenerationError(
f"Error in generating model output:\n{e}", self.logger
) from e

self.logger.log_markdown(
content=model_output,
@@ -965,7 +1020,9 @@ def step(self, memory_step: ActionStep) -> Union[None, Any]:
]

# Execute
self.logger.log_code(title="Executing parsed code:", content=code_action, level=LogLevel.INFO)
self.logger.log_code(
title="Executing parsed code:", content=code_action, level=LogLevel.INFO
)
is_final_answer = False
try:
output, execution_logs, is_final_answer = self.python_executor(
@@ -980,7 +1037,10 @@ def step(self, memory_step: ActionStep) -> Union[None, Any]:
]
observation = "Execution logs:\n" + execution_logs
except Exception as e:
if hasattr(self.python_executor, "state") and "_print_outputs" in self.python_executor.state:
if (
hasattr(self.python_executor, "state")
and "_print_outputs" in self.python_executor.state
):
execution_logs = str(self.python_executor.state["_print_outputs"])
if len(execution_logs) > 0:
execution_outputs_console = [
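The agents.py hunks above are line-length reformatting with no behavior change. For orientation, the argument dispatch in `execute_tool_call` that several hunks touch reduces to roughly this shape (a simplified sketch of the logic visible in the diff; the managed-agent branch, `sanitize_inputs_outputs`, and error wrapping are omitted):

```python
# Simplified sketch of the dispatch in execute_tool_call: string arguments
# are passed through as-is, dict arguments have values that name entries in
# the agent's state store substituted, and anything else is rejected.
def execute_tool_call(tool, arguments, state):
    if isinstance(arguments, str):
        return tool(arguments)
    elif isinstance(arguments, dict):
        # Replace string values that are keys in the state store
        # (e.g. "image_1" -> the actual stored image object).
        resolved = {
            key: state.get(value, value) if isinstance(value, str) else value
            for key, value in arguments.items()
        }
        return tool(**resolved)
    raise TypeError(
        f"Arguments passed to tool should be a dict or string: got a {type(arguments)}."
    )
```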
16 changes: 8 additions & 8 deletions src/smolagents/prompts/toolcalling_agent.yaml
@@ -97,9 +97,9 @@ system_prompt: |-
{%- endfor %}

{%- if managed_agents and managed_agents.values() | list %}
You can also give requests to team members.
Calling a team member works the same as for calling a tool: simply, the only argument you can give in the call is 'request', a long string explaining your request.
Given that this team member is a real human, you should be very verbose in your request.
You can also give tasks to team members.
Calling a team member works the same as for calling a tool: simply, the only argument you can give in the call is 'task', a long string explaining your task.
Given that this team member is a real human, you should be very verbose in your task.
Here is a list of the team members that you can call:
{%- for agent in managed_agents.values() %}
- {{ agent.name }}: {{ agent.description }}
@@ -161,9 +161,9 @@ planning:
{%- endfor %}

{%- if managed_agents and managed_agents.values() | list %}
You can also give requests to team members.
Calling a team member works the same as for calling a tool: simply, the only argument you can give in the call is 'request', a long string explaining your request.
Given that this team member is a real human, you should be very verbose in your request.
You can also give tasks to team members.
Calling a team member works the same as for calling a tool: simply, the only argument you can give in the call is 'task', a long string explaining your task.
Given that this team member is a real human, you should be very verbose in your task.
Here is a list of the team members that you can call:
{%- for agent in managed_agents.values() %}
- {{ agent.name }}: {{ agent.description }}
@@ -220,7 +220,7 @@ planning:
{%- endfor %}

{%- if managed_agents and managed_agents.values() | list %}
You can also give requests to team members.
You can also give tasks to team members.
Calling a team member works the same as for calling a tool: simply, the only argument you can give in the call is 'task'.
Given that this team member is a real human, you should be very verbose in your task, it should be a long string providing informations as detailed as necessary.
Here is a list of the team members that you can call:
@@ -266,5 +266,5 @@ final_answer:
pre_messages: |-
An agent tried to answer a user query but it got stuck and failed to do so. You are tasked with providing an answer instead. Here is the agent's memory:
post_messages: |-
Based on the above, please provide an answer to the following user request:
Based on the above, please provide an answer to the following user task:
{{task}}
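The `{{task}}` placeholder in the templates above is filled by smolagents' `populate_template`. A stdlib-only stand-in (a hypothetical helper, not the real Jinja-based implementation) shows the substitution and why the template variable must be named `task`:

```python
# Minimal stand-in for template population: substitutes {{name}} placeholders.
def populate(template: str, **variables) -> str:
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", str(value))
    return template

post_messages = (
    "Based on the above, please provide an answer to the following user task:\n"
    "{{task}}"
)
print(populate(post_messages, task="Summarize issue #606"))
```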