portia.execution_agents.default_execution_agent
The default execution agent for the hardest problems.
This agent uses multiple models (parser, verifier, etc.) to achieve the highest accuracy in completing tasks.
ToolArgument Objects
class ToolArgument(BaseModel)
Represents an argument for a tool as extracted from the goal and context.
Attributes:
name
str - The name of the argument, as requested by the tool.
value
Any | None - The value of the argument, as provided in the goal or context.
valid
bool - Whether the value is a valid type and/or format for the given argument.
explanation
str - Explanation of the source for the value of the argument.
ToolInputs Objects
class ToolInputs(BaseModel)
Represents the inputs for a tool.
Attributes:
args
list[ToolArgument] - Arguments for the tool.
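For illustration, a minimal sketch of building these models by hand (in the agent they are produced by the ParserModel); the argument names and values below are invented examples:
```python
from portia.execution_agents.default_execution_agent import ToolArgument, ToolInputs

# Hand-built example of the structure the parser extracts from goal + context.
recipient = ToolArgument(
    name="recipient_email",
    value="alice@example.com",
    valid=True,
    explanation="Taken verbatim from the user's goal.",
)
subject = ToolArgument(
    name="subject",
    value=None,
    valid=False,
    explanation="No subject was provided in the goal or context.",
)

inputs = ToolInputs(args=[recipient, subject])
print(inputs.model_dump_json(indent=2))  # pydantic serialization of the parsed inputs
```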
VerifiedToolArgument Objects
class VerifiedToolArgument(BaseModel)
Represents an argument for a tool after being verified by an agent.
Attributes:
name
str - The name of the argument, as requested by the tool.
value
Any | None - The value of the argument, as provided in the goal or context.
made_up
bool - Whether the value was made up or not. Should be false if the value was provided by the user.
VerifiedToolInputs Objects
class VerifiedToolInputs(BaseModel)
Represents the inputs for a tool after being verified by an agent.
Attributes:
args
list[VerifiedToolArgument] - Arguments for the tool.
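A minimal sketch of the verified form, again built by hand for illustration; in the agent it is produced by the VerifierModel, and any argument marked made_up triggers a clarification:
```python
from portia.execution_agents.default_execution_agent import (
    VerifiedToolArgument,
    VerifiedToolInputs,
)

verified = VerifiedToolInputs(
    args=[
        VerifiedToolArgument(name="recipient_email", value="alice@example.com", made_up=False),
        # made_up=True marks a value the LLM invented rather than found in the
        # goal or context.
        VerifiedToolArgument(name="subject", value="Quick question", made_up=True),
    ]
)

made_up_args = [a.name for a in verified.args if a.made_up]
print(made_up_args)  # ['subject']
```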
ParserModel Objects
class ParserModel()
Model to parse the arguments for a tool.
Arguments:
llm
BaseChatModel - The language model used for argument parsing.
context
str - The context for argument generation.
agent
DefaultExecutionAgent - The agent using the parser model.
Attributes:
arg_parser_prompt
ChatPromptTemplate - The prompt template for argument parsing.
llm
BaseChatModel - The language model used.
context
str - The context for argument generation.
agent
DefaultExecutionAgent - The agent using the parser model.
previous_errors
list[str] - A list of previous errors encountered during parsing.
retries
int - The number of retries attempted for parsing.
__init__
def __init__(llm: BaseChatModel, context: str,
agent: DefaultExecutionAgent) -> None
Initialize the model.
Arguments:
llm
BaseChatModel - The language model used for argument parsing.
context
str - The context for argument generation.
agent
DefaultExecutionAgent - The agent using the parser model.
invoke
def invoke(state: MessagesState) -> dict[str, Any]
Invoke the model with the given message state.
Arguments:
state
MessagesState - The current state of the conversation.
Returns:
dict[str, Any]: The response after invoking the model.
Raises:
InvalidRunStateError
- If the agent's tool is not available.
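A hedged sketch of driving the parser directly; inside the agent it runs as a graph node, so the empty starting message history and the shape of the returned state update are assumptions here:
```python
from typing import Any

from langchain_core.language_models import BaseChatModel

from portia.execution_agents.default_execution_agent import DefaultExecutionAgent, ParserModel


def parse_step_args(llm: BaseChatModel, context: str, agent: DefaultExecutionAgent) -> dict[str, Any]:
    """Run the parser once and return its MessagesState update."""
    parser = ParserModel(llm=llm, context=context, agent=agent)
    # MessagesState is a dict with a "messages" key (langgraph convention).
    return parser.invoke({"messages": []})
```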
VerifierModel Objects
class VerifierModel()
A model to verify the arguments for a tool.
This model ensures that the arguments passed to a tool are valid, determining whether they are "made up" or not based on the context and specific rules. The verification process uses an LLM to analyze the context and tool arguments and returns a structured validation output.
Attributes:
arg_verifier_prompt
ChatPromptTemplate - The prompt template used for arg verification.
llm
BaseChatModel - The language model used to invoke the verification process.
context
str - The context in which the tool arguments are being validated.
agent
DefaultExecutionAgent - The agent responsible for handling the verification process.
__init__
def __init__(llm: BaseChatModel, context: str,
agent: DefaultExecutionAgent) -> None
Initialize the model.
Arguments:
llm
BaseChatModel - The language model used for argument verification.
context
str - The context for argument generation.
agent
DefaultExecutionAgent - The agent using the verifier model.
invoke
def invoke(state: MessagesState) -> dict[str, Any]
Invoke the model with the given message state.
Arguments:
state
MessagesState - The current state of the conversation.
Returns:
dict[str, Any]: The response after invoking the model.
Raises:
InvalidRunStateError
- If the agent's tool is not available.
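Continuing the sketch above, the verifier can be fed the state update returned by the parser; treating that update as a MessagesState is an assumption, since both models normally run as nodes in the agent's graph:
```python
from typing import Any

from langchain_core.language_models import BaseChatModel

from portia.execution_agents.default_execution_agent import DefaultExecutionAgent, VerifierModel


def verify_parsed_args(
    llm: BaseChatModel,
    context: str,
    agent: DefaultExecutionAgent,
    parser_state: dict[str, Any],
) -> dict[str, Any]:
    """Verify the ToolInputs message produced by ParserModel.invoke."""
    verifier = VerifierModel(llm=llm, context=context, agent=agent)
    # parser_state is assumed to be the {"messages": [...]} dict the parser returned.
    return verifier.invoke(parser_state)
```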
ToolCallingModel Objects
class ToolCallingModel()
Model to call the tool with the verified arguments.
__init__
def __init__(llm: BaseChatModel, context: str, tools: list[StructuredTool],
agent: DefaultExecutionAgent) -> None
Initialize the model.
Arguments:
llm
BaseChatModel - The language model used to call the tool.
context
str - The context for argument generation.
agent
DefaultExecutionAgent - The agent using the tool calling model.
tools
list[StructuredTool] - The tools to pass to the model.
invoke
def invoke(state: MessagesState) -> dict[str, Any]
Invoke the model with the given message state.
Arguments:
state
MessagesState - The current state of the conversation.
Returns:
dict[str, Any]: The response after invoking the model.
Raises:
InvalidRunStateError
- If the agent's tool is not available.
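A hedged sketch of constructing the tool-calling step with a toy StructuredTool; the send_email function and the shape of verified_state are assumptions for illustration:
```python
from typing import Any

from langchain_core.language_models import BaseChatModel
from langchain_core.tools import StructuredTool

from portia.execution_agents.default_execution_agent import DefaultExecutionAgent, ToolCallingModel


def send_email(recipient_email: str, subject: str, body: str) -> str:
    """Toy tool used only for this sketch."""
    return f"Sent '{subject}' to {recipient_email}"


def call_tool(
    llm: BaseChatModel,
    context: str,
    agent: DefaultExecutionAgent,
    verified_state: dict[str, Any],
) -> dict[str, Any]:
    """Invoke the tool-calling model with the verifier's state update."""
    tools = [StructuredTool.from_function(send_email)]
    caller = ToolCallingModel(llm=llm, context=context, tools=tools, agent=agent)
    # verified_state is assumed to carry the VerifiedToolInputs message.
    return caller.invoke(verified_state)
```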
DefaultExecutionAgent Objects
class DefaultExecutionAgent(BaseExecutionAgent)
Agent responsible for achieving a task by using verification.
This agent does the following things:
1. It uses an LLM to make sure that we have the right arguments for the tool, with explanations of the values and where they come from.
2. It uses an LLM to make sure that the arguments are correct, and that they are labeled as provided, inferred or assumed.
3. If any of the arguments are assumed, it will request a clarification.
4. If the arguments are correct, it will call the tool and return the result to the user.
5. If the tool fails, it will try again, up to 3 times.
Also, if the agent is being called a second time, it will just jump to step 4.
Possible improvements:
- This approach (as well as the other agents) could be improved for arguments that are lists.
__init__
def __init__(step: Step,
plan_run: PlanRun,
config: Config,
tool: Tool | None = None) -> None
Initialize the agent.
Arguments:
step
Step - The current step in the task plan.
plan_run
PlanRun - The run that defines the task execution process.
config
Config - The configuration settings for the agent.
tool
Tool | None - The tool to be used for the task (optional).
clarifications_or_continue
def clarifications_or_continue(
state: MessagesState) -> Literal[AgentNode.TOOL_AGENT, END]
Determine if we should continue with the tool call or request clarifications instead.
Arguments:
state
MessagesState - The current state of the conversation.
Returns:
Literal[AgentNode.TOOL_AGENT, END]: The next node we should route to.
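This method is intended to be used as a LangGraph routing function. The sketch below shows the kind of wiring involved; the source node name is hypothetical and the real graph is built inside the agent:
```python
from langgraph.graph import MessagesState, StateGraph

from portia.execution_agents.default_execution_agent import DefaultExecutionAgent


def routing_sketch(agent: DefaultExecutionAgent) -> StateGraph:
    """Illustrative only; not the agent's actual graph."""
    graph = StateGraph(MessagesState)
    # ... parser, verifier and tool-agent nodes would be registered here ...
    graph.add_conditional_edges(
        "argument_verifier",               # hypothetical node name
        agent.clarifications_or_continue,  # returns AgentNode.TOOL_AGENT or END
    )
    return graph
```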
get_last_resolved_clarification
def get_last_resolved_clarification(arg_name: str) -> Clarification | None
Return the last argument clarification that matches the given arg_name.
Arguments:
arg_name
str - The name of the argument to match clarifications for.
Returns:
Clarification | None: The matched clarification, if one exists.
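For example (the argument name is hypothetical, and exposing the user's answer via a response field is an assumption):
```python
from typing import Any

from portia.execution_agents.default_execution_agent import DefaultExecutionAgent


def resolved_value_for(agent: DefaultExecutionAgent, arg_name: str) -> Any:
    """Return the user's answer for arg_name, if a resolved clarification exists."""
    clarification = agent.get_last_resolved_clarification(arg_name)
    return None if clarification is None else clarification.response
```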
execute_sync
def execute_sync() -> Output
Run the core execution logic of the task.
This method will invoke the tool with arguments that are parsed and verified first.
Returns:
Output
- The result of the agent's execution, containing the tool call result.
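Putting it together, a hedged sketch of running the agent for one step; how the step, plan run, config and tool are obtained is outside this module and assumed here:
```python
from portia.execution_agents.default_execution_agent import DefaultExecutionAgent


def run_step(step, plan_run, config, tool):
    """Execute one plan step (Step, PlanRun, Config, Tool | None) and return its Output."""
    agent = DefaultExecutionAgent(step=step, plan_run=plan_run, config=config, tool=tool)
    # execute_sync parses and verifies the tool arguments, raises clarifications
    # for made-up values, and otherwise calls the tool.
    return agent.execute_sync()
```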