
portia.execution_agents.execution_utils

Agent execution utilities.

This module contains utility functions for managing agent execution flow.

AgentNode Objects

class AgentNode(str, Enum)

Nodes for agent execution.

This enumeration defines the different types of nodes that can be encountered during the agent execution process.

Attributes:

  • TOOL_AGENT str - A node representing the tool agent.
  • SUMMARIZER str - A node representing the summarizer.
  • TOOLS str - A node representing the tools.
  • ARGUMENT_VERIFIER str - A node representing the argument verifier.
  • ARGUMENT_PARSER str - A node representing the argument parser.
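
Because AgentNode mixes in str, its members compare equal to plain strings and can be used directly wherever a node-name string is expected (for example, when wiring a LangGraph graph). A minimal sketch, where the member values shown are illustrative assumptions rather than the SDK's actual values:

```python
from enum import Enum

class AgentNode(str, Enum):
    """Sketch of the documented enum; the values here are assumptions."""
    TOOL_AGENT = "tool_agent"
    SUMMARIZER = "summarizer"
    TOOLS = "tools"
    ARGUMENT_VERIFIER = "argument_verifier"
    ARGUMENT_PARSER = "argument_parser"

# str subclassing means members compare equal to plain strings,
# so they can name graph nodes without calling .value.
assert AgentNode.TOOL_AGENT == "tool_agent"
assert isinstance(AgentNode.SUMMARIZER, str)
```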

next_state_after_tool_call

def next_state_after_tool_call(
        state: MessagesState,
        tool: Tool | None = None
) -> Literal[AgentNode.TOOL_AGENT, AgentNode.SUMMARIZER, END]

Determine the next state after a tool call.

This function checks the state after a tool call to determine if the run should proceed to the tool agent again, to the summarizer, or end.

Arguments:

  • state MessagesState - The current state of the messages.
  • tool Tool | None - The tool involved in the call, if any.

Returns:

Literal[AgentNode.TOOL_AGENT, AgentNode.SUMMARIZER, END]: The next state to transition to.

Raises:

  • ToolRetryError - If the tool has an error and the maximum retry limit has not been reached.
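
The routing behaviour can be sketched with plain values in place of MessagesState. The error marker, retry limit, and function shape below are hypothetical simplifications, not the SDK's implementation (in the real module, END is LangGraph's terminal-state sentinel imported from langgraph.graph):

```python
from enum import Enum

class AgentNode(str, Enum):
    TOOL_AGENT = "tool_agent"
    SUMMARIZER = "summarizer"

END = "__end__"   # stand-in for LangGraph's terminal-state sentinel
MAX_RETRIES = 4   # hypothetical retry limit

def route_after_tool_call(last_content: str, attempts: int, summarize: bool) -> str:
    # A soft tool error with retries remaining loops back to the tool agent.
    if "ToolSoftError" in last_content and attempts < MAX_RETRIES:
        return AgentNode.TOOL_AGENT
    # A successful call whose tool wants its output summarized goes on
    # to the summarizer node.
    if summarize:
        return AgentNode.SUMMARIZER
    # Otherwise the run is complete.
    return END
```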

is_clarification

def is_clarification(artifact: Any) -> bool

Check if the artifact is a clarification or list of clarifications.
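
A sketch of the check, using a stand-in dataclass for the SDK's Clarification model; the real function tests against portia's own Clarification type:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Clarification:
    """Stand-in for portia's Clarification model."""
    user_guidance: str

def is_clarification(artifact: Any) -> bool:
    """True for a single Clarification or a non-empty list of them."""
    if isinstance(artifact, Clarification):
        return True
    return (
        isinstance(artifact, list)
        and bool(artifact)
        and all(isinstance(item, Clarification) for item in artifact)
    )
```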

tool_call_or_end

def tool_call_or_end(state: MessagesState) -> Literal[AgentNode.TOOLS, END]

Determine if tool execution should continue.

This function checks if the current state indicates that the tool execution should continue, or if the run should end.

Arguments:

  • state MessagesState - The current state of the messages.

Returns:

Literal[AgentNode.TOOLS, END]: The next state to transition to: the tools node if tool calls are pending, otherwise END.
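
This mirrors LangGraph's standard tool-routing pattern: inspect the last message for pending tool calls. A simplified sketch, with plain strings standing in for the enum member and END sentinel and a SimpleNamespace standing in for a LangChain AIMessage:

```python
from types import SimpleNamespace

def tool_call_or_end(last_message) -> str:
    # The model's last message carries a tool_calls list when it wants a
    # tool executed; an empty list means it produced a final answer.
    if getattr(last_message, "tool_calls", None):
        return "tools"
    return "__end__"

# Example: a message requesting a tool call routes to the tools node.
msg = SimpleNamespace(tool_calls=[{"name": "search", "args": {}}])
assert tool_call_or_end(msg) == "tools"
```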

process_output

def process_output(
        messages: list[BaseMessage],
        tool: Tool | None = None,
        clarifications: list[Clarification] | None = None
) -> Output

Process the output of the agent.

This function processes the agent's output based on the type of message received. It raises errors if the tool encounters issues and returns the appropriate output.

Arguments:

  • messages list[BaseMessage] - The messages received from the agent's plan_run.
  • tool Tool | None - The tool associated with the agent, if any.
  • clarifications list[Clarification] | None - A list of clarifications, if any.

Returns:

  • Output - The processed output, which can be an error, tool output, or clarification.

Raises:

  • ToolRetryError - If there was a soft error with the tool and retries are allowed.
  • ToolFailedError - If there was a hard error with the tool.
  • InvalidAgentOutputError - If the output from the agent is invalid.

map_message_types_for_instructor

def map_message_types_for_instructor(
        messages: list[BaseMessage]
) -> list[ChatCompletionMessageParam]

Map the message types to the correct format for the LLM provider.

Arguments:

  • messages list[BaseMessage] - The input LangChain messages.

Returns:

  • list[ChatCompletionMessageParam] - The mapped messages, ready to pass to the LLM provider.
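
A sketch of the mapping, with plain dicts standing in for LangChain BaseMessage objects (which expose a type such as "human", "ai", or "system") and for the OpenAI-style role/content dicts the provider expects; the role table is an assumption:

```python
# Hypothetical role mapping from LangChain message types to
# OpenAI-style chat-completion roles.
ROLE_BY_TYPE = {"human": "user", "ai": "assistant", "system": "system"}

def map_message_types_for_instructor(messages: list[dict]) -> list[dict]:
    # Each LangChain message becomes a {"role": ..., "content": ...} dict.
    return [
        {"role": ROLE_BY_TYPE[m["type"]], "content": m["content"]}
        for m in messages
    ]
```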

invoke_structured_output

def invoke_structured_output(
        model: BaseChatModel,
        response_model: type[BaseModel],
        messages: list[BaseMessage]
) -> dict[Any, Any] | BaseModel

Invoke a model with structured output.

This function dispatches to the structured-output method supported by the LLM provider.

Arguments:

  • model BaseChatModel - The LangChain model to invoke.
  • response_model type[BaseModel] - The Pydantic model to use as the schema for structured output.
  • messages list[BaseMessage] - The message input to the model.

Returns:

  • dict[Any, Any] | BaseModel - The deserialized Pydantic model (or raw dict) from the LLM provider.