portia.execution_agents.utils.step_summarizer
StepSummarizer implementation.
The StepSummarizer can be used by agents to summarize the output of a given tool.
StepSummarizer Objects
class StepSummarizer()
Class to summarize the output of a tool using an LLM.
This is used only on the tool output message.
Attributes:
summarizer_prompt
ChatPromptTemplate - The prompt template used to generate the summary.
llm
BaseChatModel - The language model used for summarization.
summary_max_length
int - The maximum length of the summary.
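The following sketch shows how these attributes might fit together; it is illustrative only, assumes langchain_core's ChatPromptTemplate and BaseChatModel, and the prompt text is a hypothetical stand-in for the library's actual template.

```python
# Illustrative sketch of the documented attributes (not the library's implementation).
from langchain_core.language_models import BaseChatModel
from langchain_core.prompts import ChatPromptTemplate

class StepSummarizerSketch:
    def __init__(self, llm: BaseChatModel, summary_max_length: int = 500) -> None:
        # Hypothetical prompt; the real template is defined inside the library.
        self.summarizer_prompt = ChatPromptTemplate.from_messages(
            [
                ("system", "Summarize the tool output in at most {max_length} characters."),
                ("human", "{tool_output}"),
            ]
        )
        self.llm = llm
        self.summary_max_length = summary_max_length
```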
__init__
def __init__(llm: BaseChatModel, summary_max_length: int = 500) -> None
Initialize the model.
Arguments:
llm
BaseChatModel - The language model used for summarization.
summary_max_length
int - The maximum length of the summary. Default is 500 characters.
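A minimal construction example, assuming langchain-openai is installed; any BaseChatModel implementation can be passed as `llm`, and the model name shown is only an example.

```python
from langchain_openai import ChatOpenAI
from portia.execution_agents.utils.step_summarizer import StepSummarizer

# Any BaseChatModel works here; ChatOpenAI is used purely for illustration.
summarizer = StepSummarizer(llm=ChatOpenAI(model="gpt-4o-mini"), summary_max_length=300)
```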
invoke
def invoke(state: MessagesState) -> dict[str, Any]
Invoke the model with the given message state.
This method inspects the last message in the state and checks whether it is a tool message with output. If so, it generates a summary of the tool's output and attaches the summary to that message's artifact.
Arguments:
state
MessagesState - The current message state, which includes the tool output to summarize.
Returns:
dict[str, Any]: A dict containing the updated message state, including the summary.
Raises:
Exception
- If an error occurs during the invocation of the summarizer model.
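A hedged usage sketch, assuming langgraph's MessagesState shape (a dict with a "messages" list) and a configured LLM backend for the summarizer constructed above.

```python
from langchain_core.messages import ToolMessage

# Build a minimal state whose last message is a tool message carrying raw output.
state = {"messages": [ToolMessage(content="<raw tool output>", tool_call_id="call-1")]}

# Returns the updated message state; per the description above, the summary is
# attached to the last message's artifact.
result = summarizer.invoke(state)
last_message = result["messages"][-1]
```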