LLM Tool

Usage

Portia offers both open source tools and a cloud-hosted library of tools to save you development time. You can dig into the specs of those tools in our open source repo (SDK repo ↗).

You can import our open source tools into your project using from portia.open_source_tools.registry import open_source_tool_registry and load them into an InMemoryToolRegistry object. You can also combine their use with cloud or custom tools as explained in the docs (Add custom tools ↗).
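
For example, a minimal sketch of loading the open source tools alongside a custom tool might look like the following. The import path for InMemoryToolRegistry, the from_local_tools helper, combining registries with +, and the Portia(tools=...) argument are assumptions here; check the SDK repo for the exact signatures.

from portia import Portia
from portia.open_source_tools.registry import open_source_tool_registry
from portia.tool_registry import InMemoryToolRegistry  # assumed import path

from my_project.tools import MyCustomTool  # hypothetical custom tool

# Wrap custom tools in an InMemoryToolRegistry, then combine that registry
# with the open source registry before handing everything to the client.
custom_registry = InMemoryToolRegistry.from_local_tools([MyCustomTool()])
portia = Portia(tools=open_source_tool_registry + custom_registry)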

Tool details

Tool ID: llm_tool

Tool description: Jack of all trades tool to respond to a prompt by relying solely on LLM capabilities. YOU NEVER CALL OTHER TOOLS. You use your native capabilities as an LLM only. This includes using your general knowledge, your in-built reasoning and your code interpreter capabilities. This tool can be used to summarize the outputs of other tools, make general language model queries or answer questions. It should be used only as a last resort when no other tool satisfies a step in a task; however, if no other tools can complete a step, or for steps that don't require a tool call, this SHOULD be used.

Args schema:

{
  "description": "Input for LLM Tool.",
  "properties": {
    "task": {
      "description": "The task to be completed by the LLM tool",
      "title": "Task",
      "type": "string"
    },
    "input_data": {
      "description": "Any relevant data that should be used to complete the task. Important: This should include all relevant data in their entirety, from the first to the last character (i.e. NOT a summary).",
      "items": {
        "type": "string"
      },
      "title": "Input Data",
      "type": "array"
    }
  },
  "required": [
    "task"
  ],
  "title": "LLMToolSchema",
  "type": "object"
}

Output schema:

('str', "The LLM's response to the user query.")
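
For illustration, the payload below conforms to LLMToolSchema and the tool's str output described above. The get_tool lookup and the pydantic-style args_schema validation are assumptions about the SDK's API; in normal use the planner selects llm_tool automatically for steps that need no other tool call.

from portia.open_source_tools.registry import open_source_tool_registry

# Build a payload that conforms to LLMToolSchema: `task` (required) describes
# what the LLM should do; `input_data` (optional) carries the full source
# material as a list of strings, never a summary.
llm_tool_args = {
    "task": "Summarise the key risks raised in the meeting notes below.",
    "input_data": ["<full meeting notes, pasted in their entirety>"],
}

# Hypothetical lookup and validation: this assumes the registry exposes
# get_tool() and that args_schema is a pydantic model -- check the SDK repo.
llm_tool = open_source_tool_registry.get_tool("llm_tool")
llm_tool.args_schema.model_validate(llm_tool_args)

# Per the output schema, the tool returns a single str: the LLM's response.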