Install and setup

Let's get you set up and run a test query to make sure everything is in order.

Requirements

Portia requires Python 3.11 or above. If you need to update your Python version, please visit the official Python docs. If you are unsure which Python version you have, you can check using

python3 --version
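If you prefer, you can also check from within Python itself. This is a quick sketch (not part of the Portia SDK) that reports whether your interpreter meets the 3.11 requirement:

```python
import sys

# Portia requires Python 3.11 or above.
meets_requirement = sys.version_info >= (3, 11)
print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
      f"{'OK' if meets_requirement else 'upgrade needed'}")
```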

Install the Portia Python SDK

Run the following command to install our SDK and its dependencies.

pip install portia-sdk-python
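If you like to keep project dependencies isolated (a common Python practice, not a Portia requirement), you can create and activate a virtual environment first and then run the install command inside it. The `.venv` directory name below is just a convention:

```shell
# Create and activate a virtual environment (the name .venv is a convention).
python3 -m venv .venv
. .venv/bin/activate
# Confirm pip is available inside the environment before installing the SDK.
python -m pip --version
```

With the environment activated, `pip install portia-sdk-python` installs the SDK into `.venv` rather than system-wide.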

Configure access to your preferred LLM

Set environment variables to connect to one of our supported LLMs. We are actively expanding this list.

OpenAI's gpt-4o-mini is set as the default model. You can sign up for an API key on the OpenAI platform.

export OPENAI_API_KEY='your-api-key-here'
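Before moving on, it can help to confirm the variable is actually visible to Python. This is a simple sanity-check sketch, not part of the Portia SDK:

```python
import os

# Report whether the API key is visible to the current process.
key = os.environ.get("OPENAI_API_KEY")
print("OPENAI_API_KEY is set" if key else "OPENAI_API_KEY is missing")
```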

Test your installation from the command line

Let's submit a basic prompt to your LLM using our framework to make sure it's all working fine. We will submit a simple maths question, which should invoke one of the open source tools in our SDK:

OpenAI is the default LLM provider. Just run:

portia-cli run "add 1 + 2"

Portia will return the final state of the plan run created in response to the submitted prompt. We will delve into plan run states more deeply in a later section. For now, you want to be sure the returned state includes "state": "COMPLETE" and the answer to your maths question, e.g. "final_output": {"value": 3.0}. Here's an example output:

{
  "id": "prun-13a97e70-2ca6-41c9-bc49-b7f84f6d3982",
  "plan_id": "plan-96693022-598e-458c-8d2f-44ba51d4f0b5",
  "current_step_index": 0,
  "clarifications": [],
  "state": "COMPLETE",
  "step_outputs": {
    "$result": {
      "value": 3.0
    }
  },
  "final_output": {
    "value": 3.0
  }
}
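Because the returned state is plain JSON, you can inspect it with Python's standard library. This sketch parses the example output above and pulls out the fields you should check:

```python
import json

# The example plan run state returned by `portia-cli run "add 1 + 2"`.
raw = """
{
  "id": "prun-13a97e70-2ca6-41c9-bc49-b7f84f6d3982",
  "plan_id": "plan-96693022-598e-458c-8d2f-44ba51d4f0b5",
  "current_step_index": 0,
  "clarifications": [],
  "state": "COMPLETE",
  "step_outputs": {"$result": {"value": 3.0}},
  "final_output": {"value": 3.0}
}
"""
run_state = json.loads(raw)
# Confirm the run completed and extract the answer.
print(run_state["state"])                  # COMPLETE
print(run_state["final_output"]["value"])  # 3.0
```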

Test your installation from a Python file

As a final verification step, set up the required environment variables, namely the relevant LLM API keys, in a .env file in a project directory of your choice. We can now replicate the CLI-driven test above from a Python file within that directory.

In your local .env file, set up your API key as an environment variable using OPENAI_API_KEY.
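A .env file only needs the key itself; dotenv files do not use the export keyword. For example:

```
OPENAI_API_KEY='your-api-key-here'
```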
Then create a file e.g. main.py in your project directory and paste the following code in.

main.py
from dotenv import load_dotenv
from portia import (
    Portia,
    example_tool_registry,
)

# Load the LLM API keys from your .env file.
load_dotenv()

# Instantiate Portia with the default config, which uses OpenAI, and with some example tools.
portia = Portia(tools=example_tool_registry)
# Run the test query and print the output!
plan_run = portia.run('add 1 + 2')
print(plan_run.model_dump_json(indent=2))

You should see output similar to the CLI-driven test we ran in step 4.

We will review the various elements in main.py in more detail in later sections. For now you should remember that:

  • You will use a Portia instance to handle user prompts.
  • A Portia instance expects a Config. This is where you can specify things like the model you want to use and where you want to store plan runs.
  • A Portia instance also expects tools. This can be a list of tools, or a ToolRegistry (i.e. a collection of tools you want to use).

If you got this far then we're off to the races 🐎. Let's get you set up with a Portia account so you can also use our cloud features. Don't worry, it comes with a free trial (Pricing page ↗) 😉