portia.model
LLM provider model classes for Portia Agents.
Message Objects
class Message(BaseModel)
Portia LLM message class.
from_langchain
@classmethod
def from_langchain(cls, message: BaseMessage) -> Message
Create a Message from a LangChain message.
Arguments:
message
BaseMessage - The LangChain message to convert.
Returns:
Message
- The converted message.
to_langchain
def to_langchain() -> BaseMessage
Convert to LangChain BaseMessage sub-type.
Returns:
BaseMessage
- The converted message, subclass of LangChain's BaseMessage.
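The `from_langchain`/`to_langchain` pair above is essentially a role mapping between Portia's message type and LangChain's message subclasses. The sketch below illustrates the idea with stdlib stand-ins; the class names mirror LangChain's `HumanMessage`/`AIMessage`/`SystemMessage`, and the assumption that Portia's `Message` carries `role` and `content` fields is mine, not stated in this reference.

```python
from dataclasses import dataclass


# Stand-ins for LangChain's message sub-types, for illustration only.
@dataclass
class HumanMessage:
    content: str


@dataclass
class AIMessage:
    content: str


@dataclass
class SystemMessage:
    content: str


@dataclass
class Message:
    """Minimal stand-in for portia's Message (role/content fields assumed)."""

    role: str
    content: str

    @classmethod
    def from_langchain(cls, message) -> "Message":
        # Each LangChain message class maps to one role string.
        roles = {HumanMessage: "user", AIMessage: "assistant", SystemMessage: "system"}
        return cls(role=roles[type(message)], content=message.content)

    def to_langchain(self):
        # Reverse mapping back to a LangChain message sub-type.
        classes = {"user": HumanMessage, "assistant": AIMessage, "system": SystemMessage}
        return classes[self.role](content=self.content)


msg = Message.from_langchain(HumanMessage(content="hello"))
round_trip = msg.to_langchain()
```

A round trip through both conversions should preserve the role and content, which is what makes the pair safe to use at the LangChain boundary.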
LLMProvider Objects
class LLMProvider(Enum)
Enum for supported LLM providers.
Attributes:
OPENAI
- OpenAI provider.
ANTHROPIC
- Anthropic provider.
MISTRALAI
- MistralAI provider.
GOOGLE_GENERATIVE_AI
- Google Generative AI provider.
AZURE_OPENAI
- Azure OpenAI provider.
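A plain `Enum` mirroring the five members above is enough to see how such a provider enum is typically used, e.g. selecting a provider from a configuration string. The string values here are illustrative assumptions; only the member names come from this reference.

```python
from enum import Enum


class LLMProvider(Enum):
    """Mirror of the provider enum above (string values are assumed)."""

    OPENAI = "openai"
    ANTHROPIC = "anthropic"
    MISTRALAI = "mistralai"
    GOOGLE_GENERATIVE_AI = "google-generative-ai"
    AZURE_OPENAI = "azure-openai"


# Typical use: resolve a provider from a configured name.
provider = LLMProvider["OPENAI"]
```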
GenerativeModel Objects
class GenerativeModel(ABC)
Base class for all generative model clients.
__init__
def __init__(model_name: str) -> None
Initialize the model.
Arguments:
model_name
- The name of the model.
get_response
@abstractmethod
def get_response(messages: list[Message]) -> Message
Given a list of messages, call the model and return its response as a new message.
Arguments:
messages
list[Message] - The list of messages to send to the model.
Returns:
Message
- The response from the model.
get_structured_response
@abstractmethod
def get_structured_response(messages: list[Message],
schema: type[BaseModelT]) -> BaseModelT
Get a structured response from the model, given a Pydantic model.
Arguments:
messages
list[Message] - The list of messages to send to the model.
schema
type[BaseModelT] - The Pydantic model to use for the response.
Returns:
BaseModelT
- The structured response from the model.
__str__
def __str__() -> str
Get the string representation of the model.
__repr__
def __repr__() -> str
Get the string representation of the model.
to_langchain
@abstractmethod
def to_langchain() -> BaseChatModel
Get the LangChain client.
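Concrete model clients subclass this base and implement the abstract methods. A toy subclass makes the contract visible; everything below is a stdlib sketch of the shape described above (with `to_langchain` and `get_structured_response` omitted for brevity), not Portia's actual implementation.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Message:
    """Stand-in for portia's Message (role/content fields assumed)."""

    role: str
    content: str


class GenerativeModel(ABC):
    """Sketch of the abstract base: subclasses supply the response method."""

    def __init__(self, model_name: str) -> None:
        self.model_name = model_name

    @abstractmethod
    def get_response(self, messages: list[Message]) -> Message:
        """Call the model and return its response as a new message."""

    def __str__(self) -> str:
        return self.model_name


class EchoModel(GenerativeModel):
    """Toy subclass: replies with the last message's content as the assistant."""

    def get_response(self, messages: list[Message]) -> Message:
        return Message(role="assistant", content=messages[-1].content)


model = EchoModel("echo-1")
reply = model.get_response([Message(role="user", content="ping")])
```

Because `get_response` is abstract, instantiating a subclass that fails to implement it raises `TypeError` at construction time, which is the point of the ABC.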
LangChainGenerativeModel Objects
class LangChainGenerativeModel(GenerativeModel)
Base class for LangChain-based models.
__init__
def __init__(client: BaseChatModel, model_name: str) -> None
Initialize with LangChain client.
Arguments:
client
- LangChain chat model instance.
model_name
- The name of the model.
to_langchain
def to_langchain() -> BaseChatModel
Get the LangChain client.
get_response
def get_response(messages: list[Message]) -> Message
Get response using LangChain model.
get_structured_response
def get_structured_response(messages: list[Message], schema: type[BaseModelT],
**kwargs: Any) -> BaseModelT
Get structured response using LangChain model.
Arguments:
messages
list[Message] - The list of messages to send to the model.
schema
type[BaseModelT] - The Pydantic model to use for the response.
**kwargs
- Additional keyword arguments to pass to the with_structured_output method.
Returns:
BaseModelT
- The structured response from the model.
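The core of any structured-response path is coercing the model's raw output into an instance of the caller's schema. The sketch below shows just that parse-and-construct step with a stdlib dataclass standing in for the Pydantic model; the real method delegates to LangChain's `with_structured_output` rather than parsing JSON by hand, and `WeatherReport` is a made-up example schema.

```python
import json
from dataclasses import dataclass, fields


@dataclass
class WeatherReport:
    """Example target schema (a dataclass stands in for the Pydantic model)."""

    city: str
    temperature_c: float


def get_structured_response(raw_model_output: str, schema):
    """Parse JSON model output into an instance of ``schema``.

    Illustrative only: the documented method hands this job to LangChain's
    with_structured_output machinery instead of parsing by hand.
    """
    data = json.loads(raw_model_output)
    # Drop any keys the schema does not declare before constructing it.
    allowed = {f.name for f in fields(schema)}
    return schema(**{k: v for k, v in data.items() if k in allowed})


report = get_structured_response('{"city": "Oslo", "temperature_c": -3.5}', WeatherReport)
```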
OpenAIGenerativeModel Objects
class OpenAIGenerativeModel(LangChainGenerativeModel)
OpenAI model implementation.
__init__
def __init__(*,
model_name: str,
api_key: SecretStr,
seed: int = 343,
max_retries: int = 3,
temperature: float = 0,
**kwargs: Any) -> None
Initialize with OpenAI client.
Arguments:
model_name
- OpenAI model to use.
api_key
- API key for OpenAI.
seed
- Random seed for model generation.
max_retries
- Maximum number of retries.
temperature
- Temperature parameter.
**kwargs
- Additional keyword arguments to pass to ChatOpenAI.
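The constructor above is keyword-only (`*,`) and forwards its settings to the underlying LangChain client. A minimal sketch of that pattern, with a fake stand-in for `ChatOpenAI` so nothing needs network access or credentials:

```python
from dataclasses import dataclass


@dataclass
class FakeChatOpenAI:
    """Stand-in for langchain_openai.ChatOpenAI, for illustration only."""

    model: str
    seed: int
    max_retries: int
    temperature: float


class OpenAIGenerativeModel:
    """Sketch of the keyword-only constructor documented above."""

    def __init__(
        self,
        *,
        model_name: str,
        api_key: str,
        seed: int = 343,
        max_retries: int = 3,
        temperature: float = 0,
        **kwargs,
    ) -> None:
        self.model_name = model_name
        # Settings are forwarded to the underlying LangChain client.
        self._client = FakeChatOpenAI(
            model=model_name, seed=seed, max_retries=max_retries, temperature=temperature
        )


model = OpenAIGenerativeModel(model_name="gpt-4o", api_key="sk-test")
```

The `*` forces every argument to be passed by name, so call sites stay readable and adding new parameters later cannot silently shift positional arguments.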
get_structured_response
def get_structured_response(messages: list[Message], schema: type[BaseModelT],
**kwargs: Any) -> BaseModelT
Call the model in structured output mode targeting the given Pydantic model.
Arguments:
messages
list[Message] - The list of messages to send to the model.
schema
type[BaseModelT] - The Pydantic model to use for the response.
**kwargs
- Additional keyword arguments to pass to the model.
Returns:
BaseModelT
- The structured response from the model.
get_structured_response_instructor
def get_structured_response_instructor(messages: list[Message],
schema: type[BaseModelT]) -> BaseModelT
Get structured response using instructor.
AzureOpenAIGenerativeModel Objects
class AzureOpenAIGenerativeModel(LangChainGenerativeModel)
Azure OpenAI model implementation.
__init__
def __init__(*,
model_name: str,
api_key: SecretStr,
azure_endpoint: str,
api_version: str = "2025-01-01-preview",
seed: int = 343,
max_retries: int = 3,
temperature: float = 0,
**kwargs: Any) -> None
Initialize with Azure OpenAI client.
Arguments:
model_name
- OpenAI model to use.
azure_endpoint
- Azure OpenAI endpoint.
api_version
- Azure API version.
seed
- Random seed for model generation.
api_key
- API key for Azure OpenAI.
max_retries
- Maximum number of retries.
temperature
- Temperature parameter (defaults to 1 for O_3_MINI, 0 otherwise).
**kwargs
- Additional keyword arguments to pass to AzureChatOpenAI.
get_structured_response
def get_structured_response(messages: list[Message], schema: type[BaseModelT],
**kwargs: Any) -> BaseModelT
Call the model in structured output mode targeting the given Pydantic model.
Arguments:
messages
list[Message] - The list of messages to send to the model.
schema
type[BaseModelT] - The Pydantic model to use for the response.
**kwargs
- Additional keyword arguments to pass to the model.
Returns:
BaseModelT
- The structured response from the model.
get_structured_response_instructor
def get_structured_response_instructor(messages: list[Message],
schema: type[BaseModelT]) -> BaseModelT
Get structured response using instructor.
AnthropicGenerativeModel Objects
class AnthropicGenerativeModel(LangChainGenerativeModel)
Anthropic model implementation.
__init__
def __init__(*,
model_name: str = "claude-3-5-sonnet-latest",
api_key: SecretStr,
timeout: int = 120,
max_retries: int = 3,
max_tokens: int = 8096,
**kwargs: Any) -> None
Initialize with Anthropic client.
Arguments:
model_name
- Name of the Anthropic model.
timeout
- Request timeout in seconds.
max_retries
- Maximum number of retries.
max_tokens
- Maximum number of tokens to generate.
api_key
- API key for Anthropic.
**kwargs
- Additional keyword arguments to pass to ChatAnthropic.
get_structured_response
def get_structured_response(messages: list[Message], schema: type[BaseModelT],
**kwargs: Any) -> BaseModelT
Call the model in structured output mode targeting the given Pydantic model.
Arguments:
messages
list[Message] - The list of messages to send to the model.
schema
type[BaseModelT] - The Pydantic model to use for the response.
**kwargs
- Additional keyword arguments to pass to the model.
Returns:
BaseModelT
- The structured response from the model.
get_structured_response_instructor
def get_structured_response_instructor(messages: list[Message],
schema: type[BaseModelT]) -> BaseModelT
Get structured response using instructor.
map_message_to_instructor
def map_message_to_instructor(message: Message) -> ChatCompletionMessageParam
Map a Message to ChatCompletionMessageParam.
Arguments:
message
Message - The message to map.
Returns:
ChatCompletionMessageParam
- Message in the format expected by instructor.
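The instructor library consumes messages in the OpenAI chat-completions dict shape, so this helper is a straightforward field mapping. A hedged sketch, using a plain dict in place of Portia's `Message` and assuming the usual `role`/`content` fields:

```python
def map_message_to_instructor(message: dict) -> dict:
    """Sketch: map a message to the {"role": ..., "content": ...} dict shape
    that instructor (and the OpenAI chat API) expect."""
    allowed_roles = {"user", "assistant", "system"}
    if message["role"] not in allowed_roles:
        raise ValueError(f"unsupported role: {message['role']}")
    return {"role": message["role"], "content": message["content"]}


param = map_message_to_instructor({"role": "user", "content": "Summarise this."})
```

Rejecting unknown roles up front keeps a malformed message from surfacing as an opaque API error later.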