portia.llm_wrapper

Wrapper around different LLM providers, standardizing their usage.

This module provides an abstraction layer around various large language model (LLM) providers, allowing them to be treated uniformly in the application. It defines a base class BaseLLMWrapper and a concrete implementation LLMWrapper that handles communication with different LLM providers such as OpenAI, Anthropic, and MistralAI.

The LLMWrapper class includes methods to convert the provider's model to a LangChain-compatible model and to generate structured responses using the instructor library; a usage sketch follows the class list below.

Classes in this file include:

  • BaseLLMWrapper: An abstract base class for all LLM wrappers, providing a template for conversion methods.
  • LLMWrapper: A concrete implementation that supports different LLM providers and provides functionality for converting to LangChain models and generating responses using instructor.
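
A minimal usage sketch is shown below. The LLMModel member, API key, and response model are illustrative assumptions, not prescriptions:

```python
from pydantic import BaseModel, SecretStr

from portia.config import LLMModel  # assumed location of the LLMModel enum
from portia.llm_wrapper import LLMWrapper


class CityAnswer(BaseModel):
    """Illustrative Pydantic response model."""

    city: str


# GPT_4_O is an assumed enum member; use whichever model your provider exposes.
wrapper = LLMWrapper(model_name=LLMModel.GPT_4_O, api_key=SecretStr("sk-..."))

answer = wrapper.to_instructor(
    response_model=CityAnswer,
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(answer.city)
```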

BaseLLMWrapper Objects

class BaseLLMWrapper(ABC)

Abstract base class for LLM wrappers.

This abstract class defines the interface that all LLM wrappers must implement: a method for converting to a LangChain-compatible model (to_langchain) and a method for generating structured responses using the instructor library (to_instructor). A sketch of a custom subclass follows the method list below.

Methods:

  • to_langchain - Convert the LLM to a LangChain-compatible model.
  • to_instructor - Generate a structured response using the instructor library.
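
A hedged sketch of such a subclass, for a hypothetical provider; the raise statements mark where real client code would go:

```python
from langchain_core.language_models.chat_models import BaseChatModel
from openai.types.chat import ChatCompletionMessageParam
from pydantic import BaseModel

from portia.llm_wrapper import BaseLLMWrapper


class MyProviderWrapper(BaseLLMWrapper):
    """Hypothetical wrapper for a provider the SDK does not support."""

    def to_langchain(self) -> BaseChatModel:
        # Return a LangChain chat model configured for the custom provider.
        raise NotImplementedError("wire up the provider's LangChain chat model here")

    def to_instructor(
        self,
        response_model: type[BaseModel],
        messages: list[ChatCompletionMessageParam],
    ) -> BaseModel:
        # Use an instructor-patched client to coerce the reply into response_model.
        raise NotImplementedError("wire up an instructor-patched client here")
```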

__init__

def __init__(api_key: SecretStr) -> None

Initialize the base LLM wrapper.

Arguments:

  • api_key SecretStr - The API key for the LLM provider.

to_langchain

@abstractmethod
def to_langchain() -> BaseChatModel

Return a LangChain chat model based on the LLM provider.

Converts the LLM provider's model to a LangChain-compatible model for interaction within the LangChain framework.

Returns:

  • BaseChatModel - A LangChain-compatible model.

Raises:

  • NotImplementedError - If the method is not implemented by a subclass.

to_instructor

@abstractmethod
def to_instructor(response_model: type[T],
                  messages: list[ChatCompletionMessageParam]) -> T

Generate a response using instructor.

Arguments:

  • response_model type[T] - The Pydantic model to deserialize the response into.
  • messages list[ChatCompletionMessageParam] - The messages to send to the LLM.

Returns:

  • T - The deserialized response.

Raises:

  • NotImplementedError - If the method is not implemented by a subclass.

LLMWrapper Objects

class LLMWrapper(BaseLLMWrapper)

LLMWrapper class for different LLMs.

This class provides functionality for working with various LLM providers, such as OpenAI, Anthropic, and MistralAI. It includes methods to convert the LLM provider's model to a LangChain-compatible model and to generate structured responses using the instructor library.

Attributes:

  • model_name LLMModel - The name of the model to use.
  • api_key SecretStr - The API key for the LLM provider.
  • model_seed int - The seed for the model's random generation.
  • api_endpoint str | None - The API endpoint for the LLM provider (optional; many providers don't require one).

Methods:

  • to_langchain - Converts the LLM provider's model to a LangChain-compatible model.
  • to_instructor - Generates a response using instructor for the selected LLM provider.

__init__

def __init__(model_name: LLMModel,
             api_key: SecretStr,
             model_seed: int = 343,
             api_endpoint: str | None = None) -> None

Initialize the wrapper.

Arguments:

  • model_name LLMModel - The name of the LLM model to use.
  • api_key SecretStr - The API key for authentication with the LLM provider.
  • model_seed int, optional - Seed for the model's random generation. Defaults to 343.
  • api_endpoint str | None, optional - The API endpoint for the LLM provider. Defaults to None; see the sketch below.
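
For example, pointing the wrapper at a self-hosted or proxy endpoint (the URL and model member below are illustrative):

```python
from pydantic import SecretStr

from portia.config import LLMModel  # assumed location of the LLMModel enum
from portia.llm_wrapper import LLMWrapper

wrapper = LLMWrapper(
    model_name=LLMModel.GPT_4_O,  # assumed enum member
    api_key=SecretStr("sk-..."),
    model_seed=7,
    api_endpoint="https://llm.example.internal/v1",  # illustrative URL
)
```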

for_usage

@classmethod
def for_usage(cls, usage: str, config: Config) -> LLMWrapper

Create an LLMWrapper for the given usage, using the model and credentials that the config specifies for that usage.
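
A sketch of creating a wrapper this way (Config.from_default and the "planner" usage key are assumptions; check portia.config for the constructors and usage keys your version defines):

```python
from portia.config import Config
from portia.llm_wrapper import LLMWrapper

config = Config.from_default()  # assumed convenience constructor
# "planner" is an illustrative usage key, not necessarily one your version defines.
wrapper = LLMWrapper.for_usage("planner", config)
```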

to_langchain

def to_langchain() -> BaseChatModel

Return a LangChain chat model based on the LLM provider.

Converts the LLM provider's model to a LangChain-compatible model for interaction within the LangChain framework.

Returns:

  • BaseChatModel - A LangChain-compatible model.
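
For example, the returned model can be used anywhere LangChain accepts a chat model (a sketch; wrapper is an LLMWrapper from the examples above):

```python
chat_model = wrapper.to_langchain()

# BaseChatModel.invoke accepts a plain string prompt.
result = chat_model.invoke("Summarise the plan in one sentence.")
print(result.content)
```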

to_instructor

def to_instructor(response_model: type[T],
                  messages: list[ChatCompletionMessageParam]) -> T

Use instructor to generate an object of the specified response model type.

Arguments:

  • response_model type[T] - The Pydantic model to deserialize the response into.
  • messages list[ChatCompletionMessageParam] - The messages to send to the LLM.

Returns:

  • T - The deserialized response from the LLM provider.
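
A sketch of a structured call (the Sentiment model and the messages are illustrative):

```python
from pydantic import BaseModel


class Sentiment(BaseModel):
    """Illustrative response model."""

    label: str
    confidence: float


sentiment = wrapper.to_instructor(
    response_model=Sentiment,
    messages=[
        {"role": "system", "content": "Classify the sentiment of the user text."},
        {"role": "user", "content": "I love this library!"},
    ],
)
print(sentiment.label, sentiment.confidence)
```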