llm

This module defines the base classes for the LLM interface.

Classes:

| Name | Description |
| --- | --- |
| `BaseLLM` | The base LLM class that all LLMs should inherit from. |
| `BaseLLMParams` | The base LLM params that are common to all LLMs. |
| `LLMResponse` | The response from the LLM. |
| `LLMSettings` | The settings for the LLM. Defines the model and version to use. |

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `LLM` | `TypeVar` | Type variable for LLM |
| `LLMParamsType` | `TypeVar` | Type variable for LLM params |

LLM module-attribute

```python
LLM = TypeVar('LLM', bound=BaseLLM)
```

Type variable for LLM

LLMParamsType module-attribute

```python
LLMParamsType = TypeVar(
    "LLMParamsType", bound=BaseLLMParams
)
```

Type variable for LLM params
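
For orientation, here is a minimal sketch of how these type variables can parameterize generic code; the `PairedOperator` class below is a hypothetical illustration, not part of this module:

```python
from typing import Generic

from declarai.operators.llm import LLM, LLMParamsType


class PairedOperator(Generic[LLM, LLMParamsType]):
    """Hypothetical: pairs a concrete LLM with its matching params type."""

    def __init__(self, llm: LLM, params: LLMParamsType):
        self.llm = llm
        self.params = params
```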

BaseLLM

The base LLM class that all LLMs should inherit from.

Methods:

| Name | Description |
| --- | --- |
| `predict` | The predict method that all LLMs should implement. |

predict abstractmethod

```python
predict(*args, **kwargs) -> LLMResponse
```

The predict method that all LLMs should implement. It accepts arbitrary `*args` and `**kwargs`, as defined by the concrete implementation.

Returns: an `LLMResponse` object.

Source code in src/declarai/operators/llm.py

```python
@abstractmethod
def predict(self, *args, **kwargs) -> LLMResponse:
    """
    The predict method that all LLMs should implement.
    Args:
        *args:
        **kwargs:

    Returns: llm response object

    """
    raise NotImplementedError()
```
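
As a hedged sketch of the contract (the `EchoLLM` class is hypothetical, not part of the library): subclass `BaseLLM` and implement `predict` so that it returns an `LLMResponse`:

```python
from declarai.operators.llm import BaseLLM, LLMResponse


class EchoLLM(BaseLLM):
    """Hypothetical LLM that echoes its prompt, illustrating the predict contract."""

    def predict(self, prompt: str, **kwargs) -> LLMResponse:
        # A real implementation would call a provider API here.
        return LLMResponse(response=prompt, model="echo-1")
```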

BaseLLMParams

Bases: TypedDict

The base LLM params that are common to all LLMs.
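
Since `BaseLLMParams` is a `TypedDict`, provider-specific params can be expressed as subclasses. A minimal sketch, assuming a hypothetical provider and assuming `BaseLLMParams` declares no required keys of its own; the class name and keys below are illustrative, not the library's actual definitions:

```python
from declarai.operators.llm import BaseLLMParams


class MyProviderLLMParams(BaseLLMParams, total=False):
    # Illustrative keys for a hypothetical provider; all optional.
    temperature: float
    max_tokens: int


params: MyProviderLLMParams = {"temperature": 0.2, "max_tokens": 256}
```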

LLMResponse

Bases: BaseModel

The response from the LLM.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `response` | `str` | The raw response from the LLM |
| `model` | `Optional[str]` | The model that was used to generate the response |
| `prompt_tokens` | `Optional[int]` | The number of tokens in the prompt |
| `completion_tokens` | `Optional[int]` | The number of tokens in the completion |
| `total_tokens` | `Optional[int]` | The total number of tokens in the response |
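
`LLMResponse` is a Pydantic model, so it can be constructed directly; only `response` is required. A minimal example:

```python
from declarai.operators.llm import LLMResponse

resp = LLMResponse(
    response="Paris is the capital of France.",
    model="gpt-3.5-turbo",  # the remaining fields are optional metadata
    prompt_tokens=12,
    completion_tokens=9,
    total_tokens=21,
)
print(resp.response)  # "Paris is the capital of France."
```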

LLMSettings

```python
LLMSettings(
    provider: str,
    model: str,
    version: Optional[str] = None,
    **_: Optional[str]
)
```

The settings for the LLM. Defines the model and version to use.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `provider` | `str` | The provider of the model (openai, cohere, etc.) | required |
| `model` | `str` | The model to use (gpt-4, gpt-3.5-turbo, etc.) | required |
| `version` | `Optional[str]` | The version of the model to use (optional) | `None` |
| `**_` | `Optional[str]` | Any additional provider-specific params; these are ignored. | `{}` |

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `provider` | `str` | The provider of the model (openai, cohere, etc.) |
| `model` | `str` | The full model name to use. |
| `version` | | The version of the model to use (optional) |

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `model` | `str` | Some model providers allow defining a base model as well as a sub-model. |

Source code in src/declarai/operators/llm.py

```python
def __init__(
    self,
    provider: str,
    model: str,
    version: Optional[str] = None,
    **_,
):
    self.provider = provider
    self._model = model
    self.version = version
```

model property

```python
model: str
```

Some model providers allow defining a base model as well as a sub-model. Often the base model is an alias for the latest model served under that name. For example, when sending `gpt-3.5-turbo` to OpenAI, the actual model will be one of the publicly available snapshots or an internally exposed version, as described on their website (as of 27/07/2023 - https://platform.openai.com/docs/models/continuous-model-upgrades):

> With the release of gpt-3.5-turbo, some of our models are now being continually updated.
> We also offer static model versions that developers can continue using for at least
> three months after an updated model has been introduced.

Another use case for sub-models is using your own fine-tuned models, as described in the documentation: https://platform.openai.com/docs/guides/fine-tuning/customize-your-model-name

You will likely build your fine-tuned model names by concatenating the base model name with the fine-tuned model name, separated by a delimiter such as a hyphen or colon. For example, `gpt-3.5-turbo-declarai-text-classification-2023-03` or `gpt-3.5-turbo:declarai:text-classification-2023-03`.

In any case, you can always pass the full model name in the `model` parameter and leave the `version` parameter empty if you prefer.
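
A minimal usage sketch, based only on the constructor shown above. Because `model` is a property, the exact string it returns when a `version` is supplied depends on provider-specific composition logic that is not shown here:

```python
from declarai.operators.llm import LLMSettings

settings = LLMSettings(provider="openai", model="gpt-3.5-turbo", version="0613")
print(settings.provider)  # "openai"
print(settings.version)   # "0613"
# The model property resolves the full model name; how a version is folded
# into it is provider-specific (see the property docstring above).
print(settings.model)
```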