
_base

Base classes for declarai tasks.

Classes:

| Name | Description |
| --- | --- |
| `BaseTask` | Base class for tasks. |

BaseTask

Base class for tasks.

Methods:

| Name | Description |
| --- | --- |
| `__call__` | Orchestrates the execution of the task. |
| `_exec` | Execute the task. |
| `_exec_middlewares` | Execute the task middlewares and the task itself. |
| `compile` | Compile the task to get the prompt sent to the LLM. |
| `stream_handler` | A generator that yields each chunk from the stream and collects them in a buffer. |

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `llm_params` | `LLMParamsType` | The LLM parameters that are saved on the operator. These parameters are sent to the LLM when the task is executed. |
| `llm_response` | `LLMResponse` | The response from the LLM. |
| `llm_stream_response` | `Iterator[LLMResponse]` | The response from the LLM when streaming. |
| `operator` | `BaseOperator` | The operator to use for the task. |

llm_params property

llm_params: LLMParamsType

Return the LLM parameters that are saved on the operator. These parameters are sent to the LLM when the task is executed.

Returns: The LLM parameters.

llm_response instance-attribute

llm_response: LLMResponse

The response from the LLM

llm_stream_response class-attribute instance-attribute

llm_stream_response: Iterator[LLMResponse] = None

The response from the LLM when streaming

operator instance-attribute

operator: BaseOperator

The operator to use for the task

__call__

__call__(*args, **kwargs)

Orchestrates the execution of the task.

Args:
    *args: Depends on the inherited class.
    **kwargs: Depends on the inherited class.

Returns: The result of the task, after parsing the result of the LLM.

Source code in src/declarai/_base.py
def __call__(self, *args, **kwargs):
    """
    Orchestrates the execution of the task
    Args:
        *args: Depends on the inherited class
        **kwargs: Depends on the inherited class

    Returns: The result of the task, after parsing the result of the llm.

    """
    pass
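As a rough illustration of this orchestration, here is a minimal, self-contained sketch of how a concrete subclass might wire `compile` and `_exec` together in `__call__`. `MiniTask` and `GreetTask` are hypothetical stand-ins for illustration only, not part of declarai's API.

```python
from abc import ABC, abstractmethod
from typing import Any


class MiniTask(ABC):
    """Illustrative stand-in for BaseTask (not declarai's actual class)."""

    @abstractmethod
    def compile(self, **kwargs) -> str: ...

    @abstractmethod
    def _exec(self, kwargs: dict) -> Any: ...

    def __call__(self, **kwargs) -> Any:
        # Orchestration: compile the prompt from runtime kwargs,
        # then execute the compiled prompt.
        prompt = self.compile(**kwargs)
        return self._exec({"prompt": prompt})


class GreetTask(MiniTask):
    def compile(self, **kwargs) -> str:
        return f"Say hello to {kwargs['name']}"

    def _exec(self, kwargs: dict) -> Any:
        # A real task would send the prompt to the LLM via its operator here.
        return f"<llm-output for: {kwargs['prompt']}>"


result = GreetTask()(name="Ada")
# result == "<llm-output for: Say hello to Ada>"
```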

_exec abstractmethod

_exec(kwargs: dict) -> Any

Execute the task.

Args:
    kwargs: the runtime keyword arguments that are used to compile the task prompt.

Returns: The result of the task, which is the result of the operator.

Source code in src/declarai/_base.py
@abstractmethod
def _exec(self, kwargs: dict) -> Any:
    """
    Execute the task
    Args:
        kwargs: the runtime keyword arguments that are used to compile the task prompt.

    Returns: The result of the task, which is the result of the operator.

    """
    pass

_exec_middlewares abstractmethod

_exec_middlewares(kwargs) -> Any

Execute the task middlewares and the task itself.

Args:
    kwargs: the runtime keyword arguments that are used to compile the task prompt.

Returns: The result of the task, which is the result of the operator. Same as `_exec`.

Source code in src/declarai/_base.py
@abstractmethod
def _exec_middlewares(self, kwargs) -> Any:
    """
    Execute the task middlewares and the task itself
    Args:
        kwargs: the runtime keyword arguments that are used to compile the task prompt.

    Returns: The result of the task, which is the result of the operator. Same as `_exec`.

    """
    pass
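To give a sense of what "execute the middlewares and the task itself" can mean, here is a generic middleware-folding sketch: each middleware wraps the core execution callable so it can run logic before and after the task. This is an illustrative pattern only; the names and folding strategy are assumptions, not declarai's implementation.

```python
from typing import Any, Callable, List

# A middleware receives the next callable in the chain plus the runtime
# kwargs, and may run logic before and after delegating to it.
Middleware = Callable[[Callable[[dict], Any], dict], Any]

log: List[str] = []


def logging_middleware(call_next: Callable[[dict], Any], kwargs: dict) -> Any:
    log.append(f"before {kwargs}")
    result = call_next(kwargs)
    log.append(f"after {result}")
    return result


def exec_with_middlewares(
    middlewares: List[Middleware],
    exec_fn: Callable[[dict], Any],
    kwargs: dict,
) -> Any:
    # Fold the middlewares around the core exec call, so the first
    # middleware in the list is the outermost wrapper.
    call = exec_fn
    for mw in reversed(middlewares):
        call = (lambda m, nxt: lambda kw: m(nxt, kw))(mw, call)
    return call(kwargs)


result = exec_with_middlewares(
    [logging_middleware], lambda kw: kw["x"] * 2, {"x": 21}
)
# result == 42; log holds one "before" and one "after" entry
```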

compile abstractmethod

compile(**kwargs) -> str

Compile the task to get the prompt sent to the LLM.

Args:
    **kwargs: the runtime keyword arguments that are placed within the prompt string.

Returns: The prompt string that is sent to the LLM.

Source code in src/declarai/_base.py
@abstractmethod
def compile(self, **kwargs) -> str:
    """
    Compile the task to get the prompt sent to the LLM
    Args:
        **kwargs: the runtime keyword arguments that are placed within the prompt string.

    Returns: The prompt string that is sent to the LLM

    """
    pass
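A concrete `compile` typically interpolates the runtime kwargs into a prompt template. The template and function below are purely illustrative assumptions, sketching the shape of such an implementation:

```python
# Hypothetical prompt template; a real task derives its template from
# the task's declaration (docstring, type hints, etc.).
PROMPT_TEMPLATE = "Translate the following text to {language}:\n{text}"


def compile_prompt(**kwargs) -> str:
    # Place the runtime keyword arguments within the prompt string.
    return PROMPT_TEMPLATE.format(**kwargs)


prompt = compile_prompt(language="French", text="Good morning")
# prompt == "Translate the following text to French:\nGood morning"
```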

stream_handler

stream_handler(
    stream: Iterator[LLMResponse],
) -> Iterator[LLMResponse]

A generator that yields each chunk from the stream and collects them in a buffer. After the stream is exhausted, it runs the cleanup logic.

Source code in src/declarai/_base.py
def stream_handler(self, stream: Iterator[LLMResponse]) -> Iterator[LLMResponse]:
    """
    A generator that yields each chunk from the stream and collects them in a buffer.
    After the stream is exhausted, it runs the cleanup logic.
    """
    response_buffer = []
    for chunk in stream:
        response_buffer.append(chunk)
        yield chunk

    # After the stream is exhausted, run the cleanup logic
    self.stream_cleanup(response_buffer[-1])
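The buffer-then-cleanup pattern above can be exercised standalone. The sketch below uses plain strings instead of `LLMResponse` chunks and passes the cleanup callback explicitly; it is an assumption-laden simplification, not declarai's code:

```python
from typing import Callable, Iterator, List


def stream_handler(
    stream: Iterator[str], cleanup: Callable[[str], None]
) -> Iterator[str]:
    # Yield each chunk to the caller while keeping a copy in a buffer;
    # once the stream is exhausted, hand the final chunk to cleanup().
    buffer: List[str] = []
    for chunk in stream:
        buffer.append(chunk)
        yield chunk
    cleanup(buffer[-1])


collected: List[str] = []
chunks = list(stream_handler(iter(["Hel", "lo", "!"]), collected.append))
# chunks == ["Hel", "lo", "!"]; cleanup received the last chunk "!"
```

Note that cleanup only runs once the consumer drains the generator; if the caller abandons the stream early, the cleanup logic never fires.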