operator ¤

Operator is a class that is used to wrap the compilation of prompts and the singular execution of the LLM.

Classes:

| Name | Description |
| --- | --- |
| `BaseChatOperator` | Base class for chat operators. |
| `BaseOperator` | Wraps the compilation of prompts and the singular execution of the LLM. |

BaseChatOperator ¤

BaseChatOperator(
    system: Optional[str] = None,
    greeting: Optional[str] = None,
    parsed: PythonParser = None,
    streaming: bool = None,
    **kwargs
)

Bases: BaseOperator

Base class for chat operators. It extends the BaseOperator class and adds additional attributes that are used for chat operators. See BaseOperator for more information.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `system` | `Optional[str]` | The system message that is used for the chat. | `None` |
| `greeting` | `Optional[str]` | The greeting message that is used for the chat. | `None` |
| `kwargs` |  | Enables passing all the required parameters for `BaseOperator`. | `{}` |

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `system` | `str` | The system message that is used for the chat. |
| `greeting` | `str` | The greeting message that is used for the chat. |
| `parsed_send_func` | `PythonParser` | The parsed object that is used to compile the send function. |

Methods:

| Name | Description |
| --- | --- |
| `compile_template` | Compiles the system prompt. |
| `parse_output` | Parses the raw output from the LLM into the desired format that was set in the parsed object. |
| `predict` | Executes prediction using the LLM. |

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `streaming` | `bool` | Whether the operator is streaming. |

Source code in src/declarai/operators/operator.py
def __init__(
    self,
    system: Optional[str] = None,
    greeting: Optional[str] = None,
    parsed: PythonParser = None,
    streaming: bool = None,
    **kwargs,
):
    super().__init__(parsed=parsed, streaming=streaming, **kwargs)
    self.system = system or self.parsed.docstring_freeform
    self.greeting = greeting or getattr(self.parsed.decorated, "greeting", None)
    self.parsed_send_func = (
        PythonParser(self.parsed.decorated.send)
        if getattr(self.parsed.decorated, "send", None)
        else None
    )
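
The fallback order in the constructor above (an explicit argument wins, otherwise values derived from the parsed object are used) can be sketched without the surrounding library. `ParsedStub` is a hypothetical stand-in for the relevant fields of `PythonParser`:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ParsedStub:
    """Hypothetical stand-in for the fields of PythonParser used here."""
    docstring_freeform: str
    decorated: object


class Chat:
    """Decorated chat class; its docstring would supply docstring_freeform."""
    greeting = "Hello! How can I help?"


def resolve_system(system: Optional[str], parsed: ParsedStub) -> str:
    # Explicit argument wins; otherwise fall back to the freeform docstring
    return system or parsed.docstring_freeform


def resolve_greeting(greeting: Optional[str], parsed: ParsedStub) -> Optional[str]:
    # Explicit argument wins; otherwise look for a `greeting` attribute
    return greeting or getattr(parsed.decorated, "greeting", None)


parsed = ParsedStub(docstring_freeform="You are a helpful assistant.", decorated=Chat)
print(resolve_system(None, parsed))    # falls back to the docstring
print(resolve_greeting(None, parsed))  # falls back to the class attribute
```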

streaming property ¤

streaming: bool

Returns whether the operator is streaming or not.

compile_template ¤

compile_template() -> Message

Compiles the system prompt. Returns the compiled system message.

Source code in src/declarai/operators/operator.py
def compile_template(self) -> Message:
    """
    Compiles the system prompt.
    Returns: The compiled system message
    """
    structured_template = StructuredOutputChatPrompt
    if self.parsed_send_func:
        output_schema = self._compile_output_prompt(structured_template)
    else:
        output_schema = None

    if output_schema:
        compiled_system_prompt = f"{self.system}\n{output_schema}"
    else:
        compiled_system_prompt = self.system
    return Message(message=compiled_system_prompt, role=MessageRole.system)
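
The assembly above reduces to joining the system message and an optional output schema with a newline. A minimal sketch, using a hypothetical `make_system_prompt` helper in place of the method body:

```python
from typing import Optional


def make_system_prompt(system: str, output_schema: Optional[str]) -> str:
    # Append the output schema on a new line only when one was compiled
    if output_schema:
        return f"{system}\n{output_schema}"
    return system


print(make_system_prompt("You are a helpful assistant.", None))
print(make_system_prompt("You are a helpful assistant.", "Answer as JSON."))
```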

parse_output ¤

parse_output(output: str) -> Any

Parses the raw output from the LLM into the desired format that was set in the parsed object.

Args:
    output: The raw string output from the LLM

Returns:

| Type | Description |
| --- | --- |
| `Any` | The parsed output |

Source code in src/declarai/operators/operator.py
def parse_output(self, output: str) -> Any:
    """
    Parses the raw output from the LLM into the desired format that was set in the parsed object.
    Args:
        output: llm string output

    Returns:
        Any parsed output
    """
    return self.parsed.parse(output)
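
`parse_output` delegates entirely to the parser object. A hypothetical `JsonParser` shows the shape of the `parse(output)` contract (the real `PythonParser` derives the target format from the decorated function; this is only an illustration):

```python
import json
from typing import Any


class JsonParser:
    """Hypothetical parser exposing the same parse(output) contract."""

    def parse(self, output: str) -> Any:
        # Turn the raw LLM string into a Python object
        return json.loads(output)


parser = JsonParser()
print(parser.parse('{"answer": 42}'))  # {'answer': 42}
```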

predict ¤

predict(
    *,
    llm_params: Optional[LLMParamsType] = None,
    **kwargs: object
) -> Union[LLMResponse, Iterator[LLMResponse]]

Executes prediction using the LLM. It first compiles the prompts using the compile method, and then executes the LLM with the compiled prompts and the llm_params.

Args:
    llm_params: The parameters that are passed during runtime. If provided, they override the ones provided during initialization.
    **kwargs: The keyword arguments to pass to the compile method. Used to format the prompt placeholders.

Returns:

| Type | Description |
| --- | --- |
| `Union[LLMResponse, Iterator[LLMResponse]]` | The response from the LLM |

Source code in src/declarai/operators/operator.py
def predict(
    self, *, llm_params: Optional[LLMParamsType] = None, **kwargs: object
) -> Union[LLMResponse, Iterator[LLMResponse]]:
    """
    Executes prediction using the LLM.
    It first compiles the prompts using the `compile` method, and then executes the LLM with the compiled prompts and the llm_params.
    Args:
        llm_params: The parameters that are passed during runtime. If provided, they will override the ones provided during initialization.
        **kwargs: The keyword arguments to pass to the `compile` method. Used to format the prompts placeholders.

    Returns:
        The response from the LLM
    """
    llm_params = llm_params or self.llm_params  # Order is important -
    if self.streaming is not None:
        llm_params["stream"] = self.streaming  # streaming should be the last param
    # provided params during execution should override the ones provided during initialization
    return self.llm.predict(**self.compile(**kwargs), **llm_params)
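
The parameter precedence in `predict` (runtime params replace the init-time ones, and the operator-level streaming flag overrides any `stream` key in either) can be sketched as plain dict handling. `resolve_llm_params` is a hypothetical helper, not part of the library:

```python
from typing import Optional


def resolve_llm_params(
    runtime: Optional[dict],
    init_time: dict,
    streaming: Optional[bool],
) -> dict:
    # Runtime params, when provided, replace the init-time ones wholesale
    params = dict(runtime or init_time)
    # The operator-level streaming flag, when set, wins over any "stream" key
    if streaming is not None:
        params["stream"] = streaming
    return params


print(resolve_llm_params({"temperature": 0.2}, {"temperature": 0.9}, True))
# {'temperature': 0.2, 'stream': True}
```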

BaseOperator ¤

BaseOperator(
    llm: LLM,
    parsed: PythonParser,
    llm_params: LLMParamsType = None,
    streaming: bool = None,
    **kwargs: Dict
)

Operator is a class that is used to wrap the compilation of prompts and the singular execution of the LLM.

Args:
    llm: The LLM to use for the operator
    parsed (PythonParser): The parsed object that is used to compile the prompts
    llm_params: The parameters to pass to the LLM
    streaming: Whether to use streaming or not
    kwargs: Enables passing of additional parameters to the operator

Attributes:
    llm (LLM): The LLM to use for the operator
    parsed (PythonParser): The parsed object that is used to compile the prompts
    llm_params (LLMParamsType): The parameters that were passed during initialization of the operator

Methods:

| Name | Description |
| --- | --- |
| `compile` | Compiles the prompts using the parsed object and returns the compiled prompts. |
| `parse_output` | Parses the raw output from the LLM into the desired format that was set in the parsed object. |
| `predict` | Executes prediction using the LLM. |

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `streaming` | `bool` | Whether the operator is streaming. |

Source code in src/declarai/operators/operator.py
def __init__(
    self,
    llm: LLM,
    parsed: PythonParser,
    llm_params: LLMParamsType = None,
    streaming: bool = None,
    **kwargs: Dict,
):
    self.llm = llm
    self.parsed = parsed
    self.llm_params = llm_params or {}
    self._call_streaming = streaming

streaming property ¤

streaming: bool

Returns whether the operator is streaming or not.

compile ¤

compile(**kwargs) -> CompiledTemplate

Implements the compile method of the BaseOperator class.

Args:
    **kwargs: The keyword arguments used to format the prompt placeholders

Returns:

| Type | Description |
| --- | --- |
| `CompiledTemplate` | `Dict[str, List[Message]]`: A dictionary containing a list of messages. |

Source code in src/declarai/operators/operator.py
def compile(self, **kwargs) -> CompiledTemplate:
    """
    Implements the compile method of the BaseOperator class.
    Args:
        **kwargs:

    Returns:
        Dict[str, List[Message]]: A dictionary containing a list of messages.

    """
    template = self.compile_template()
    if kwargs:
        template[-1].message = format_prompt_msg(
            _string=template[-1].message, **kwargs
        )
    return {"messages": template}
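
`compile` fills placeholders only in the last message of the template. A sketch with a plain `Message` dataclass and `str.format` standing in for `format_prompt_msg` (an assumption about that helper's behavior):

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Message:
    message: str
    role: str


def compile_messages(template: List[Message], **kwargs) -> dict:
    # Fill the placeholders of the last (user-facing) message only
    if kwargs:
        template[-1].message = template[-1].message.format(**kwargs)
    return {"messages": template}


template = [
    Message("You are a translator.", "system"),
    Message("Translate to French: {text}", "user"),
]
compiled = compile_messages(template, text="good morning")
print(compiled["messages"][-1].message)  # Translate to French: good morning
```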

parse_output ¤

parse_output(output: str) -> Any

Parses the raw output from the LLM into the desired format that was set in the parsed object.

Args:
    output: The raw string output from the LLM

Returns:

| Type | Description |
| --- | --- |
| `Any` | The parsed output |

Source code in src/declarai/operators/operator.py
def parse_output(self, output: str) -> Any:
    """
    Parses the raw output from the LLM into the desired format that was set in the parsed object.
    Args:
        output: llm string output

    Returns:
        Any parsed output
    """
    return self.parsed.parse(output)

predict ¤

predict(
    *,
    llm_params: Optional[LLMParamsType] = None,
    **kwargs: object
) -> Union[LLMResponse, Iterator[LLMResponse]]

Executes prediction using the LLM. It first compiles the prompts using the compile method, and then executes the LLM with the compiled prompts and the llm_params.

Args:
    llm_params: The parameters that are passed during runtime. If provided, they override the ones provided during initialization.
    **kwargs: The keyword arguments to pass to the compile method. Used to format the prompt placeholders.

Returns:

| Type | Description |
| --- | --- |
| `Union[LLMResponse, Iterator[LLMResponse]]` | The response from the LLM |

Source code in src/declarai/operators/operator.py
def predict(
    self, *, llm_params: Optional[LLMParamsType] = None, **kwargs: object
) -> Union[LLMResponse, Iterator[LLMResponse]]:
    """
    Executes prediction using the LLM.
    It first compiles the prompts using the `compile` method, and then executes the LLM with the compiled prompts and the llm_params.
    Args:
        llm_params: The parameters that are passed during runtime. If provided, they will override the ones provided during initialization.
        **kwargs: The keyword arguments to pass to the `compile` method. Used to format the prompts placeholders.

    Returns:
        The response from the LLM
    """
    llm_params = llm_params or self.llm_params  # Order is important -
    if self.streaming is not None:
        llm_params["stream"] = self.streaming  # streaming should be the last param
    # provided params during execution should override the ones provided during initialization
    return self.llm.predict(**self.compile(**kwargs), **llm_params)