chat_operator ¤

Chat implementation of the OpenAI operator.

Classes:

| Name | Description |
| --- | --- |
| `AzureOpenAIChatOperator` | Chat implementation of the OpenAI operator. A child of the `BaseChatOperator` class; see `BaseChatOperator` for further documentation. |
| `OpenAIChatOperator` | Chat implementation of the OpenAI operator. A child of the `BaseChatOperator` class; see `BaseChatOperator` for further documentation. |

AzureOpenAIChatOperator ¤

Bases: OpenAIChatOperator

Chat implementation of the OpenAI operator. This is a child of the BaseChatOperator class; see BaseChatOperator for further documentation.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `llm` | `AzureOpenAILLM` | The `AzureOpenAILLM` instance used to execute predictions. |

Methods:

| Name | Description |
| --- | --- |
| `compile_template` | Compiles the system prompt. |
| `parse_output` | Parses the raw output from the LLM into the desired format that was set in the parsed object. |
| `predict` | Executes prediction using the LLM. |

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `streaming` | `bool` | Whether the operator is streaming or not. |

streaming property ¤

streaming: bool

Returns whether the operator is streaming or not.

compile_template ¤

compile_template() -> Message

Compiles the system prompt.

Returns: The compiled system message.

Source code in src/declarai/operators/operator.py (lines 178-193):
def compile_template(self) -> Message:
    """
    Compiles the system prompt.
    Returns: The compiled system message
    """
    structured_template = StructuredOutputChatPrompt
    if self.parsed_send_func:
        output_schema = self._compile_output_prompt(structured_template)
    else:
        output_schema = None

    if output_schema:
        compiled_system_prompt = f"{self.system}\n{output_schema}"
    else:
        compiled_system_prompt = self.system
    return Message(message=compiled_system_prompt, role=MessageRole.system)
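
For illustration, a minimal sketch of the calling side, assuming an already-constructed operator `op` (construction details live on `BaseChatOperator` and are omitted here):

# `op` is assumed to be an existing AzureOpenAIChatOperator instance.
message = op.compile_template()

assert message.role == MessageRole.system
# If `op.parsed_send_func` is set, the output schema is appended to the
# system prompt on a new line; otherwise message.message == op.system.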

parse_output ¤

parse_output(output: str) -> Any

Parses the raw output from the LLM into the desired format that was set in the parsed object.

Args:
    output: The raw LLM string output.

Returns:

| Type | Description |
| --- | --- |
| `Any` | The parsed output. |

Source code in src/declarai/operators/operator.py (lines 111-120):
def parse_output(self, output: str) -> Any:
    """
    Parses the raw output from the LLM into the desired format that was set in the parsed object.
    Args:
        output: llm string output

    Returns:
        Any parsed output
    """
    return self.parsed.parse(output)
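
A hedged usage sketch; the raw JSON string below is made up, and the parsed type depends entirely on how `op.parsed` was configured:

raw = '{"name": "Ada", "age": 36}'  # hypothetical raw LLM output
result = op.parse_output(raw)       # delegates to op.parsed.parse(raw)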

predict ¤

predict(
    *,
    llm_params: Optional[LLMParamsType] = None,
    **kwargs: object
) -> Union[LLMResponse, Iterator[LLMResponse]]

Executes prediction using the LLM. It first compiles the prompts using the `compile` method, and then executes the LLM with the compiled prompts and the `llm_params`.

Args:
    llm_params: Parameters passed at runtime. If provided, they override the ones provided during initialization.
    **kwargs: Keyword arguments passed to the `compile` method; used to fill the prompt placeholders.

Returns:

| Type | Description |
| --- | --- |
| `Union[LLMResponse, Iterator[LLMResponse]]` | The response from the LLM. |

Source code in src/declarai/operators/operator.py (lines 92-109):
def predict(
    self, *, llm_params: Optional[LLMParamsType] = None, **kwargs: object
) -> Union[LLMResponse, Iterator[LLMResponse]]:
    """
    Executes prediction using the LLM.
    It first compiles the prompts using the `compile` method, and then executes the LLM with the compiled prompts and the llm_params.
    Args:
        llm_params: The parameters that are passed during runtime. If provided, they will override the ones provided during initialization.
        **kwargs: The keyword arguments to pass to the `compile` method. Used to format the prompts placeholders.

    Returns:
        The response from the LLM
    """
    # Order is important: params provided during execution override the ones
    # provided during initialization.
    llm_params = llm_params or self.llm_params
    if self.streaming is not None:
        llm_params["stream"] = self.streaming  # streaming should be the last param
    return self.llm.predict(**self.compile(**kwargs), **llm_params)
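
A usage sketch under stated assumptions: the runtime `llm_params` replace (rather than merge with) the ones given at initialization, and the keyword arguments fill placeholders in the compiled prompts. The parameter value and the `topic` placeholder are illustrative:

response = op.predict(
    llm_params={"temperature": 0.2},  # replaces the init-time llm_params entirely
    topic="unit testing",             # fills a {topic} placeholder via compile()
)
# With streaming disabled, `response` is a single LLMResponse, which can then
# be passed to parse_output to obtain the typed result.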

OpenAIChatOperator ¤

Bases: BaseChatOperator

Chat implementation of the OpenAI operator. This is a child of the BaseChatOperator class; see BaseChatOperator for further documentation.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `llm` | `OpenAILLM` | The `OpenAILLM` instance used to execute predictions. |

Methods:

| Name | Description |
| --- | --- |
| `compile_template` | Compiles the system prompt. |
| `parse_output` | Parses the raw output from the LLM into the desired format that was set in the parsed object. |
| `predict` | Executes prediction using the LLM. |

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `streaming` | `bool` | Whether the operator is streaming or not. |

streaming property ¤

streaming: bool

Returns whether the operator is streaming or not.

compile_template ¤

compile_template() -> Message

Compiles the system prompt.

Returns: The compiled system message.

Source code in src/declarai/operators/operator.py (lines 178-193):
def compile_template(self) -> Message:
    """
    Compiles the system prompt.
    Returns: The compiled system message
    """
    structured_template = StructuredOutputChatPrompt
    if self.parsed_send_func:
        output_schema = self._compile_output_prompt(structured_template)
    else:
        output_schema = None

    if output_schema:
        compiled_system_prompt = f"{self.system}\n{output_schema}"
    else:
        compiled_system_prompt = self.system
    return Message(message=compiled_system_prompt, role=MessageRole.system)

parse_output ¤

parse_output(output: str) -> Any

Parses the raw output from the LLM into the desired format that was set in the parsed object.

Args:
    output: The raw LLM string output.

Returns:

| Type | Description |
| --- | --- |
| `Any` | The parsed output. |

Source code in src/declarai/operators/operator.py (lines 111-120):
def parse_output(self, output: str) -> Any:
    """
    Parses the raw output from the LLM into the desired format that was set in the parsed object.
    Args:
        output: llm string output

    Returns:
        Any parsed output
    """
    return self.parsed.parse(output)

predict ¤

predict(
    *,
    llm_params: Optional[LLMParamsType] = None,
    **kwargs: object
) -> Union[LLMResponse, Iterator[LLMResponse]]

Executes prediction using the LLM. It first compiles the prompts using the `compile` method, and then executes the LLM with the compiled prompts and the `llm_params`.

Args:
    llm_params: Parameters passed at runtime. If provided, they override the ones provided during initialization.
    **kwargs: Keyword arguments passed to the `compile` method; used to fill the prompt placeholders.

Returns:

| Type | Description |
| --- | --- |
| `Union[LLMResponse, Iterator[LLMResponse]]` | The response from the LLM. |

Source code in src/declarai/operators/operator.py (lines 92-109):
def predict(
    self, *, llm_params: Optional[LLMParamsType] = None, **kwargs: object
) -> Union[LLMResponse, Iterator[LLMResponse]]:
    """
    Executes prediction using the LLM.
    It first compiles the prompts using the `compile` method, and then executes the LLM with the compiled prompts and the llm_params.
    Args:
        llm_params: The parameters that are passed during runtime. If provided, they will override the ones provided during initialization.
        **kwargs: The keyword arguments to pass to the `compile` method. Used to format the prompts placeholders.

    Returns:
        The response from the LLM
    """
    # Order is important: params provided during execution override the ones
    # provided during initialization.
    llm_params = llm_params or self.llm_params
    if self.streaming is not None:
        llm_params["stream"] = self.streaming  # streaming should be the last param
    return self.llm.predict(**self.compile(**kwargs), **llm_params)
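
When the operator was created with streaming enabled, `predict` returns an iterator of `LLMResponse` chunks rather than a single response. A minimal consumption sketch (the `topic` placeholder is illustrative):

if op.streaming:
    for chunk in op.predict(topic="unit testing"):
        ...  # each chunk is an LLMResponse emitted as the LLM streams
else:
    response = op.predict(topic="unit testing")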