# task

## Task interface
Provides the most basic component to interact with an LLM. LLMs are often interacted with via an API. In order to provide prompts and receive predictions, we will need to:

- Parse the provided Python code
- Translate the parsed data into the proper prompt for the LLM
- Send the request to the LLM and parse the output back into Python
This class is an orchestrator that calls a parser and operators to perform the tasks above. While the parser is meant to be shared across use cases, since Python code has a consistent interface, the different LLM API providers, as well as custom models, expose different APIs with different expected prompt structures. For that reason, there are multiple operator implementations, depending on the required use case.
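The three-step flow can be sketched in plain Python. Everything below (the helper names, the prompt format, `fake_llm`) is illustrative only, not declarai's actual API:

```python
import inspect
from typing import Any, Callable, Dict


def parse_function(func: Callable) -> Dict[str, Any]:
    """Step 1: parse a Python function into the pieces needed for a prompt."""
    sig = inspect.signature(func)
    return {
        "name": func.__name__,
        "params": list(sig.parameters),
        "doc": inspect.getdoc(func) or "",
    }


def build_prompt(parsed: Dict[str, Any], kwargs: Dict[str, Any]) -> str:
    """Step 2: translate the parsed data into a prompt for the LLM."""
    args = ", ".join(f"{k}={v!r}" for k, v in kwargs.items())
    return f"{parsed['doc']}\nInputs: {args}"


def fake_llm(prompt: str) -> str:
    """Step 3: stand-in for sending the request to a real LLM API."""
    return f"response to: {prompt}"


def summarize(text: str):
    """Summarize the given text in one sentence."""


parsed = parse_function(summarize)
prompt = build_prompt(parsed, {"text": "LLMs are neat"})
print(fake_llm(prompt))
```

The orchestrator's job is wiring these steps together; the operator implementations swap out steps 2 and 3 per provider.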
Classes:

| Name | Description |
|---|---|
| `FutureTask` | A `FutureTask` is a wrapper around the task that is returned from the `plan` method |
| `Task` | Initializes the Task |
| `TaskDecorator` | The `TaskDecorator` is used to create a task. It is used as a decorator on a function that will be used as a task |
## FutureTask

```python
FutureTask(
    exec_func: Callable[[], Any],
    kwargs: Dict[str, Any],
    compiled_template: str,
    populated_prompt: str,
)
```
A `FutureTask` is a wrapper around the task that is returned from the `plan` method. It is used to create a lazy execution of the task and to provide additional information about it. The only functionality provided by the `FutureTask` is the `__call__` method, which executes the task.
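To illustrate the laziness, here is a stripped-down, hypothetical stand-in for the class (the real implementation exposes these values as properties and lives in `src/declarai/task.py`):

```python
from typing import Any, Callable, Dict


class FutureTask:
    """Simplified sketch: nothing runs until the future is explicitly called."""

    def __init__(
        self,
        exec_func: Callable[[], Any],
        kwargs: Dict[str, Any],
        compiled_template: str,
        populated_prompt: str,
    ):
        self._exec_func = exec_func
        self.task_kwargs = kwargs
        self.compiled_template = compiled_template
        self.populated_prompt = populated_prompt

    def __call__(self) -> Any:
        # Deferred execution: the wrapped function only runs now.
        return self._exec_func()


future = FutureTask(
    exec_func=lambda: "prediction",
    kwargs={"text": "hello"},
    compiled_template="Summarize: {text}",
    populated_prompt="Summarize: hello",
)
print(future.populated_prompt)  # inspect the prompt before executing
print(future())                 # only now does the task actually run
```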
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `exec_func` | `Callable[[], Any]` | the function to execute when the future task is called | *required* |
| `kwargs` | `Dict[str, Any]` | the kwargs that were passed to the task | *required* |
| `compiled_template` | `str` | the compiled template that was populated by the task | *required* |
| `populated_prompt` | `str` | the populated prompt that was populated by the task | *required* |
Methods:

| Name | Description |
|---|---|
| `__call__` | Executes the task by calling the `exec_func` |
Attributes:

| Name | Type | Description |
|---|---|---|
| `compiled_template` | `str` | Returns the compiled template that was populated by the task |
| `populated_prompt` | `str` | Returns the populated prompt that was populated by the task |
| `task_kwargs` | `Dict[str, Any]` | Returns the kwargs that were passed to the task |
Source code in src/declarai/task.py
### compiled_template `property`

```python
compiled_template: str
```

Returns the compiled template that was populated by the task.
### populated_prompt `property`

```python
populated_prompt: str
```

Returns the populated prompt that was populated by the task.
### __call__

```python
__call__() -> Any
```

Calls the `exec_func` attribute of the `FutureTask`.

Returns: the response from the `exec_func`
Source code in src/declarai/task.py
## Task

```python
Task(
    operator: BaseOperator,
    middlewares: List[Type[TaskMiddleware]] = None,
)
```

Bases: `BaseTask`
Initializes the Task.

Args:

- `operator`: the operator to use to interact with the LLM
- `middlewares`: the middlewares to use while executing the task
- `**kwargs`:
Attributes:

| Name | Type | Description |
|---|---|---|
| `operator` | | the operator to use to interact with the LLM |
| `_call_kwargs` | `Dict[str, Any]` | the kwargs that were passed to the task are set as attributes on the task and passed to the middlewares |
Methods:

| Name | Description |
|---|---|
| `__call__` | Orchestrates the execution of the task |
| `compile` | Compiles the prompt to be sent to the LLM. This is the first step in the process of interacting with the LLM |
| `plan` | Populates the compiled template with the actual data |
| `stream_handler` | A generator that yields each chunk from the stream and collects them in a buffer |
Attributes:

| Name | Type | Description |
|---|---|---|
| `llm_params` | `LLMParamsType` | Return the LLM parameters that are saved on the operator. These parameters are sent to the LLM when the task is executed |
| `llm_response` | `LLMResponse` | The response from the LLM |
| `llm_stream_response` | `Iterator[LLMResponse]` | The response from the LLM when streaming |
Source code in src/declarai/task.py
### llm_params `property`

```python
llm_params: LLMParamsType
```

Return the LLM parameters that are saved on the operator. These parameters are sent to the LLM when the task is executed.

Returns: The LLM parameters
### llm_stream_response `class-attribute` `instance-attribute`

```python
llm_stream_response: Iterator[LLMResponse] = None
```

The response from the LLM when streaming.
### __call__

```python
__call__(
    *,
    llm_params: LLMParamsType = None,
    **kwargs: LLMParamsType
) -> Union[Any, Iterator[LLMResponse]]
```

Orchestrates the execution of the task.

Args:

- `llm_params`: the params to pass to the LLM. If provided, they will override the params that were passed during initialization
- `**kwargs`: kwargs that are used to compile the template and populate the prompt

Returns: the user-defined return type of the task
Source code in src/declarai/task.py
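The override behavior for `llm_params` can be sketched as a simple dict merge. The function name `resolve_llm_params` and the parameter values are hypothetical, not declarai internals:

```python
from typing import Any, Dict, Optional

# Hypothetical defaults saved on the operator at initialization time.
init_params: Dict[str, Any] = {"temperature": 0.0, "max_tokens": 256}


def resolve_llm_params(call_params: Optional[Dict[str, Any]]) -> Dict[str, Any]:
    """Call-time llm_params override the ones passed during initialization."""
    merged = dict(init_params)
    if call_params:
        merged.update(call_params)  # call-time values win on conflict
    return merged


print(resolve_llm_params({"temperature": 0.9}))
# temperature comes from the call, max_tokens from initialization
```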
### compile

```python
compile(**kwargs) -> Any
```

Compiles the prompt to be sent to the LLM. This is the first step in the process of interacting with the LLM. It can also be used for debugging, to see what the prompt will look like before sending it to the LLM.

Args:

- `**kwargs`: the data to populate the template with
Returns:

| Type | Description |
|---|---|
| `Any` | the compiled template |
Source code in src/declarai/task.py
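A minimal sketch of the debugging use, with a hypothetical `compile_prompt` helper and an illustrative template format rather than declarai's real one:

```python
# An illustrative compiled template with a placeholder for runtime data.
template = "Summarize the following text in one sentence:\n{text}"


def compile_prompt(**kwargs) -> str:
    """Return the prompt, populating placeholders when data is provided."""
    return template.format(**kwargs) if kwargs else template


print(compile_prompt())                      # inspect the raw template
print(compile_prompt(text="LLMs are neat"))  # the exact prompt the LLM would get
```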
### plan

```python
plan(**kwargs) -> FutureTask
```

Populates the compiled template with the actual data.

Args:

- `**kwargs`: the data to populate the template with

Returns: a `FutureTask` that can be used to execute the task in a lazy manner
Source code in src/declarai/task.py
### stream_handler

```python
stream_handler(
    stream: Iterator[LLMResponse],
) -> Iterator[LLMResponse]
```

A generator that yields each chunk from the stream and collects them in a buffer. After the stream is exhausted, it runs the cleanup logic.
Source code in src/declarai/_base.py
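A self-contained sketch of this yield-and-buffer pattern over plain strings (the real method operates on `LLMResponse` chunks, and its cleanup logic is library-specific):

```python
from typing import Iterator, List


def stream_handler(stream: Iterator[str]) -> Iterator[str]:
    """Yield each chunk as it arrives while keeping a copy for cleanup."""
    buffer: List[str] = []
    for chunk in stream:
        buffer.append(chunk)  # collect the chunk for post-processing
        yield chunk           # pass it through to the caller immediately
    # The stream is exhausted: cleanup logic runs here on the full response.
    print("full response:", "".join(buffer))


chunks = list(stream_handler(iter(["Hel", "lo", "!"])))
```

The caller sees each chunk with no added latency, yet the generator still ends up with the complete response once iteration finishes.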
## TaskDecorator

```python
TaskDecorator(llm: LLM)
```

The `TaskDecorator` is used to create a task. It is used as a decorator on a function that will be used as a task.

Args:

- `llm_settings`: the settings that define which LLM to use
- `**kwargs`: additional llm_settings like open_ai_api_key etc.
Methods:

| Name | Description |
|---|---|
| `task` | The decorator that creates the task |
Source code in src/declarai/task.py
### task

```python
task(
    func: Optional[Callable] = None,
    *,
    middlewares: List[Type[TaskMiddleware]] = None,
    llm_params: LLMParamsType = None,
    streaming: bool = None
)
```

The decorator that creates the task.

Args:

- `func`: the function to decorate that represents the task
- `middlewares`: middleware to use while executing the task
- `llm_params`: llm_params to use when calling the llm
- `streaming`: whether to stream the response from the llm or not
Returns:

| Type | Description |
|---|---|
| `Task` | the task that was created |
Source code in src/declarai/task.py
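The `func: Optional[Callable] = None` signature is the standard pattern for a decorator that works both bare (`@task`) and with arguments (`@task(streaming=True)`). A generic sketch of that pattern, where the `streaming` attribute stands in for building a real `Task`:

```python
from typing import Callable, Optional


def task(func: Optional[Callable] = None, *, streaming: bool = False):
    """Sketch of a decorator usable both bare and with keyword arguments."""

    def wrap(f: Callable) -> Callable:
        f.streaming = streaming  # stand-in for wrapping f in a Task
        return f

    if func is not None:
        return wrap(func)  # used bare: @task
    return wrap            # used with arguments: @task(streaming=True)


@task
def plain(): ...


@task(streaming=True)
def streamed(): ...


print(plain.streaming, streamed.streaming)
```

When called bare, Python passes the decorated function directly as `func`; when called with arguments, `func` is `None` and the returned `wrap` closure is applied instead.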