prefect_openai.completion
Module for generating and configuring OpenAI completions.
Classes
CompletionModel
Bases: Block
A block that contains config for an OpenAI Completion Model. Learn more in the OpenAI Text Completion docs.
Attributes:

| Name | Type | Description |
|---|---|---|
| `openai_credentials` | `OpenAICredentials` | The credentials used to authenticate with OpenAI. |
| `model` | `Union[Literal['text-davinci-003', 'text-curie-001', 'text-babbage-001', 'text-ada-001'], str]` | ID of the model to use. |
| `temperature` | `float` | What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. |
| `max_tokens` | `int` | The maximum number of tokens to generate in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). |
| `suffix` | `Optional[str]` | The suffix to append to the prompt. |
| `echo` | `bool` | Echo back the prompt in addition to the completion. |
| `timeout` | `Optional[float]` | The maximum time to wait for the model to warm up. |
Example
Load a configured block:
```python
from prefect_openai import CompletionModel

completion_model = CompletionModel.load("BLOCK_NAME")
```
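A block must be saved before it can be loaded by name. Below is a minimal sketch of configuring and registering a `CompletionModel` block; the block names `"my-completion-model"` and `"my-openai-creds"` are placeholders.

```python
from prefect_openai import CompletionModel, OpenAICredentials

# Assumes an OpenAICredentials block named "my-openai-creds" was saved earlier.
credentials = OpenAICredentials.load("my-openai-creds")

# Configure the completion model and save it so flows can load it by name.
completion_model = CompletionModel(
    openai_credentials=credentials,
    model="text-davinci-003",
    temperature=0.5,
    max_tokens=256,
)
completion_model.save("my-completion-model", overwrite=True)
```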
Attributes
logger: Logger
property
Returns a logger based on whether the CompletionModel is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Else, it returns a default logger labeled with the class's name.
Returns:

| Type | Description |
|---|---|
| `Logger` | The run logger or a default logger with the class's name. |
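For example, when the block is used inside a flow, `logger` resolves to the Prefect run logger, so messages appear alongside the rest of the flow run's logs. A short sketch, assuming a saved block named `"my-completion-model"`:

```python
from prefect import flow
from prefect_openai import CompletionModel

@flow
def log_model_config():
    completion_model = CompletionModel.load("my-completion-model")
    # Inside a flow run context this is the run logger; outside, a default logger.
    completion_model.logger.info("Using model %s", completion_model.model)

log_model_config()
```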
Functions
submit_prompt
async
Submits a prompt for the model to generate a text completion.
OpenAI will return an object potentially containing multiple `choices`, where the zeroth index is what they consider the "best" completion. Learn more in the OpenAI Text Completion docs.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `prompt` | `str` | The prompt to use for the completion. | *required* |
| `**acreate_kwargs` | `Dict[str, Any]` | Additional keyword arguments to pass to `acreate`. | `{}` |
Returns:

| Type | Description |
|---|---|
| `OpenAIObject` | The `OpenAIObject` containing the completion and associated metadata. |
Example
Create an OpenAI Completion given a prompt:
```python
from prefect import flow
from prefect_openai import CompletionModel, OpenAICredentials

@flow(log_prints=True)
def my_ai_bot(model_name: str = "text-davinci-003"):
    credentials = OpenAICredentials.load("my-openai-creds")

    completion_model = CompletionModel(
        openai_credentials=credentials,
        model=model_name,
    )

    for prompt in ["hi!", "what is the meaning of life?"]:
        completion = completion_model.submit_prompt(prompt)
        print(completion.choices[0].text)
```
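Because `**acreate_kwargs` is forwarded to the underlying OpenAI call, per-call options such as `n` (the number of choices to generate) can be supplied without changing the block configuration. A hedged sketch, assuming a saved block named `"my-completion-model"`:

```python
import asyncio

from prefect import flow
from prefect_openai import CompletionModel

@flow(log_prints=True)
async def compare_completions(prompt: str = "Write a haiku about data pipelines"):
    completion_model = await CompletionModel.load("my-completion-model")
    # `n=3` is passed through acreate_kwargs to request three alternative completions.
    completion = await completion_model.submit_prompt(prompt, n=3)
    for choice in completion.choices:
        print(choice.text)

asyncio.run(compare_completions())
```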
Functions
interpret_exception
Use OpenAI to interpret the exception raised from the decorated function. If used with a flow and `return_state=True`, the decorator will override the original state's data and message with the OpenAI interpretation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `completion_model_name` | `str` | The name of the CompletionModel block to use to interpret the caught exception. | *required* |
| `prompt_prefix` | `str` | The prefix to include in the prompt ahead of the traceback and exception message. | `'Explain:'` |
| `traceback_tail` | `int` | The number of lines of the original traceback to include in the prompt to OpenAI, starting from the tail. If 0, only include the exception message in the prompt. Note this can be costly in terms of tokens, so be sure to set this and `max_tokens` in CompletionModel appropriately. | `0` |

Returns:

| Type | Description |
|---|---|
| `Callable` | A decorator that will use an OpenAI CompletionModel to interpret the exception raised from the decorated function. |
Examples:
Interpret the exception raised from a flow.
```python
import httpx
from prefect import flow
from prefect_openai.completion import interpret_exception

@flow
@interpret_exception("COMPLETION_MODEL_BLOCK_NAME_PLACEHOLDER")
def example_flow():
    resp = httpx.get("https://httpbin.org/status/403")
    resp.raise_for_status()

example_flow()
```
Use a unique prefix and include the last line of the traceback in the prompt.
```python
import httpx
from prefect import flow
from prefect_openai.completion import interpret_exception

@flow
@interpret_exception(
    "COMPLETION_MODEL_BLOCK_NAME_PLACEHOLDER",
    prompt_prefix="Offer a solution:",
    traceback_tail=1,
)
def example_flow():
    resp = httpx.get("https://httpbin.org/status/403")
    resp.raise_for_status()

example_flow()
```
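As noted above, calling a decorated flow with `return_state=True` returns the final state instead of raising, with the state's data and message replaced by the OpenAI interpretation. A minimal sketch, assuming the same placeholder block name:

```python
import httpx
from prefect import flow
from prefect_openai.completion import interpret_exception

@flow
@interpret_exception("COMPLETION_MODEL_BLOCK_NAME_PLACEHOLDER")
def example_flow():
    resp = httpx.get("https://httpbin.org/status/403")
    resp.raise_for_status()

# The returned state's message holds the interpretation rather than the raw traceback.
state = example_flow(return_state=True)
print(state.message)
```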