langchain_prefect.plugins
Module for defining Prefect plugins for langchain.
Classes
RecordLLMCalls
Bases: ContextDecorator
Context decorator for patching LLM calls with a Prefect flow.
Source code in langchain_prefect/plugins.py
Functions
__enter__
Called when entering the context manager.
This is what would need to be changed if LangChain started making LLM API calls in a different place.
Source code in langchain_prefect/plugins.py
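The patch-on-enter, restore-on-exit pattern that `__enter__` and `__exit__` describe can be sketched as follows. This is a minimal illustration of the pattern, not the library's actual code: `FakeLLM` and `PatchLLMCalls` are hypothetical stand-ins, and the real `RecordLLMCalls` wraps the intercepted call in a Prefect flow rather than just printing.

```python
from contextlib import ContextDecorator


class FakeLLM:
    """Hypothetical stand-in for an LLM class, for illustration only."""

    def generate(self, prompt):
        return f"response to {prompt!r}"


class PatchLLMCalls(ContextDecorator):
    """Sketch of the patch/restore pattern a context decorator like
    RecordLLMCalls might use (assumed behavior, not the real source)."""

    def __enter__(self):
        # Save the original method, then replace it with a wrapper.
        # RecordLLMCalls would run the underlying call inside a
        # Prefect flow at this point.
        self._original = FakeLLM.generate

        def wrapped(llm_self, prompt):
            print("intercepted LLM call")
            return self._original(llm_self, prompt)

        FakeLLM.generate = wrapped
        return self

    def __exit__(self, exc_type, exc, tb):
        # Restore the original method so the patch does not leak
        # outside the context manager.
        FakeLLM.generate = self._original
        return False
```

Because `ContextDecorator` is used, the same class works both as a `with` block and as a function decorator.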
__exit__
Reset methods when exiting the context manager.
Source code in langchain_prefect/plugins.py
__init__
Context decorator for patching LLM calls with a Prefect flow.
Parameters:

Name | Type | Description | Default
---|---|---|---
`tags` | | Tags to apply to flow runs created by this context manager. | required
`flow_kwargs` | | Keyword arguments to pass to the flow decorator. | required
`max_prompt_tokens` | | The maximum number of tokens allowed in a prompt. | required
Example
Create a flow with `a_custom_tag` upon calling `OpenAI.generate`:

```python
with RecordLLMCalls(tags={"a_custom_tag"}):
    llm = OpenAI(temperature=0.9)
    llm(
        "What would be a good company name "
        "for a company that makes carbonated water?"
    )
```
Track many LLM calls when using a langchain agent:

```python
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm)


@flow
def my_flow():  # noqa: D103
    agent.run(
        "How old is the current Dalai Lama? "
        "What is his age divided by 2 (rounded to the nearest integer)?"
    )


with RecordLLMCalls():
    my_flow()
```
Create an async flow upon calling `OpenAI.agenerate`:

```python
with RecordLLMCalls():
    llm = OpenAI(temperature=0.9)
    await llm.agenerate(
        [
            "Good name for a company that makes colorful socks?",
            "Good name for a company that sells carbonated water?",
        ]
    )
```
Create a flow for an LLM call and enforce a maximum number of tokens in the prompt:

```python
with RecordLLMCalls(max_prompt_tokens=100):
    llm = OpenAI(temperature=0.9)
    llm(
        "What would be a good company name "
        "for a company that makes carbonated water?"
    )
```
Source code in langchain_prefect/plugins.py
Functions
record_llm_call
Decorator for wrapping a LangChain LLM call with a Prefect flow.
Source code in langchain_prefect/plugins.py
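In spirit, `record_llm_call` takes an LLM method and returns a wrapper that executes each call inside its own Prefect flow run. The general decorator pattern can be sketched as follows; this is an assumed illustration, with `fake_flow` standing in for Prefect's `flow` decorator and `record_call` for the real `record_llm_call`:

```python
from functools import wraps


def fake_flow(fn):
    """Hypothetical stand-in for prefect.flow, for illustration only."""

    @wraps(fn)
    def runner(*args, **kwargs):
        print(f"flow run started: {fn.__name__}")
        result = fn(*args, **kwargs)
        print("flow run finished")
        return result

    return runner


def record_call(method):
    """Sketch of wrapping an LLM method call in a flow (assumed pattern)."""

    @wraps(method)
    def wrapper(*args, **kwargs):
        # Build a flow around the underlying call and run it immediately,
        # so each LLM invocation appears as its own flow run.
        @fake_flow
        def execute_llm_call():
            return method(*args, **kwargs)

        return execute_llm_call()

    return wrapper
```

The wrapper returns the underlying method's result unchanged, so decorated code behaves exactly as before apart from the tracking side effects.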