# LangChain Decorators ✨
Disclaimer: `LangChain decorators` is not created by the LangChain team and is not supported by it.
`LangChain decorators` is a layer on top of LangChain that provides syntactic sugar 🍭 for writing custom langchain prompts and chains.

For feedback, issues, and contributions, please raise an issue here: ju-bezdek/langchain-decorators
Main principles and benefits:
- a more pythonic way of writing code - write multiline prompts that won't break your code flow with indentation
- making use of the IDE's built-in support for hinting, type checking, and popup docs to quickly peek into the function and see the prompt, the parameters it consumes, etc.
- leverage all the power of 🦜🔗 LangChain ecosystem
- adding support for optional parameters
- easily share parameters between the prompts by binding them to one class (see the sketch after the introductory example below)
Here is a simple example of code written with LangChain Decorators ✨:
```python
from langchain_decorators import llm_prompt

@llm_prompt
def write_me_short_post(topic: str, platform: str = "twitter", audience: str = "developers") -> str:
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    return

# run it naturally
write_me_short_post(topic="starwars")
# or
write_me_short_post(topic="starwars", platform="reddit")
```
## Quick start

### Installation
```bash
pip install langchain_decorators
```
### Examples

A good way to start is to review the examples in the ju-bezdek/langchain-decorators repository.
## Defining other parameters
Here we are just marking a function as a prompt with the `llm_prompt` decorator, effectively turning it into an LLMChain that we run simply by calling the function.

A standard LLMChain takes many more init parameters than just `inputs_variables` and `prompt`; this implementation detail is hidden inside the decorator. Here is how it works:
- Using global settings:

```python
# define global settings for all prompts (if not set, chatGPT is the current default)
from langchain_decorators import GlobalSettings
from langchain_openai import ChatOpenAI

GlobalSettings.define_settings(
    default_llm=ChatOpenAI(temperature=0.0),  # this is the default... you can change it here globally
    default_streaming_llm=ChatOpenAI(temperature=0.0, streaming=True),  # this is the default... it will be used for streaming
)
```
- Using predefined prompt types:

```python
from langchain_decorators import PromptTypes, PromptTypeSettings, llm_prompt
from langchain_openai import ChatOpenAI

# you can change the default prompt types
PromptTypes.AGENT_REASONING.llm = ChatOpenAI()

# or you can just define your own ones:
class MyCustomPromptTypes(PromptTypes):
    GPT4 = PromptTypeSettings(llm=ChatOpenAI(model="gpt-4"))

@llm_prompt(prompt_type=MyCustomPromptTypes.GPT4)
def write_a_complicated_code(app_idea: str) -> str:
    ...
```
- Defining the settings directly in the decorator:

```python
from langchain_decorators import llm_prompt
from langchain_openai import OpenAI

@llm_prompt(
    llm=OpenAI(temperature=0.7),
    stop_tokens=["\nObservation"],
    ...
)
def creative_writer(book_title: str) -> str:
    ...
```
## Passing a memory and/or callbacks
To pass any of these, just declare them in the function (or use `**kwargs` to pass anything):
```python
from langchain.memory import SimpleMemory
from langchain_decorators import llm_prompt

@llm_prompt()
async def write_me_short_post(topic: str, platform: str = "twitter", audience: str = "developers", memory: SimpleMemory = None):
    """
    {history_key}
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass

await write_me_short_post(topic="old movies")
```
## Simplified streaming
If we want to leverage streaming:

- we need to define the prompt as an async function
- turn on streaming in the decorator, or define a PromptType with streaming enabled
- capture the stream using StreamingContext
This way we just mark which prompt should be streamed, without needing to tinker with which LLM to use or to create and distribute a streaming handler into a particular part of our chain... we just turn streaming on/off on the prompt/prompt type...

Streaming will happen only if we call the prompt inside a streaming context... there we can define a simple function to handle the stream.
```python
# this code example is complete and should run as it is
# (e.g. in a notebook, which supports top-level await)
from langchain_decorators import StreamingContext, llm_prompt

# this will mark the prompt for streaming (useful if we want to stream just some
# prompts in our app, but don't want to pass/distribute the callback handlers around)
# note that only async functions can be streamed (you will get an error if it's not)
@llm_prompt(capture_stream=True)
async def write_me_short_post(topic: str, platform: str = "twitter", audience: str = "developers"):
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass

# just an arbitrary function to demonstrate the streaming... it will be some websockets code in the real world
tokens = []
def capture_stream_func(new_token: str):
    tokens.append(new_token)

# if we want to capture the stream, we need to wrap the execution in a StreamingContext...
# this allows us to capture the stream even if the prompt call is hidden inside a higher-level method
# only the prompts marked with capture_stream will be captured here
with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
    result = await write_me_short_post(topic="old movies")
    print("Stream finished ... we can distinguish tokens thanks to alternating colors")

print("\nWe've captured", len(tokens), "tokens🎉\n")
print("Here is the result:")
print(result)
```
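Note that the top-level `await` above works in notebooks; in a plain Python script you would wrap the streaming calls in an async entry point, for example:

```python
# running the streaming example from a plain script instead of a notebook
import asyncio

async def main():
    with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
        result = await write_me_short_post(topic="old movies")
    print(result)

asyncio.run(main())
```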
## Prompt declarations
By default, the prompt is the whole function docstring, unless you mark your prompt