Structured Decoding with RELLM#
RELLM is a library that wraps local Hugging Face pipeline models for structured decoding.
It works by generating tokens one at a time. At each step, it masks tokens that don’t conform to the provided partial regular expression.
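To see what that masking step means in isolation, here is a minimal, self-contained sketch of regex-constrained decoding (an illustration only, not RELLM's actual implementation; the pattern and the proposed tokens are made up): a fake "model" proposes tokens one at a time, and any token whose continuation could no longer grow into a match of the pattern is masked out.
import regex  # the regex library supports partial matching, unlike python's re stdlib module
# Illustration only - not RELLM's implementation. Pattern and tokens are made up.
pattern = regex.compile(r'\{"answer": "[A-Za-z .]+"\}')
proposed_tokens = ['Sure!', '{"answer": "', 'Washington D.C.', '<oops>', '"}']
generated = ""
for token in proposed_tokens:
    # partial=True also accepts prefixes that could still be completed into a full match
    if pattern.fullmatch(generated + token, partial=True):
        generated += token
    else:
        print(f"masked: {token!r}")
print(generated)  # {"answer": "Washington D.C."}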
Warning - this module is still experimental
!pip install rellm > /dev/null
Hugging Face Baseline#
First, let’s establish a qualitative baseline by checking the output of the model without structured decoding.
import logging
logging.basicConfig(level=logging.ERROR)
prompt = """Human: "What's the capital of the United States?"
AI Assistant:{
"action": "Final Answer",
"action_input": "The capital of the United States is Washington D.C."
}
Human: "What's the capital of Pennsylvania?"
AI Assistant:{
"action": "Final Answer",
"action_input": "The capital of Pennsylvania is Harrisburg."
}
Human: "What 2 + 5?"
AI Assistant:{
"action": "Final Answer",
"action_input": "2 + 5 = 7."
}
Human: 'What's the capital of Maryland?'
AI Assistant:"""
from transformers import pipeline
from langchain.llms import HuggingFacePipeline
hf_model = pipeline("text-generation", model="cerebras/Cerebras-GPT-590M", max_new_tokens=200)
original_model = HuggingFacePipeline(pipeline=hf_model)
generated = original_model.generate([prompt], stop=["Human:"])
print(generated)
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
generations=[[Generation(text=' "What\'s the capital of Maryland?"\n', generation_info=None)]] llm_output=None
That’s not so impressive, is it? It didn’t answer the question and it didn’t follow the JSON format at all! Let’s try with the structured decoder.
RELLM LLM Wrapper#
Let’s try that again, now providing a regex to match the JSON structured format.
import regex # Note this is the regex library NOT python's re stdlib module
# We'll choose a regex that matches to a structured json string that looks like:
# {
# "action": "Final Answer",
# "action_input": string or dict
# }
pattern = regex.compile(r'\{\s*"action":\s*"Final Answer",\s*"action_input":\s*(\{.*\}|"[^"]*")\s*\}\nHuman:')
from langchain.experimental.llms import RELLM
model = RELLM(pipeline=hf_model, regex=pattern, max_new_tokens=200)
generated = model.predict(prompt, stop=["Human:"])
print(generated)
{"action": "Final Answer",
"action_input": "The capital of Maryland is Baltimore."
}
Voila! Free of parsing errors.
Runhouse#
Runhouse allows remote compute and data across environments and users. See the Runhouse docs.
This example goes over how to use LangChain and Runhouse to interact with models hosted on your own GPU, or on-demand GPUs on AWS, GCP, Azure, or Lambda.
Note: The code uses the name SelfHosted instead of Runhouse.
!pip install runhouse
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
from langchain import PromptTemplate, LLMChain
import runhouse as rh
INFO | 2023-04-17 16:47:36,173 | No auth token provided, so not using RNS API to save and load configs
# For an on-demand A100 with GCP, Azure, or Lambda
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)
# For an on-demand A10G with AWS (no single A100s on AWS)
# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')
# For an existing cluster
# gpu = rh.cluster(ips=['<ip of the cluster>'],
# ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'},
# name='rh-a10x')
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = SelfHostedHuggingFaceLLM(model_id="gpt2", hardware=gpu, model_reqs=["pip:./", "transformers", "torch"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds
"\n\nLet's say we're talking sports teams who won the Super Bowl in the year Justin Beiber"
You can also load more custom models through the SelfHostedHuggingFaceLLM interface:
llm = SelfHostedHuggingFaceLLM(
model_id="google/flan-t5-small",
task="text2text-generation",
hardware=gpu,
)
llm("What is the capital of Germany?")
INFO | 2023-02-17 05:54:21,681 | Running _generate_text via gRPC
INFO | 2023-02-17 05:54:21,937 | Time to send message: 0.25 seconds
'berlin'
Using a custom load function, we can load a custom pipeline directly on the remote hardware:
def load_pipeline():
    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline  # Need to be inside the fn in notebooks
    model_id = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    pipe = pipeline(
        "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
    )
    return pipe
def inference_fn(pipeline, prompt, stop=None):
    return pipeline(prompt)[0]["generated_text"][len(prompt):]
llm = SelfHostedHuggingFaceLLM(model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn)
llm("Who is the current US president?")
INFO | 2023-02-17 05:42:59,219 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:59,522 | Time to send message: 0.3 seconds
'john w. bush'
You can send your pipeline directly over the wire to your model, but this will only work for small models (<2 GB), and will be pretty slow:
pipeline = load_pipeline()
llm = SelfHostedPipeline.from_pipeline(
    pipeline=pipeline, hardware=gpu, model_reqs=["pip:./", "transformers", "torch"]
)
Instead, we can also send it to the hardware’s filesystem, which will be much faster.
import pickle
rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(gpu, path="models")
llm = SelfHostedPipeline.from_pipeline(pipeline="models/pipeline.pkl", hardware=gpu)
Modal#
The Modal Python Library provides convenient, on-demand access to serverless cloud compute from Python scripts on your local computer.
Modal itself does not provide any LLMs, only the infrastructure.
This example goes over how to use LangChain to interact with Modal.
Here is another example of how to use LangChain to interact with Modal.
!pip install modal-client
# register and get a new token
!modal token new
Launching login page in your browser window...
If this is not showing up, please copy this URL into your web browser manually:
https://modal.com/token-flow/tf-ptEuGecm7T1T5YQe42kwM1
Waiting for authentication in the web browser...
^C
Aborted.
Follow these instructions to deal with secrets.
from langchain.llms import Modal
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Modal(endpoint_url="YOUR_ENDPOINT_URL")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
PromptLayer OpenAI#
PromptLayer is the first platform that allows you to track, manage, and share your GPT prompt engineering. PromptLayer acts as a middleware between your code and OpenAI's python library.
PromptLayer records all your OpenAI API requests, allowing you to search and explore request history in the PromptLayer dashboard.
This example showcases how to connect to PromptLayer to start recording your OpenAI requests.
Another example is here.
Install PromptLayer#
The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip.
!pip install promptlayer
Imports#
import os
from langchain.llms import PromptLayerOpenAI
import promptlayer
Set the Environment API Key#
You can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar.
Set it as an environment variable called PROMPTLAYER_API_KEY.
You also need an OpenAI Key, called OPENAI_API_KEY.
from getpass import getpass
PROMPTLAYER_API_KEY = getpass()
os.environ["PROMPTLAYER_API_KEY"] = PROMPTLAYER_API_KEY
from getpass import getpass
OPENAI_API_KEY = getpass()
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
Use the PromptLayerOpenAI LLM like normal#
You can optionally pass in pl_tags to track your requests with PromptLayer’s tagging feature.
llm = PromptLayerOpenAI(pl_tags=["langchain"])
llm("I am a cat and I want")
The above request should now appear on your PromptLayer dashboard.
Using PromptLayer Track#
If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id.
llm = PromptLayerOpenAI(return_pl_id=True)
llm_results = llm.generate(["Tell me a joke"])
for res in llm_results.generations:
    pl_request_id = res[0].generation_info["pl_request_id"]
    promptlayer.track.score(request_id=pl_request_id, score=100)
Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well.
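As a rough sketch of what attaching a template could look like (the prompt name and input variables below are hypothetical, and promptlayer.track.prompt is assumed from the promptlayer package rather than taken from this page):
# Sketch only: assumes "my_template" is a prompt registered in the PromptLayer registry.
promptlayer.track.prompt(
    request_id=pl_request_id,
    prompt_name="my_template",  # hypothetical registry name
    prompt_input_variables={"subject": "jokes"},  # hypothetical template variables
)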
Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.
DeepInfra#
DeepInfra provides several LLMs.
This notebook goes over how to use LangChain with DeepInfra.
Imports#
import os
from langchain.llms import DeepInfra
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from DeepInfra. You have to log in and get a new token.
You are given 1 hour of serverless GPU compute for free to test different models (see here).
You can print your token with deepctl auth token
# get a new token: https://deepinfra.com/login?from=%2Fdash
from getpass import getpass
DEEPINFRA_API_TOKEN = getpass()
os.environ["DEEPINFRA_API_TOKEN"] = DEEPINFRA_API_TOKEN
Create the DeepInfra instance#
Make sure to deploy your model first via deepctl deploy create -m google/flan-t5-xl (see here)
llm = DeepInfra(model_id="DEPLOYED MODEL ID")
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in 2015?"
llm_chain.run(question)
Banana#
Banana is focused on building machine learning infrastructure.
This example goes over how to use LangChain to interact with Banana models.
# Install the package https://docs.banana.dev/banana-docs/core-concepts/sdks/python
!pip install banana-dev
# get new tokens: https://app.banana.dev/
# We need two tokens, not just an `api_key`: `BANANA_API_KEY` and `YOUR_MODEL_KEY`
import os
from getpass import getpass
os.environ["BANANA_API_KEY"] = "YOUR_API_KEY"
# OR
# BANANA_API_KEY = getpass()
from langchain.llms import Banana
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Banana(model_key="YOUR_MODEL_KEY")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
C Transformers#
The C Transformers library provides Python bindings for GGML models.
This example goes over how to use LangChain to interact with C Transformers models.
Install
%pip install ctransformers
Load Model
from langchain.llms import CTransformers
llm = CTransformers(model='marella/gpt-2-ggml')
Generate Text
print(llm('AI is going to'))
Streaming
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
llm = CTransformers(model='marella/gpt-2-ggml', callbacks=[StreamingStdOutCallbackHandler()])
response = llm('AI is going to')
LLMChain
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer:"""
prompt = PromptTemplate(template=template, input_variables=['question'])
llm_chain = LLMChain(prompt=prompt, llm=llm)
response = llm_chain.run('What is AI?')
PredictionGuard#
This example goes over how to use the PredictionGuard wrapper.
! pip install predictionguard langchain
import predictionguard as pg
from langchain.llms import PredictionGuard
Basic LLM usage#
pgllm = PredictionGuard(name="default-text-gen", token="<your access token>")
pgllm("Tell me a joke")
Chaining#
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.predict(question=question)
template = """Write a {adjective} poem about {subject}."""
prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)
llm_chain.predict(adjective="sad", subject="ducks")
PipelineAI#
PipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLMs.
This notebook goes over how to use LangChain with PipelineAI.
Install pipeline-ai#
The pipeline-ai library is required to use the PipelineAI API, AKA Pipeline Cloud. Install pipeline-ai using pip install pipeline-ai.
# Install the package
!pip install pipeline-ai
Imports#
import os
from langchain.llms import PipelineAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from PipelineAI. Check out the cloud quickstart guide. You’ll be given a 30 day free trial with 10 hours of serverless GPU compute to test different models.
os.environ["PIPELINE_API_KEY"] = "YOUR_API_KEY_HERE"
Create the PipelineAI instance#
When instantiating PipelineAI, you need to specify the id or tag of the pipeline you want to use, e.g. pipeline_key = "public/gpt-j:base". You then have the option of passing additional pipeline-specific keyword arguments:
llm = PipelineAI(pipeline_key="YOUR_PIPELINE_KEY", pipeline_kwargs={...})
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
OpenAI#
OpenAI offers a spectrum of models with different levels of power suitable for different tasks.
This example goes over how to use LangChain to interact with OpenAI models.
# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass
OPENAI_API_KEY = getpass()
········
import os
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
from langchain.llms import OpenAI
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = OpenAI()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
' Justin Bieber was born in 1994, so we are looking for the Super Bowl winner from that year. The Super Bowl in 1994 was Super Bowl XXVIII, and the winner was the Dallas Cowboys.'
If you are behind an explicit proxy, you can use the OPENAI_PROXY environment variable to pass through#
os.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"
CerebriumAI#
Cerebrium is an AWS SageMaker alternative. It also provides API access to several LLMs.
This notebook goes over how to use LangChain with CerebriumAI.
Install cerebrium#
The cerebrium package is required to use the CerebriumAI API. Install cerebrium using pip3 install cerebrium.
# Install the package
!pip3 install cerebrium
Imports#
import os
from langchain.llms import CerebriumAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from CerebriumAI. See here. You are given 1 hour of serverless GPU compute for free to test different models.
os.environ["CEREBRIUMAI_API_KEY"] = "YOUR_KEY_HERE"
Create the CerebriumAI instance#
You can specify different parameters such as the model endpoint url, max length, temperature, etc. You must provide an endpoint url.
llm = CerebriumAI(endpoint_url="YOUR ENDPOINT URL HERE")
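For example, a sketch that also sets generation parameters (the parameter names and values are illustrative and assume the wrapper forwards extra keyword arguments to the endpoint):
# Sketch: assumes extra keyword arguments are forwarded to the Cerebrium endpoint.
llm = CerebriumAI(
    endpoint_url="YOUR ENDPOINT URL HERE",
    max_length=100,  # illustrative value
    temperature=0.5,  # illustrative value
)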
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
Petals#
Petals runs 100B+ language models at home, BitTorrent-style.
This notebook goes over how to use LangChain with Petals.
Install petals#
The petals package is required to use the Petals API. Install petals using pip3 install petals.
!pip3 install petals
Imports#
import os
from langchain.llms import Petals
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from Hugging Face.
from getpass import getpass
HUGGINGFACE_API_KEY = getpass()
os.environ["HUGGINGFACE_API_KEY"] = HUGGINGFACE_API_KEY
Create the Petals instance#
You can specify different parameters such as the model name, max new tokens, temperature, etc.
# this can take several minutes to download big files!
llm = Petals(model_name="bigscience/bloom-petals")
Downloading: 1%|▏ | 40.8M/7.19G [00:24<15:44, 7.57MB/s]
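For example, a sketch that sets some of these parameters explicitly (the values are illustrative, and it assumes max_new_tokens and temperature are accepted as keyword arguments):
# Sketch: illustrative values, assuming these keyword arguments are supported by the wrapper.
llm = Petals(
    model_name="bigscience/bloom-petals",
    max_new_tokens=256,  # illustrative
    temperature=0.7,  # illustrative
)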
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
OpenLM#
OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP.
It implements the OpenAI Completion class so that it can be used as a drop-in replacement for the OpenAI API. This changeset utilizes BaseOpenAI for minimal added code.
This example goes over how to use LangChain to interact with both OpenAI and Hugging Face. You'll need API keys from both.
Setup#
Install dependencies and set API keys.
# Uncomment to install openlm and openai if you haven't already
# !pip install openlm
# !pip install openai
from getpass import getpass
import os
import subprocess
# Check if OPENAI_API_KEY environment variable is set
if "OPENAI_API_KEY" not in os.environ:
    print("Enter your OpenAI API key:")
    os.environ["OPENAI_API_KEY"] = getpass()
# Check if HF_API_TOKEN environment variable is set
if "HF_API_TOKEN" not in os.environ:
    print("Enter your HuggingFace Hub API key:")
    os.environ["HF_API_TOKEN"] = getpass()
Using LangChain with OpenLM#
Here we’re going to call two models in an LLMChain, text-davinci-003 from OpenAI and gpt2 on HuggingFace.
from langchain.llms import OpenLM
from langchain import PromptTemplate, LLMChain
question = "What is the capital of France?"
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
for model in ["text-davinci-003", "huggingface.co/gpt2"]:
    llm = OpenLM(model=model)
    llm_chain = LLMChain(prompt=prompt, llm=llm)
    result = llm_chain.run(question)
    print("""Model: {}
Result: {}""".format(model, result))
Model: text-davinci-003
Result: France is a country in Europe. The capital of France is Paris.
Model: huggingface.co/gpt2
Result: Question: What is the capital of France?
Answer: Let's think step by step. I am not going to lie, this is a complicated issue, and I don't see any solutions to all this, but it is still far more
Azure OpenAI#
This notebook goes over how to use LangChain with Azure OpenAI.
The Azure OpenAI API is compatible with OpenAI’s API. The openai Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI with the exceptions noted below.
API configuration#
You can configure the openai package to use Azure OpenAI using environment variables. The following is for bash:
# Set this to `azure`
export OPENAI_API_TYPE=azure
# The API version you want to use: set this to `2022-12-01` for the released version.
export OPENAI_API_VERSION=2022-12-01
# The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_BASE=https://your-resource-name.openai.azure.com
# The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_KEY=<your Azure OpenAI API key>
Alternatively, you can configure the API right within your running Python environment:
import os
os.environ["OPENAI_API_TYPE"] = "azure"
...
Deployments#
With Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.
Let’s say your deployment name is text-davinci-002-prod. In the openai Python API, you can specify this deployment with the engine parameter. For example:
import openai
response = openai.Completion.create(
engine="text-davinci-002-prod",
prompt="This is a test",
max_tokens=5
)
!pip install openai
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2022-12-01"
os.environ["OPENAI_API_BASE"] = "..."
os.environ["OPENAI_API_KEY"] = "..."
# Import Azure OpenAI
from langchain.llms import AzureOpenAI
# Create an instance of Azure OpenAI
# Replace the deployment name with your own
llm = AzureOpenAI(
deployment_name="td2",
model_name="text-davinci-002",
)
# Run the LLM
llm("Tell me a joke")
"\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!"
We can also print the LLM and see its custom print.
print(llm)
AzureOpenAI
Params: {'deployment_name': 'text-davinci-002', 'model_name': 'text-davinci-002', 'temperature': 0.7, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
Replicate#
Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you’re building your own machine learning models, Replicate makes it easy to deploy them at scale.
This example goes over how to use LangChain to interact with Replicate models.
Setup#
To run this notebook, you’ll need to create a replicate account and install the replicate python client.
!pip install replicate
# get a token: https://replicate.com/account
from getpass import getpass
REPLICATE_API_TOKEN = getpass()
········
import os
os.environ["REPLICATE_API_TOKEN"] = REPLICATE_API_TOKEN
from langchain.llms import Replicate
from langchain import PromptTemplate, LLMChain
Calling a model#
Find a model on the replicate explore page, and then paste in the model name and version in this format: model_name/version
For example, for this dolly model, click on the API tab. The model name/version would be: replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5
Only the model param is required, but we can add other model params when initializing.
For example, if we were running stable diffusion and wanted to change the image dimensions:
Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})
Note that only the first output of a model will be returned.
llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
prompt = """
Answer the following yes/no question by reasoning step by step.
Can a dog drive a car?
"""
llm(prompt)
'The legal driving age of dogs is 2. Cars are designed for humans to drive. Therefore, the final answer is yes.'
We can call any replicate model using this syntax. For example, we can call stable diffusion.
text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf",
input={'image_dimensions': '512x512'})
image_output = text2image("A cat riding a motorcycle by Picasso")
image_output
'https://replicate.delivery/pbxt/Cf07B1zqzFQLOSBQcKG7m9beE74wf7kuip5W9VxHJFembefKE/out-0.png'
The model spits out a URL. Let’s render it.
from PIL import Image
import requests
from io import BytesIO
response = requests.get(image_output)
img = Image.open(BytesIO(response.content))
img
Chaining Calls#
The whole point of LangChain is to… chain! Here's an example of how to do that.
from langchain.chains import SimpleSequentialChain
First, let's define the LLM for this chain as the Dolly model above, and text2image as a Stable Diffusion model.
dolly_llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf")
First prompt in the chain
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=dolly_llm, prompt=prompt)
Second prompt to get the logo for company description
second_prompt = PromptTemplate(
input_variables=["company_name"],
template="Write a description of a logo for this company: {company_name}",
)
chain_two = LLMChain(llm=dolly_llm, prompt=second_prompt)
Third prompt, let’s create the image based on the description output from prompt 2
third_prompt = PromptTemplate(
input_variables=["company_logo_description"],
template="{company_logo_description}",
)
chain_three = LLMChain(llm=text2image, prompt=third_prompt)
Now let’s run it!
# Run the chain specifying only the input variable for the first chain.
overall_chain = SimpleSequentialChain(chains=[chain, chain_two, chain_three], verbose=True)
catchphrase = overall_chain.run("colorful socks")
print(catchphrase)
> Entering new SimpleSequentialChain chain...
novelty socks
todd & co.
https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwUdHM4tcQfvCB/out-0.png
> Finished chain.
https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwUdHM4tcQfvCB/out-0.png
response = requests.get("https://replicate.delivery/pbxt/eq6foRJngThCAEBqse3nL3Km2MBfLnWQNd0Hy2SQRo2LuprCB/out-0.png")
img = Image.open(BytesIO(response.content))
img
Google Cloud Platform Vertex AI PaLM#
Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there.
PaLM API on Vertex AI is a Preview offering, subject to the Pre-GA Offerings Terms of the GCP Service Specific Terms.
Pre-GA products and features may have limited support, and changes to pre-GA products and features may not be compatible with other pre-GA versions. For more information, see the launch stage descriptions. Further, by using PaLM API on Vertex AI, you agree to the Generative AI Preview terms and conditions (Preview Terms).
For PaLM API on Vertex AI, you can process personal data as outlined in the Cloud Data Processing Addendum, subject to applicable restrictions and obligations in the Agreement (as defined in the Preview Terms).
To use Vertex AI PaLM you must have the google-cloud-aiplatform Python package installed and either:
Have credentials configured for your environment (gcloud, workload identity, etc…)
Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable
This codebase uses the google.auth library which first looks for the application credentials variable mentioned above, and then looks for system-level auth.
For more information, see:
https://cloud.google.com/docs/authentication/application-default-credentials#GAC
https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth
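For example, pointing the environment at a service account key file from Python (a minimal sketch; the path is a placeholder):
import os
# Placeholder path: google.auth reads this variable to find the service account key.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"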
#!pip install google-cloud-aiplatform
from langchain.llms import VertexAI
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = VertexAI()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
'Justin Bieber was born on March 1, 1994. The Super Bowl in 1994 was won by the San Francisco 49ers.\nThe final answer: San Francisco 49ers.'
How to use the async API for LLMs#
LangChain provides async support for LLMs by leveraging the asyncio library.
Async support is particularly useful for calling multiple LLMs concurrently, as these calls are network-bound. Currently, OpenAI, PromptLayerOpenAI, ChatOpenAI and Anthropic are supported, but async support for other LLMs is on the roadmap.
You can use the agenerate method to call an OpenAI LLM asynchronously.
import time
import asyncio
from langchain.llms import OpenAI
def generate_serially():
    llm = OpenAI(temperature=0.9)
    for _ in range(10):
        resp = llm.generate(["Hello, how are you?"])
        print(resp.generations[0][0].text)

async def async_generate(llm):
    resp = await llm.agenerate(["Hello, how are you?"])
    print(resp.generations[0][0].text)

async def generate_concurrently():
    llm = OpenAI(temperature=0.9)
    tasks = [async_generate(llm) for _ in range(10)]
    await asyncio.gather(*tasks)
s = time.perf_counter()
# If running this outside of Jupyter, use asyncio.run(generate_concurrently())
await generate_concurrently()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Concurrent executed in {elapsed:0.2f} seconds." + '\033[0m')
s = time.perf_counter()
generate_serially()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Serial executed in {elapsed:0.2f} seconds." + '\033[0m')
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, how about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thank you! How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you! How about you?
I'm doing well, thank you. How about you?
Concurrent executed in 1.39 seconds.
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thanks for asking. How about you?
I'm doing well, thanks! How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thanks for asking. How about you?
Serial executed in 5.77 seconds.
How (and why) to use the human input LLM#
Similar to the fake LLM, LangChain provides a pseudo LLM class that can be used for testing, debugging, or educational purposes. This allows you to mock out calls to the LLM and simulate how a human would respond if they received the prompts.
In this notebook, we go over how to use this.
We start by using the HumanInputLLM in an agent.
from langchain.llms.human import HumanInputLLM
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
Since we will use the WikipediaQueryRun tool in this notebook, you might need to install the wikipedia package if you haven’t done so already.
%pip install wikipedia
tools = load_tools(["wikipedia"])
llm = HumanInputLLM(prompt_func=lambda prompt: print(f"\n===PROMPT====\n{prompt}\n=====END OF PROMPT======"))
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is 'Bocchi the Rock!'?")
> Entering new AgentExecutor chain...
===PROMPT====
Answer the following questions as best you can. You have access to the following tools:
Wikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query.
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Wikipedia]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: What is 'Bocchi the Rock!'?
Thought:
=====END OF PROMPT======
I need to use a tool.
Action: Wikipedia
Action Input: Bocchi the Rock!, Japanese four-panel manga and anime series.
Observation: Page: Bocchi the Rock!
Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōbon volumes as of November 2022.
An anime television series adaptation produced by CloverWorks aired from October to December 2022. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim.
Page: Manga Time Kirara
Summary: Manga Time Kirara (まんがタイムきらら, Manga Taimu Kirara) is a Japanese seinen manga magazine published by Houbunsha which mainly serializes four-panel manga. The magazine is sold on the ninth of each month and was first published as a special edition of Manga Time, another Houbunsha magazine, on May 17, 2002. Characters from this magazine have appeared in a crossover role-playing game called Kirara Fantasia.
Page: Manga Time Kirara Max
Summary: Manga Time Kirara Max (まんがタイムきららMAX) is a Japanese four-panel seinen manga magazine published by Houbunsha. It is the third magazine of the "Kirara" series, after "Manga Time Kirara" and "Manga Time Kirara Carat". The first issue was released on September 29, 2004. Currently the magazine is released on the 19th of each month.
Thought:
===PROMPT====
Answer the following questions as best you can. You have access to the following tools:
Wikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query.
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Wikipedia]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: What is 'Bocchi the Rock!'?
Thought:I need to use a tool.
Action: Wikipedia
Action Input: Bocchi the Rock!, Japanese four-panel manga and anime series.
Observation: Page: Bocchi the Rock!
Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōbon volumes as of November 2022.
An anime television series adaptation produced by CloverWorks aired from October to December 2022. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim.
Page: Manga Time Kirara
Summary: Manga Time Kirara (まんがタイムきらら, Manga Taimu Kirara) is a Japanese seinen manga magazine published by Houbunsha which mainly serializes four-panel manga. The magazine is sold on the ninth of each month and was first published as a special edition of Manga Time, another Houbunsha magazine, on May 17, 2002. Characters from this magazine have appeared in a crossover role-playing game called Kirara Fantasia.
Page: Manga Time Kirara Max
Summary: Manga Time Kirara Max (まんがタイムきららMAX) is a Japanese four-panel seinen manga magazine published by Houbunsha. It is the third magazine of the "Kirara" series, after "Manga Time Kirara" and "Manga Time Kirara Carat". The first issue was released on September 29, 2004. Currently the magazine is released on the 19th of each month.
Thought:
=====END OF PROMPT======
These are not relevant articles.
Action: Wikipedia
Action Input: Bocchi the Rock!, Japanese four-panel manga series written and illustrated by Aki Hamaji.
Observation: Page: Bocchi the Rock!
Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōbon volumes as of November 2022.
An anime television series adaptation produced by CloverWorks aired from October to December 2022. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim.
Thought:
===PROMPT====
Answer the following questions as best you can. You have access to the following tools:
Wikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query.
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Wikipedia]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: What is 'Bocchi the Rock!'?
Thought:I need to use a tool.
Action: Wikipedia
Action Input: Bocchi the Rock!, Japanese four-panel manga and anime series.
Observation: Page: Bocchi the Rock!
Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōbon volumes as of November 2022.
An anime television series adaptation produced by CloverWorks aired from October to December 2022. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim.
Page: Manga Time Kirara
Summary: Manga Time Kirara (まんがタイムきらら, Manga Taimu Kirara) is a Japanese seinen manga magazine published by Houbunsha which mainly serializes four-panel manga. The magazine is sold on the ninth of each month and was first published as a special edition of Manga Time, another Houbunsha magazine, on May 17, 2002. Characters from this magazine have appeared in a crossover role-playing game called Kirara Fantasia.
Page: Manga Time Kirara Max
Summary: Manga Time Kirara Max (まんがタイムきららMAX) is a Japanese four-panel seinen manga magazine published by Houbunsha. It is the third magazine of the "Kirara" series, after "Manga Time Kirara" and "Manga Time Kirara Carat". The first issue was released on September 29, 2004. Currently the magazine is released on the 19th of each month.
Thought:These are not relevant articles.
Action: Wikipedia
Action Input: Bocchi the Rock!, Japanese four-panel manga series written and illustrated by Aki Hamaji.
Observation: Page: Bocchi the Rock!
Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōbon volumes as of November 2022.
An anime television series adaptation produced by CloverWorks aired from October to December 2022. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim.
Thought:
=====END OF PROMPT======
It worked.
Final Answer: Bocchi the Rock! is a four-panel manga series and anime television series. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim.
> Finished chain.
"Bocchi the Rock! is a four-panel manga series and anime television series. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim."
How to write a custom LLM wrapper#
This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain.
There is only one required thing that a custom LLM needs to implement:
A _call method that takes in a string, some optional stop words, and returns a string
There is a second optional thing it can implement:
An _identifying_params property that is used to help with printing of this class. Should return a dictionary.
Let’s implement a very simple custom LLM that just returns the first N characters of the input.
from typing import Any, List, Mapping, Optional
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
class CustomLLM(LLM):
    n: int

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return prompt[:self.n]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"n": self.n}
We can now use this as any other LLM.
llm = CustomLLM(n=10)
llm("This is a foobar thing")
'This is a '
We can also print the LLM and see its custom print.
print(llm)
CustomLLM
Params: {'n': 10}
How to stream LLM and Chat Model responses#
LangChain provides streaming support for LLMs. Currently, we support streaming for the OpenAI, ChatOpenAI, and ChatAnthropic implementations, but streaming support for other LLM implementations is on the roadmap. To utilize streaming, use a CallbackHandler that implements on_llm_new_token. In this example, we are using StreamingStdOutCallbackHandler.
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI, ChatAnthropic
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import HumanMessage
llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
resp = llm("Write me a song about sparkling water.")
Verse 1
I'm sippin' on sparkling water,
It's so refreshing and light,
It's the perfect way to quench my thirst
On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
Verse 2
I'm sippin' on sparkling water,
It's so bubbly and bright,
It's the perfect way to cool me down
On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
Verse 3
I'm sippin' on sparkling water,
It's so light and so clear,
It's the perfect way to keep me cool
On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
We still have access to the end LLMResult if using generate. However, token_usage is not currently supported for streaming.
llm.generate(["Tell me a joke."])
Q: What did the fish say when it hit the wall?
A: Dam!
LLMResult(generations=[[Generation(text='\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {}, 'model_name': 'text-davinci-003'})
Here’s an example with the ChatOpenAI chat model implementation:
chat = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's oh so pure
Sparkling water, I can't ignore
Chorus:
Sparkling water, oh how you shine
A taste so clean, it's simply divine
You quench my thirst, you make me feel alive
Sparkling water, you're my favorite vibe
Verse 2:
No sugar, no calories, just H2O
A drink that's good for me, don't you know
With lemon or lime, you're even better
Sparkling water, you're my forever
Chorus:
Sparkling water, oh how you shine
A taste so clean, it's simply divine
You quench my thirst, you make me feel alive
Sparkling water, you're my favorite vibe
|
https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html
|
795b796937e1-2
|
Sparkling water, you're my favorite vibe
Bridge:
You're my go-to drink, day or night
You make me feel so light
I'll never give you up, you're my true love
Sparkling water, you're sent from above
Chorus:
Sparkling water, oh how you shine
A taste so clean, it's simply divine
You quench my thirst, you make me feel alive
Sparkling water, you're my favorite vibe
Outro:
Sparkling water, you're the one for me
I'll never let you go, can't you see
You're my drink of choice, forevermore
Sparkling water, I adore.
Here is an example with the ChatAnthropic chat model implementation, which uses their Claude model.
chat = ChatAnthropic(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
Here is my attempt at a song about sparkling water:
Sparkling water, bubbles so bright,
Dancing in the glass with delight.
Refreshing and crisp, a fizzy delight,
Quenching my thirst with each sip I take.
The carbonation tickles my tongue,
As the refreshing water song is sung.
Lime or lemon, a citrus twist,
Makes sparkling water such a bliss.
Healthy and hydrating, a drink so pure,
Sparkling water, always alluring.
Bubbles ascending in a stream,
Sparkling water, you're my dream!
previous
How to serialize LLM classes
next
How to track token usage
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html
|
34c822cd0210-0
|
.ipynb
.pdf
How to serialize LLM classes
Contents
Loading
Saving
How to serialize LLM classes#
This notebook walks through how to write and read an LLM Configuration to and from disk. This is useful if you want to save the configuration for a given LLM (e.g., the provider, the temperature, etc).
from langchain.llms import OpenAI
from langchain.llms.loading import load_llm
Loading#
First, let’s go over loading an LLM from disk. LLMs can be saved on disk in two formats: json or yaml. No matter the extension, they are loaded in the same way.
!cat llm.json
{
"model_name": "text-davinci-003",
"temperature": 0.7,
"max_tokens": 256,
"top_p": 1.0,
"frequency_penalty": 0.0,
"presence_penalty": 0.0,
"n": 1,
"best_of": 1,
"request_timeout": null,
"_type": "openai"
}
llm = load_llm("llm.json")
!cat llm.yaml
_type: openai
best_of: 1
frequency_penalty: 0.0
max_tokens: 256
model_name: text-davinci-003
n: 1
presence_penalty: 0.0
request_timeout: null
temperature: 0.7
top_p: 1.0
llm = load_llm("llm.yaml")
Saving#
If you want to go from an LLM in memory to a serialized version of it, you can do so easily by calling the .save method. Again, this supports both json and yaml.
llm.save("llm.json")
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_serialization.html
|
34c822cd0210-1
|
llm.save("llm.json")
llm.save("llm.yaml")
previous
How to cache LLM calls
next
How to stream LLM and Chat Model responses
Contents
Loading
Saving
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_serialization.html
|
7099b084652b-0
|
.ipynb
.pdf
How (and why) to use the fake LLM
How (and why) to use the fake LLM#
We expose a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way.
In this notebook we go over how to use this.
We start by using the FakeLLM in an agent.
from langchain.llms.fake import FakeListLLM
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
tools = load_tools(["python_repl"])
responses=[
"Action: Python REPL\nAction Input: print(2 + 2)",
"Final Answer: 4"
]
llm = FakeListLLM(responses=responses)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("whats 2 + 2")
> Entering new AgentExecutor chain...
Action: Python REPL
Action Input: print(2 + 2)
Observation: 4
Thought:Final Answer: 4
> Finished chain.
'4'
previous
How to write a custom LLM wrapper
next
How (and why) to use the human input LLM
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/models/llms/examples/fake_llm.html
|
4b2adce48702-0
|
.ipynb
.pdf
How to cache LLM calls
Contents
In Memory Cache
SQLite Cache
Redis Cache
Standard Cache
Semantic Cache
GPTCache
Momento Cache
SQLAlchemy Cache
Custom SQLAlchemy Schemas
Optional Caching
Optional Caching in Chains
How to cache LLM calls#
This notebook covers how to cache results of individual LLM calls.
import langchain
from langchain.llms import OpenAI
# To make the caching really obvious, let's use a slower model.
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)
In Memory Cache#
from langchain.cache import InMemoryCache
langchain.llm_cache = InMemoryCache()
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 35.9 ms, sys: 28.6 ms, total: 64.6 ms
Wall time: 4.83 s
"\n\nWhy couldn't the bicycle stand up by itself? It was...two tired!"
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 238 µs, sys: 143 µs, total: 381 µs
Wall time: 1.76 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
SQLite Cache#
!rm .langchain.db
# We can do the same thing with a SQLite cache
from langchain.cache import SQLiteCache
langchain.llm_cache = SQLiteCache(database_path=".langchain.db")
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
|
4b2adce48702-1
|
llm("Tell me a joke")
CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms
Wall time: 825 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms
Wall time: 2.67 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
Redis Cache#
Standard Cache#
Use Redis to cache prompts and responses.
# We can do the same thing with a Redis cache
# (make sure your local Redis instance is running first before running this example)
from redis import Redis
from langchain.cache import RedisCache
langchain.llm_cache = RedisCache(redis_=Redis())
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 6.88 ms, sys: 8.75 ms, total: 15.6 ms
Wall time: 1.04 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 1.59 ms, sys: 610 µs, total: 2.2 ms
Wall time: 5.58 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
Semantic Cache#
Use Redis to cache prompts and responses and evaluate hits based on semantic similarity.
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
|
4b2adce48702-2
|
Semantic Cache#
Use Redis to cache prompts and responses and evaluate hits based on semantic similarity.
from langchain.embeddings import OpenAIEmbeddings
from langchain.cache import RedisSemanticCache
langchain.llm_cache = RedisSemanticCache(
redis_url="redis://localhost:6379",
embedding=OpenAIEmbeddings()
)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 351 ms, sys: 156 ms, total: 507 ms
Wall time: 3.37 s
"\n\nWhy don't scientists trust atoms?\nBecause they make up everything."
%%time
# The second time, while not a direct hit, the question is semantically similar to the original question,
# so it uses the cached result!
llm("Tell me one joke")
CPU times: user 6.25 ms, sys: 2.72 ms, total: 8.97 ms
Wall time: 262 ms
"\n\nWhy don't scientists trust atoms?\nBecause they make up everything."
GPTCache#
We can use GPTCache for exact match caching OR to cache results based on semantic similarity.
Let’s start with an example of exact-match caching.
from gptcache import Cache
from gptcache.manager.factory import manager_factory
from gptcache.processor.pre import get_prompt
from langchain.cache import GPTCache
import hashlib
def get_hashed_name(name):
return hashlib.sha256(name.encode()).hexdigest()
def init_gptcache(cache_obj: Cache, llm: str):
hashed_llm = get_hashed_name(llm)
cache_obj.init(
pre_embedding_func=get_prompt,
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
|
4b2adce48702-3
|
cache_obj.init(
pre_embedding_func=get_prompt,
data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"),
)
langchain.llm_cache = GPTCache(init_gptcache)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 21.5 ms, sys: 21.3 ms, total: 42.8 ms
Wall time: 6.2 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 571 µs, sys: 43 µs, total: 614 µs
Wall time: 635 µs
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
Let’s now show an example of similarity caching
from gptcache import Cache
from gptcache.adapter.api import init_similar_cache
from langchain.cache import GPTCache
import hashlib
def get_hashed_name(name):
return hashlib.sha256(name.encode()).hexdigest()
def init_gptcache(cache_obj: Cache, llm: str):
hashed_llm = get_hashed_name(llm)
init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{hashed_llm}")
langchain.llm_cache = GPTCache(init_gptcache)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 1.42 s, sys: 279 ms, total: 1.7 s
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
|
4b2adce48702-4
|
Wall time: 8.44 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# This is an exact match, so it finds it in the cache
llm("Tell me a joke")
CPU times: user 866 ms, sys: 20 ms, total: 886 ms
Wall time: 226 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# This is not an exact match, but semantically within distance so it hits!
llm("Tell me joke")
CPU times: user 853 ms, sys: 14.8 ms, total: 868 ms
Wall time: 224 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
Momento Cache#
Use Momento to cache prompts and responses.
Requires the momento package; uncomment below to install it:
# !pip install momento
You’ll need to get a Momento auth token to use this class. This can be passed directly to a momento.CacheClient if you’d like to instantiate that yourself, passed as the named parameter auth_token to MomentoCache.from_client_params, or simply set as the environment variable MOMENTO_AUTH_TOKEN.
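For example, the environment-variable route is the simplest; a minimal sketch (the token value is a placeholder you would replace with your own):
import os
os.environ["MOMENTO_AUTH_TOKEN"] = "<your-momento-auth-token>"  # placeholder, not a real token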
from datetime import timedelta
from langchain.cache import MomentoCache
cache_name = "langchain"
ttl = timedelta(days=1)
langchain.llm_cache = MomentoCache.from_client_params(cache_name, ttl)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 40.7 ms, sys: 16.5 ms, total: 57.2 ms
Wall time: 1.73 s
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
|
4b2adce48702-5
|
Wall time: 1.73 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
%%time
# The second time it is, so it goes faster
# When run in the same region as the cache, latencies are single digit ms
llm("Tell me a joke")
CPU times: user 3.16 ms, sys: 2.98 ms, total: 6.14 ms
Wall time: 57.9 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
SQLAlchemy Cache#
# You can use SQLAlchemyCache to cache with any SQL database supported by SQLAlchemy.
# from langchain.cache import SQLAlchemyCache
# from sqlalchemy import create_engine
# engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")
# langchain.llm_cache = SQLAlchemyCache(engine)
Custom SQLAlchemy Schemas#
# You can define your own declarative SQLAlchemyCache child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with Postgres, use:
from sqlalchemy import Column, Integer, String, Computed, Index, Sequence
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy_utils import TSVectorType
from langchain.cache import SQLAlchemyCache
Base = declarative_base()
class FulltextLLMCache(Base): # type: ignore
"""Postgres table for fulltext-indexed LLM Cache"""
__tablename__ = "llm_cache_fulltext"
id = Column(Integer, Sequence('cache_id'), primary_key=True)
prompt = Column(String, nullable=False)
llm = Column(String, nullable=False)
idx = Column(Integer)
response = Column(String)
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
|
4b2adce48702-6
|
idx = Column(Integer)
response = Column(String)
prompt_tsv = Column(TSVectorType(), Computed("to_tsvector('english', llm || ' ' || prompt)", persisted=True))
__table_args__ = (
Index("idx_fulltext_prompt_tsv", prompt_tsv, postgresql_using="gin"),
)
engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")
langchain.llm_cache = SQLAlchemyCache(engine, FulltextLLMCache)
Optional Caching#
You can also turn off caching for specific LLMs should you choose. In the example below, even though global caching is enabled, we turn it off for a specific LLM.
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2, cache=False)
%%time
llm("Tell me a joke")
CPU times: user 5.8 ms, sys: 2.71 ms, total: 8.51 ms
Wall time: 745 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
%%time
llm("Tell me a joke")
CPU times: user 4.91 ms, sys: 2.64 ms, total: 7.55 ms
Wall time: 623 ms
'\n\nTwo guys stole a calendar. They got six months each.'
Optional Caching in Chains#
You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, it’s often easier to construct the chain first, and then edit the LLM afterwards.
As an example, we will load a summarizer map-reduce chain. We will cache results for the map step, but not for the combine (reduce) step.
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
|
4b2adce48702-7
|
llm = OpenAI(model_name="text-davinci-002")
no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False)
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
text_splitter = CharacterTextSplitter()
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
texts = text_splitter.split_text(state_of_the_union)
from langchain.docstore.document import Document
docs = [Document(page_content=t) for t in texts[:3]]
from langchain.chains.summarize import load_summarize_chain
chain = load_summarize_chain(llm, chain_type="map_reduce", reduce_llm=no_cache_llm)
%%time
chain.run(docs)
CPU times: user 452 ms, sys: 60.3 ms, total: 512 ms
Wall time: 5.09 s
'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure. In response to Russian aggression in Ukraine, the United States is joining with European allies to impose sanctions and isolate Russia. American forces are being mobilized to protect NATO countries in the event that Putin decides to keep moving west. The Ukrainians are bravely fighting back, but the next few weeks will be hard for them. Putin will pay a high price for his actions in the long run. Americans should not be alarmed, as the United States is taking action to protect its interests and allies.'
When we run it again, we see that it runs substantially faster but the final answer is different. This is due to caching at the map steps, but not at the reduce step.
%%time
chain.run(docs)
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
|
4b2adce48702-8
|
%%time
chain.run(docs)
CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms
Wall time: 1.04 s
'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure.'
!rm .langchain.db sqlite.db
previous
How (and why) to use the human input LLM
next
How to serialize LLM classes
Contents
In Memory Cache
SQLite Cache
Redis Cache
Standard Cache
Semantic Cache
GPTCache
Momento Cache
SQLAlchemy Cache
Custom SQLAlchemy Schemas
Optional Caching
Optional Caching in Chains
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
|
bb1669d77bfc-0
|
.ipynb
.pdf
How to track token usage
How to track token usage#
This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API.
Let’s first look at an extremely simple example of tracking token usage for a single LLM call.
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)
with get_openai_callback() as cb:
result = llm("Tell me a joke")
print(cb)
Tokens Used: 42
Prompt Tokens: 4
Completion Tokens: 38
Successful Requests: 1
Total Cost (USD): $0.00084
Anything inside the context manager will get tracked. Here’s an example of using it to track multiple calls in sequence.
with get_openai_callback() as cb:
result = llm("Tell me a joke")
result2 = llm("Tell me a joke")
print(cb.total_tokens)
91
If a chain or agent with multiple steps in it is used, it will track all those steps.
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
with get_openai_callback() as cb:
response = agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
|
https://python.langchain.com/en/latest/modules/models/llms/examples/token_usage_tracking.html
|
bb1669d77bfc-1
|
print(f"Total Tokens: {cb.total_tokens}")
print(f"Prompt Tokens: {cb.prompt_tokens}")
print(f"Completion Tokens: {cb.completion_tokens}")
print(f"Total Cost (USD): ${cb.total_cost}")
> Entering new AgentExecutor chain...
I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.
Action: Search
Action Input: "Olivia Wilde boyfriend"
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought: I need to find out Harry Styles' age.
Action: Search
Action Input: "Harry Styles age"
Observation: 29 years
Thought: I need to calculate 29 raised to the 0.23 power.
Action: Calculator
Action Input: 29^0.23
Observation: Answer: 2.169459462491557
Thought: I now know the final answer.
Final Answer: Harry Styles, Olivia Wilde's boyfriend, is 29 years old and his age raised to the 0.23 power is 2.169459462491557.
> Finished chain.
Total Tokens: 1506
Prompt Tokens: 1350
Completion Tokens: 156
Total Cost (USD): $0.03012
previous
How to stream LLM and Chat Model responses
next
Integrations
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/models/llms/examples/token_usage_tracking.html
|
4770a60d259d-0
|
.rst
.pdf
Example Selectors
Example Selectors#
Note
Conceptual Guide
If you have a large number of examples, you may need to select which ones to include in the prompt. The ExampleSelector is the class responsible for doing so.
The base interface is defined as below:
class BaseExampleSelector(ABC):
"""Interface for selecting examples to include in prompts."""
@abstractmethod
def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
"""Select which examples to use based on the inputs."""
The only method it needs to expose is a select_examples method. This takes in the input variables and then returns a list of examples. It is up to each specific implementation as to how those examples are selected. Let’s take a look at some below.
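As a minimal sketch of what an implementation can look like, here is a hypothetical selector that ignores the inputs and always returns the first two stored examples (the concrete selectors covered below also implement an add_example method, so it is included here as well):
from typing import Dict, List
from langchain.prompts.example_selector.base import BaseExampleSelector

class FirstTwoExampleSelector(BaseExampleSelector):
    """Illustrative selector: always returns the first two stored examples."""
    def __init__(self, examples: List[dict]):
        self.examples = examples

    def add_example(self, example: Dict[str, str]) -> None:
        # Store a new example for later selection.
        self.examples.append(example)

    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        # The inputs are ignored here; real selectors use them to pick relevant examples.
        return self.examples[:2]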
See below for a list of example selectors.
How to create a custom example selector
LengthBased ExampleSelector
Maximal Marginal Relevance ExampleSelector
NGram Overlap ExampleSelector
Similarity ExampleSelector
previous
Chat Prompt Template
next
How to create a custom example selector
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/prompts/example_selectors.html
|
1260e4d5e97c-0
|
.ipynb
.pdf
Getting Started
Contents
PromptTemplates
to_string
to_messages
Getting Started#
This section contains everything related to prompts. A prompt is the value passed into the Language Model. This value can either be a string (for LLMs) or a list of messages (for Chat Models).
The data types of these prompts are rather simple, but their construction is anything but. Value props of LangChain here include:
A standard interface for string prompts and message prompts
A standard (to get started) interface for string prompt templates and message prompt templates
Example Selectors: methods for inserting examples into the prompt for the language model to follow
OutputParsers: methods for inserting instructions into the prompt that tell the language model how to format its output, as well as methods for parsing that output string back into a structured format.
We have in-depth documentation for specific types of string prompts, specific types of chat prompts, example selectors, and output parsers.
Here, we cover a quick start with the standard interface for simple prompts.
PromptTemplates#
PromptTemplates are responsible for constructing a prompt value. These PromptTemplates can do things like formatting, example selection, and more. At a high level, these are basically objects that expose a format_prompt method for constructing a prompt. Under the hood, ANYTHING can happen.
from langchain.prompts import PromptTemplate, ChatPromptTemplate
string_prompt = PromptTemplate.from_template("tell me a joke about {subject}")
chat_prompt = ChatPromptTemplate.from_template("tell me a joke about {subject}")
string_prompt_value = string_prompt.format_prompt(subject="soccer")
chat_prompt_value = chat_prompt.format_prompt(subject="soccer")
to_string#
This is what is called when passing to an LLM (which expects raw text)
string_prompt_value.to_string()
'tell me a joke about soccer'
|
https://python.langchain.com/en/latest/modules/prompts/getting_started.html
|
1260e4d5e97c-1
|
string_prompt_value.to_string()
'tell me a joke about soccer'
chat_prompt_value.to_string()
'Human: tell me a joke about soccer'
to_messages#
This is what is called when passing to ChatModel (which expects a list of messages)
string_prompt_value.to_messages()
[HumanMessage(content='tell me a joke about soccer', additional_kwargs={}, example=False)]
chat_prompt_value.to_messages()
[HumanMessage(content='tell me a joke about soccer', additional_kwargs={}, example=False)]
previous
Prompts
next
Prompt Templates
Contents
PromptTemplates
to_string
to_messages
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/prompts/getting_started.html
|
8e665713a1b9-0
|
.rst
.pdf
Output Parsers
Output Parsers#
Note
Conceptual Guide
Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.
Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:
get_format_instructions() -> str: A method which returns a string containing instructions for how the output of a language model should be formatted.
parse(str) -> Any: A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.
And then one optional one:
parse_with_prompt(str, PromptValue) -> Any: A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
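As a minimal sketch of what such a class can look like (assuming the BaseOutputParser base class from langchain.schema; the CommaListParser name is illustrative, and the built-in CommaSeparatedListOutputParser covered below does this for you):
from typing import List
from langchain.schema import BaseOutputParser

class CommaListParser(BaseOutputParser):
    """Illustrative parser that turns 'a, b, c' into ['a', 'b', 'c']."""

    def get_format_instructions(self) -> str:
        return "Your response should be a list of comma separated values."

    def parse(self, text: str) -> List[str]:
        # Split the raw model output on commas and strip whitespace.
        return [item.strip() for item in text.strip().split(",")]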
To start, we recommend familiarizing yourself with the Getting Started section
Output Parsers
After that, we provide deep dives on all the different types of output parsers.
CommaSeparatedListOutputParser
Enum Output Parser
OutputFixingParser
PydanticOutputParser
RetryOutputParser
Structured Output Parser
previous
Similarity ExampleSelector
next
Output Parsers
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/prompts/output_parsers.html
|
154a740fe908-0
|
.rst
.pdf
Prompt Templates
Prompt Templates#
Note
Conceptual Guide
Language models take text as input - that text is commonly referred to as a prompt.
Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input.
LangChain provides several classes and functions to make constructing and working with prompts easy.
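For example, a minimal sketch of a template that combines a fixed instruction with user input (the variable names here are illustrative):
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "Summarize the following text in {word_count} words:\n\n{text}"
)
print(prompt.format(word_count="10", text="LangChain makes it easy to construct prompts..."))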
The following sections of documentation are provided:
Getting Started: An overview of all the functionality LangChain provides for working with and constructing prompts.
How-To Guides: A collection of how-to guides. These highlight how to accomplish various objectives with our prompt class.
Reference: API reference documentation for all prompt classes.
previous
Getting Started
next
Getting Started
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/prompts/prompt_templates.html
|
a4388eb300bc-0
|
.ipynb
.pdf
Chat Prompt Template
Contents
Format output
Different types of MessagePromptTemplate
Chat Prompt Template#
Chat Models take a list of chat messages as input - this list is commonly referred to as a prompt.
These chat messages differ from a raw string (which you would pass into an LLM) in that every message is associated with a role.
For example, in the OpenAI Chat Completion API, a chat message can be associated with the AI, human, or system role. The model is supposed to follow instructions from the system message more closely.
Therefore, LangChain provides several related prompt templates to make constructing and working with prompts easy. You are encouraged to use these chat-related prompt templates instead of PromptTemplate when querying chat models to fully exploit the potential of the underlying chat model.
from langchain.prompts import (
ChatPromptTemplate,
PromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
To create a message template associated with a role, you use MessagePromptTemplate.
For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:
template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate outside and then pass it in, eg:
prompt=PromptTemplate(
template="You are a helpful assistant that translates {input_language} to {output_language}.",
input_variables=["input_language", "output_language"],
)
|
https://python.langchain.com/en/latest/modules/prompts/chat_prompt_template.html
|
a4388eb300bc-1
|
input_variables=["input_language", "output_language"],
)
system_message_prompt_2 = SystemMessagePromptTemplate(prompt=prompt)
assert system_message_prompt == system_message_prompt_2
After that, you can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – this returns a PromptValue, which you can convert to a string or a list of Message objects, depending on whether you want to use the formatted value as input to an LLM or a chat model.
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted messages
chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages()
[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}),
HumanMessage(content='I love programming.', additional_kwargs={})]
Format output#
The output of the format method is available as a string, a list of messages, or a ChatPromptValue.
As string:
output = chat_prompt.format(input_language="English", output_language="French", text="I love programming.")
output
'System: You are a helpful assistant that translates English to French.\nHuman: I love programming.'
# or alternatively
output_2 = chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_string()
assert output == output_2
As ChatPromptValue
chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.")
ChatPromptValue(messages=[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}), HumanMessage(content='I love programming.', additional_kwargs={})])
As list of Message objects
chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages()
|
https://python.langchain.com/en/latest/modules/prompts/chat_prompt_template.html
|
a4388eb300bc-2
|
[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}),
HumanMessage(content='I love programming.', additional_kwargs={})]
Different types of MessagePromptTemplate#
LangChain provides different types of MessagePromptTemplate. The most commonly used are AIMessagePromptTemplate, SystemMessagePromptTemplate and HumanMessagePromptTemplate, which create an AI message, system message and human message respectively.
However, in cases where the chat model supports taking chat messages with an arbitrary role, you can use ChatMessagePromptTemplate, which allows the user to specify the role name.
from langchain.prompts import ChatMessagePromptTemplate
prompt = "May the {subject} be with you"
chat_message_prompt = ChatMessagePromptTemplate.from_template(role="Jedi", template=prompt)
chat_message_prompt.format(subject="force")
ChatMessage(content='May the force be with you', additional_kwargs={}, role='Jedi')
LangChain also provides MessagesPlaceholder, which gives you full control over which messages are rendered during formatting. This can be useful when you are uncertain what role you should be using for your message prompt templates, or when you wish to insert a list of messages during formatting.
from langchain.prompts import MessagesPlaceholder
human_prompt = "Summarize our conversation so far in {word_count} words."
human_message_template = HumanMessagePromptTemplate.from_template(human_prompt)
chat_prompt = ChatPromptTemplate.from_messages([MessagesPlaceholder(variable_name="conversation"), human_message_template])
human_message = HumanMessage(content="What is the best way to learn programming?")
ai_message = AIMessage(content="""\
1. Choose a programming language: Decide on a programming language that you want to learn.
2. Start with the basics: Familiarize yourself with the basic programming concepts such as variables, data types and control structures.
|
https://python.langchain.com/en/latest/modules/prompts/chat_prompt_template.html
|
a4388eb300bc-3
|
3. Practice, practice, practice: The best way to learn programming is through hands-on experience\
""")
chat_prompt.format_prompt(conversation=[human_message, ai_message], word_count="10").to_messages()
[HumanMessage(content='What is the best way to learn programming?', additional_kwargs={}),
AIMessage(content='1. Choose a programming language: Decide on a programming language that you want to learn. \n\n2. Start with the basics: Familiarize yourself with the basic programming concepts such as variables, data types and control structures.\n\n3. Practice, practice, practice: The best way to learn programming is through hands-on experience', additional_kwargs={}),
HumanMessage(content='Summarize our conversation so far in 10 words.', additional_kwargs={})]
previous
Output Parsers
next
Example Selectors
Contents
Format output
Different types of MessagePromptTemplate
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/prompts/chat_prompt_template.html
|
50943b71eb7d-0
|
.ipynb
.pdf
Similarity ExampleSelector
Similarity ExampleSelector#
The SemanticSimilarityExampleSelector selects examples based on which examples are most similar to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs.
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
# These are a lot of examples of a pretend task of creating antonyms.
examples = [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
{"input": "energetic", "output": "lethargic"},
{"input": "sunny", "output": "gloomy"},
{"input": "windy", "output": "calm"},
]
example_selector = SemanticSimilarityExampleSelector.from_examples(
# This is the list of examples available to select from.
examples,
# This is the embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(),
# This is the VectorStore class that is used to store the embeddings and do a similarity search over.
Chroma,
# This is the number of examples to produce.
k=1
)
similar_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
|
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/similarity.html
|
50943b71eb7d-1
|
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
# Input is a feeling, so should select the happy/sad example
print(similar_prompt.format(adjective="worried"))
Give the antonym of every input
Input: happy
Output: sad
Input: worried
Output:
# Input is a measurement, so should select the tall/short example
print(similar_prompt.format(adjective="fat"))
Give the antonym of every input
Input: happy
Output: sad
Input: fat
Output:
# You can add new examples to the SemanticSimilarityExampleSelector as well
similar_prompt.example_selector.add_example({"input": "enthusiastic", "output": "apathetic"})
print(similar_prompt.format(adjective="joyful"))
Give the antonym of every input
Input: happy
Output: sad
Input: joyful
Output:
previous
NGram Overlap ExampleSelector
next
Output Parsers
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/similarity.html
|
76ce980910a0-0
|
.ipynb
.pdf
Maximal Marginal Relevance ExampleSelector
Maximal Marginal Relevance ExampleSelector#
The MaxMarginalRelevanceExampleSelector selects examples based on a combination of similarity to the inputs and diversity among the selected examples. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs, and then iteratively adding them while penalizing them for closeness to already selected examples.
from langchain.prompts.example_selector import MaxMarginalRelevanceExampleSelector
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
# These are a lot of examples of a pretend task of creating antonyms.
examples = [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
{"input": "energetic", "output": "lethargic"},
{"input": "sunny", "output": "gloomy"},
{"input": "windy", "output": "calm"},
]
example_selector = MaxMarginalRelevanceExampleSelector.from_examples(
# This is the list of examples available to select from.
examples,
# This is the embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(),
# This is the VectorStore class that is used to store the embeddings and do a similarity search over.
FAISS,
# This is the number of examples to produce.
k=2
)
mmr_prompt = FewShotPromptTemplate(
|
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/mmr.html
|
76ce980910a0-1
|
k=2
)
mmr_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
# Input is a feeling, so should select the happy/sad example as the first one
print(mmr_prompt.format(adjective="worried"))
Give the antonym of every input
Input: happy
Output: sad
Input: windy
Output: calm
Input: worried
Output:
# Let's compare this to what we would just get if we went solely off of similarity
similar_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
similar_prompt.example_selector.k = 2
print(similar_prompt.format(adjective="worried"))
Give the antonym of every input
Input: happy
Output: sad
Input: windy
Output: calm
Input: worried
Output:
previous
LengthBased ExampleSelector
next
NGram Overlap ExampleSelector
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/mmr.html
|
e6fc687e6079-0
|
.ipynb
.pdf
LengthBased ExampleSelector
LengthBased ExampleSelector#
This ExampleSelector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.
from langchain.prompts import PromptTemplate
from langchain.prompts import FewShotPromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector
# These are a lot of examples of a pretend task of creating antonyms.
examples = [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
{"input": "energetic", "output": "lethargic"},
{"input": "sunny", "output": "gloomy"},
{"input": "windy", "output": "calm"},
]
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
example_selector = LengthBasedExampleSelector(
# These are the examples it has available to choose from.
examples=examples,
# This is the PromptTemplate being used to format the examples.
example_prompt=example_prompt,
# This is the maximum length that the formatted examples should be.
# Length is measured by the get_text_length function below.
max_length=25,
# This is the function used to get the length of a string, which is used
# to determine which examples to include. It is commented out because
# it is provided as a default value if none is specified.
|
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/length_based.html
|
e6fc687e6079-1
|
# it is provided as a default value if none is specified.
# get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x))
)
dynamic_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
# An example with small input, so it selects all examples.
print(dynamic_prompt.format(adjective="big"))
Give the antonym of every input
Input: happy
Output: sad
Input: tall
Output: short
Input: energetic
Output: lethargic
Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: big
Output:
# An example with long input, so it selects only one example.
long_string = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else"
print(dynamic_prompt.format(adjective=long_string))
Give the antonym of every input
Input: happy
Output: sad
Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else
Output:
# You can add an example to an example selector as well.
new_example = {"input": "big", "output": "small"}
dynamic_prompt.example_selector.add_example(new_example)
print(dynamic_prompt.format(adjective="enthusiastic"))
Give the antonym of every input
Input: happy
Output: sad
Input: tall
Output: short
Input: energetic
Output: lethargic
Input: sunny
Output: gloomy
Input: windy
Output: calm
|
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/length_based.html
|
e6fc687e6079-2
|
Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: big
Output: small
Input: enthusiastic
Output:
previous
How to create a custom example selector
next
Maximal Marginal Relevance ExampleSelector
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/length_based.html
|
26abd306d934-0
|
.ipynb
.pdf
NGram Overlap ExampleSelector
NGram Overlap ExampleSelector#
The NGramOverlapExampleSelector selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive.
The selector allows for a threshold score to be set. Examples with an ngram overlap score less than or equal to the threshold are excluded. The threshold is set to -1.0 by default, so it will not exclude any examples, only reorder them. Setting the threshold to 0.0 will exclude examples that have no ngram overlap with the input.
from langchain.prompts import PromptTemplate
from langchain.prompts.example_selector.ngram_overlap import NGramOverlapExampleSelector
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
# These are a lot of examples of a pretend task of creating antonyms.
examples = [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
{"input": "energetic", "output": "lethargic"},
{"input": "sunny", "output": "gloomy"},
{"input": "windy", "output": "calm"},
]
# These are examples of a fictional translation task.
examples = [
{"input": "See Spot run.", "output": "Ver correr a Spot."},
{"input": "My dog barks.", "output": "Mi perro ladra."},
{"input": "Spot can run.", "output": "Spot puede correr."},
|
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
|
26abd306d934-1
|
{"input": "Spot can run.", "output": "Spot puede correr."},
]
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
example_selector = NGramOverlapExampleSelector(
# These are the examples it has available to choose from.
examples=examples,
# This is the PromptTemplate being used to format the examples.
example_prompt=example_prompt,
# This is the threshold at or below which examples are excluded.
# It is set to -1.0 by default.
threshold=-1.0,
# For negative threshold:
# Selector sorts examples by ngram overlap score, and excludes none.
# For threshold greater than 1.0:
# Selector excludes all examples, and returns an empty list.
# For threshold equal to 0.0:
# Selector sorts examples by ngram overlap score,
# and excludes those with no ngram overlap with input.
)
dynamic_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the Spanish translation of every input",
suffix="Input: {sentence}\nOutput:",
input_variables=["sentence"],
)
# An example input with large ngram overlap with "Spot can run."
# and no overlap with "My dog barks."
print(dynamic_prompt.format(sentence="Spot can run fast."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: My dog barks.
|
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
|
26abd306d934-2
|
Output: Ver correr a Spot.
Input: My dog barks.
Output: Mi perro ladra.
Input: Spot can run fast.
Output:
# You can add examples to NGramOverlapExampleSelector as well.
new_example = {"input": "Spot plays fetch.", "output": "Spot juega a buscar."}
example_selector.add_example(new_example)
print(dynamic_prompt.format(sentence="Spot can run fast."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: My dog barks.
Output: Mi perro ladra.
Input: Spot can run fast.
Output:
# You can set a threshold at which examples are excluded.
# For example, setting threshold equal to 0.0
# excludes examples with no ngram overlaps with input.
# Since "My dog barks." has no ngram overlaps with "Spot can run fast."
# it is excluded.
example_selector.threshold=0.0
print(dynamic_prompt.format(sentence="Spot can run fast."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: Spot can run fast.
Output:
# Setting small nonzero threshold
example_selector.threshold=0.09
print(dynamic_prompt.format(sentence="Spot can play fetch."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: Spot plays fetch.
Output: Spot juega a buscar.
|
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
|
26abd306d934-3
|
Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: Spot can play fetch.
Output:
# Setting threshold greater than 1.0
example_selector.threshold=1.0+1e-9
print(dynamic_prompt.format(sentence="Spot can play fetch."))
Give the Spanish translation of every input
Input: Spot can play fetch.
Output:
previous
Maximal Marginal Relevance ExampleSelector
next
Similarity ExampleSelector
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
|
24d2afc7d1be-0
|
.md
.pdf
How to create a custom example selector
Contents
Implement custom example selector
Use custom example selector
How to create a custom example selector#
In this tutorial, we’ll create a custom example selector that selects two examples at random from a given list of examples.
An ExampleSelector must implement two methods:
An add_example method which takes in an example and adds it into the ExampleSelector
A select_examples method which takes in input variables (which are meant to be user input) and returns a list of examples to use in the few shot prompt.
Let’s implement a custom ExampleSelector that just selects two examples at random.
Note
Take a look at the current set of example selector implementations supported in LangChain here.
Implement custom example selector#
from langchain.prompts.example_selector.base import BaseExampleSelector
from typing import Dict, List
import numpy as np
class CustomExampleSelector(BaseExampleSelector):
def __init__(self, examples: List[Dict[str, str]]):
self.examples = examples
def add_example(self, example: Dict[str, str]) -> None:
"""Add new example to store for a key."""
self.examples.append(example)
def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
"""Select which examples to use based on the inputs."""
return np.random.choice(self.examples, size=2, replace=False)
Use custom example selector#
examples = [
{"foo": "1"},
{"foo": "2"},
{"foo": "3"}
]
# Initialize example selector.
example_selector = CustomExampleSelector(examples)
# Select examples
example_selector.select_examples({"foo": "foo"})
# -> array([{'foo': '2'}, {'foo': '3'}], dtype=object)
|
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html
|
24d2afc7d1be-1
|
# Add new example to the set of examples
example_selector.add_example({"foo": "4"})
example_selector.examples
# -> [{'foo': '1'}, {'foo': '2'}, {'foo': '3'}, {'foo': '4'}]
# Select examples
example_selector.select_examples({"foo": "foo"})
# -> array([{'foo': '1'}, {'foo': '4'}], dtype=object)
previous
Example Selectors
next
LengthBased ExampleSelector
Contents
Implement custom example selector
Use custom example selector
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html
|
730d2909981c-0
|
.ipynb
.pdf
Output Parsers
Output Parsers#
Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.
Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:
get_format_instructions() -> str: A method which returns a string containing instructions for how the output of a language model should be formatted.
parse(str) -> Any: A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.
And then one optional one:
parse_with_prompt(str, PromptValue) -> Any: A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
Below we go over the main type of output parser, the PydanticOutputParser. See the examples folder for other options.
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List
model_name = 'text-davinci-003'
temperature = 0.0
model = OpenAI(model_name=model_name, temperature=temperature)
# Define your desired data structure.
class Joke(BaseModel):
setup: str = Field(description="question to set up a joke")
punchline: str = Field(description="answer to resolve the joke")
|
https://python.langchain.com/en/latest/modules/prompts/output_parsers/getting_started.html
|
730d2909981c-1
|
punchline: str = Field(description="answer to resolve the joke")
# You can add custom validation logic easily with Pydantic.
@validator('setup')
def question_ends_with_question_mark(cls, field):
if field[-1] != '?':
raise ValueError("Badly formed question!")
return field
# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."
_input = prompt.format_prompt(query=joke_query)
output = model(_input.to_string())
parser.parse(output)
Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')
previous
Output Parsers
next
CommaSeparatedListOutputParser
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/prompts/output_parsers/getting_started.html
|
39a5b58ff4de-0
|
.ipynb
.pdf
RetryOutputParser
RetryOutputParser#
While in some cases it is possible to fix parsing mistakes by looking only at the output, in other cases it is not. An example of this is when the output is not just in the incorrect format, but is also only partially complete. Consider the example below.
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser, OutputFixingParser, RetryOutputParser
from pydantic import BaseModel, Field, validator
from typing import List
template = """Based on the user question, provide an Action and Action Input for what step should be taken.
{format_instructions}
Question: {query}
Response:"""
class Action(BaseModel):
action: str = Field(description="action to take")
action_input: str = Field(description="input to the action")
parser = PydanticOutputParser(pydantic_object=Action)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
prompt_value = prompt.format_prompt(query="who is leo di caprios gf?")
bad_response = '{"action": "search"}'
If we try to parse this response as is, we will get an error
parser.parse(bad_response)
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:24, in PydanticOutputParser.parse(self, text)
23 json_object = json.loads(json_str)
|
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/retry.html
|
39a5b58ff4de-1
|
23 json_object = json.loads(json_str)
---> 24 return self.pydantic_object.parse_obj(json_object)
26 except (json.JSONDecodeError, ValidationError) as e:
File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:527, in pydantic.main.BaseModel.parse_obj()
File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:342, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for Action
action_input
field required (type=value_error.missing)
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
Cell In[6], line 1
----> 1 parser.parse(bad_response)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text)
27 name = self.pydantic_object.__name__
28 msg = f"Failed to parse {name} from completion {text}. Got: {e}"
---> 29 raise OutputParserException(msg)
OutputParserException: Failed to parse Action from completion {"action": "search"}. Got: 1 validation error for Action
action_input
field required (type=value_error.missing)
If we try to use the OutputFixingParser to fix this error, it will be confused: it doesn't know what value to put for action_input.
fix_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())
fix_parser.parse(bad_response)
Action(action='search', action_input='')
|
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/retry.html
|
39a5b58ff4de-2
|
fix_parser.parse(bad_response)
Action(action='search', action_input='')
Instead, we can use the RetryOutputParser, which passes the prompt (as well as the original output) back to the model to try again and get a better response. Below we use the RetryWithErrorOutputParser variant, which additionally includes the parsing error in the retry prompt.
from langchain.output_parsers import RetryWithErrorOutputParser
retry_parser = RetryWithErrorOutputParser.from_llm(parser=parser, llm=OpenAI(temperature=0))
retry_parser.parse_with_prompt(bad_response, prompt_value)
Action(action='search', action_input='who is leo di caprios gf?')
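For comparison, here is a minimal sketch (not part of the original notebook) of the plain RetryOutputParser imported above, assuming it exposes the same from_llm and parse_with_prompt interface; it retries using only the prompt and the bad output, without the error text.
# Sketch: same call pattern as above, but the error message is not forwarded to the model.
simple_retry_parser = RetryOutputParser.from_llm(parser=parser, llm=OpenAI(temperature=0))
simple_retry_parser.parse_with_prompt(bad_response, prompt_value)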
|
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/retry.html
|
ffa3080fd49a-0
|
OutputFixingParser
OutputFixingParser#
This output parser wraps another output parser and tries to fix any mistakes in its output.
The Pydantic guardrail simply tries to parse the LLM response; if it does not parse correctly, then it errors.
But we can do other things besides throw errors. Specifically, we can pass the misformatted output, along with the format instructions, to the model and ask it to fix it.
For this example, we'll use the PydanticOutputParser defined below. Here's what happens if we pass it a result that does not comply with the schema:
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List
class Actor(BaseModel):
name: str = Field(description="name of an actor")
film_names: List[str] = Field(description="list of names of films they starred in")
actor_query = "Generate the filmography for a random actor."
parser = PydanticOutputParser(pydantic_object=Actor)
misformatted = "{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}"
parser.parse(misformatted)
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:23, in PydanticOutputParser.parse(self, text)
22 json_str = match.group()
---> 23 json_object = json.loads(json_str)
24 return self.pydantic_object.parse_obj(json_object)
|
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html
|
ffa3080fd49a-1
|
24 return self.pydantic_object.parse_obj(json_object)
File ~/.pyenv/versions/3.9.1/lib/python3.9/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
343 if (cls is None and object_hook is None and
344 parse_int is None and parse_float is None and
345 parse_constant is None and object_pairs_hook is None and not kw):
--> 346 return _default_decoder.decode(s)
347 if cls is None:
File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
333 """Return the Python representation of ``s`` (a ``str`` instance
334 containing a JSON document).
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:353, in JSONDecoder.raw_decode(self, s, idx)
352 try:
--> 353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
Cell In[6], line 1
----> 1 parser.parse(misformatted)
|
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html
|
ffa3080fd49a-2
|
Cell In[6], line 1
----> 1 parser.parse(misformatted)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text)
27 name = self.pydantic_object.__name__
28 msg = f"Failed to parse {name} from completion {text}. Got: {e}"
---> 29 raise OutputParserException(msg)
OutputParserException: Failed to parse Actor from completion {'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}. Got: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
Now we can construct and use an OutputFixingParser. This output parser takes as arguments another output parser and an LLM with which to try to correct any formatting mistakes.
from langchain.output_parsers import OutputFixingParser
new_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())
new_parser.parse(misformatted)
Actor(name='Tom Hanks', film_names=['Forrest Gump'])
|
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html
|
efebc8656141-0
|
CommaSeparatedListOutputParser
CommaSeparatedListOutputParser#
Here's another parser that is strictly less powerful than Pydantic/JSON parsing: it simply parses the model's output into a list of comma-separated items.
from langchain.output_parsers import CommaSeparatedListOutputParser
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
output_parser = CommaSeparatedListOutputParser()
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(
template="List five {subject}.\n{format_instructions}",
input_variables=["subject"],
partial_variables={"format_instructions": format_instructions}
)
model = OpenAI(temperature=0)
_input = prompt.format(subject="ice cream flavors")
output = model(_input)
output_parser.parse(output)
['Vanilla',
'Chocolate',
'Strawberry',
'Mint Chocolate Chip',
'Cookies and Cream']
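The same parser also works with chat models. The following is a minimal sketch (not part of the original notebook) that mirrors the chat-model pattern used elsewhere in these docs, reusing the ChatOpenAI, ChatPromptTemplate, and HumanMessagePromptTemplate imports above.
chat_model = ChatOpenAI(temperature=0)
chat_prompt = ChatPromptTemplate(
    messages=[
        HumanMessagePromptTemplate.from_template("List five {subject}.\n{format_instructions}")
    ],
    input_variables=["subject"],
    partial_variables={"format_instructions": format_instructions}
)
_input = chat_prompt.format_prompt(subject="ice cream flavors")
output = chat_model(_input.to_messages())
output_parser.parse(output.content)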
|
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/comma_separated.html
|
246f695acc37-0
|
Structured Output Parser
Structured Output Parser#
While the Pydantic/JSON parser is more powerful, we initially experimented with data structures having text fields only.
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
Here we define the response schema we want to receive.
response_schemas = [
ResponseSchema(name="answer", description="answer to the user's question"),
ResponseSchema(name="source", description="source used to answer the user's question, should be a website.")
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
We now get a string that contains instructions for how the response should be formatted, and we then insert that into our prompt.
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(
template="answer the users question as best as possible.\n{format_instructions}\n{question}",
input_variables=["question"],
partial_variables={"format_instructions": format_instructions}
)
We can now use this to format a prompt to send to the language model, and then parse the returned result.
model = OpenAI(temperature=0)
_input = prompt.format_prompt(question="what's the capital of france?")
output = model(_input.to_string())
output_parser.parse(output)
{'answer': 'Paris',
'source': 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html'}
And here's an example of using this with a chat model:
chat_model = ChatOpenAI(temperature=0)
prompt = ChatPromptTemplate(
messages=[
|
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/structured.html
|
246f695acc37-1
|
prompt = ChatPromptTemplate(
messages=[
HumanMessagePromptTemplate.from_template("answer the users question as best as possible.\n{format_instructions}\n{question}")
],
input_variables=["question"],
partial_variables={"format_instructions": format_instructions}
)
_input = prompt.format_prompt(question="what's the capital of france?")
output = chat_model(_input.to_messages())
output_parser.parse(output.content)
{'answer': 'Paris', 'source': 'https://en.wikipedia.org/wiki/Paris'}
|
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/structured.html
|
bfd23377249b-0
|
Enum Output Parser
Enum Output Parser#
This notebook shows how to use an Enum output parser.
from langchain.output_parsers.enum import EnumOutputParser
from enum import Enum
class Colors(Enum):
RED = "red"
GREEN = "green"
BLUE = "blue"
parser = EnumOutputParser(enum=Colors)
parser.parse("red")
<Colors.RED: 'red'>
# Can handle spaces
parser.parse(" green")
<Colors.GREEN: 'green'>
# And new lines
parser.parse("blue\n")
<Colors.BLUE: 'blue'>
# And raises errors when appropriate
parser.parse("yellow")
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File ~/workplace/langchain/langchain/output_parsers/enum.py:25, in EnumOutputParser.parse(self, response)
24 try:
---> 25 return self.enum(response.strip())
26 except ValueError:
File ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:315, in EnumMeta.__call__(cls, value, names, module, qualname, type, start)
314 if names is None: # simple value lookup
--> 315 return cls.__new__(cls, value)
316 # otherwise, functional API: we're creating a new Enum type
File ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:611, in Enum.__new__(cls, value)
610 if result is None and exc is None:
--> 611 raise ve_exc
612 elif exc is None:
ValueError: 'yellow' is not a valid Colors
During handling of the above exception, another exception occurred:
|
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/enum.html
|
bfd23377249b-1
|
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
Cell In[8], line 2
1 # And raises errors when appropriate
----> 2 parser.parse("yellow")
File ~/workplace/langchain/langchain/output_parsers/enum.py:27, in EnumOutputParser.parse(self, response)
25 return self.enum(response.strip())
26 except ValueError:
---> 27 raise OutputParserException(
28 f"Response '{response}' is not one of the "
29 f"expected values: {self._valid_values}"
30 )
OutputParserException: Response 'yellow' is not one of the expected values: ['red', 'green', 'blue']
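Like the other output parsers, the enum parser can also be wired into a prompt. The snippet below is a minimal sketch, not part of the original notebook; it assumes EnumOutputParser implements get_format_instructions like the other parsers shown in these docs and that an OpenAI model is available.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
prompt = PromptTemplate(
    template="What color is {object}? Reply with just the color.\n{format_instructions}",
    input_variables=["object"],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)
model = OpenAI(temperature=0)
output = model(prompt.format(object="a ripe tomato"))
parser.parse(output)
# e.g. <Colors.RED: 'red'> if the model answers "red"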
|
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/enum.html
|
c13234c82a3a-0
|
PydanticOutputParser
PydanticOutputParser#
This output parser allows users to specify an arbitrary JSON schema and query LLMs for JSON outputs that conform to that schema.
Keep in mind that large language models are leaky abstractions! You'll have to use an LLM with sufficient capacity to generate well-formed JSON. In the OpenAI family, DaVinci can do this reliably, but Curie's ability already drops off dramatically.
Use Pydantic to declare your data model. Pydantic's BaseModel is like a Python dataclass, but with actual type checking + coercion.
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List
model_name = 'text-davinci-003'
temperature = 0.0
model = OpenAI(model_name=model_name, temperature=temperature)
# Define your desired data structure.
class Joke(BaseModel):
setup: str = Field(description="question to set up a joke")
punchline: str = Field(description="answer to resolve the joke")
# You can add custom validation logic easily with Pydantic.
@validator('setup')
def question_ends_with_question_mark(cls, field):
if field[-1] != '?':
raise ValueError("Badly formed question!")
return field
# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."
# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
|
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/pydantic.html
|
c13234c82a3a-1
|
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
_input = prompt.format_prompt(query=joke_query)
output = model(_input.to_string())
parser.parse(output)
Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')
# Here's another example, but with a compound typed field.
class Actor(BaseModel):
name: str = Field(description="name of an actor")
film_names: List[str] = Field(description="list of names of films they starred in")
actor_query = "Generate the filmography for a random actor."
parser = PydanticOutputParser(pydantic_object=Actor)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
_input = prompt.format_prompt(query=actor_query)
output = model(_input.to_string())
parser.parse(output)
Actor(name='Tom Hanks', film_names=['Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Cast Away', 'Toy Story'])
|
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/pydantic.html
|
ce309414b377-0
|
How-To Guides
How-To Guides#
If you’re new to the library, you may want to start with the Quickstart.
The user guide here shows more advanced workflows and how to use the library in different ways.
Connecting to a Feature Store
How to create a custom prompt template
How to create a prompt template that uses few shot examples
How to work with partial Prompt Templates
How to serialize prompts
|
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/how_to_guides.html
|
4372bedf6173-0
|
Getting Started
Contents
What is a prompt template?
Create a prompt template
Template formats
Validate template
Serialize prompt template
Pass few shot examples to a prompt template
Select examples for a prompt template
Getting Started#
In this tutorial, we will learn about:
what a prompt template is, and why it is needed,
how to create a prompt template,
how to pass few shot examples to a prompt template,
how to select examples for a prompt template.
What is a prompt template?#
A prompt template refers to a reproducible way to generate a prompt. It contains a text string (“the template”), that can take in a set of parameters from the end user and generate a prompt.
The prompt template may contain:
instructions to the language model,
a set of few shot examples to help the language model generate a better response,
a question to the language model.
The following code snippet contains an example of a prompt template:
from langchain import PromptTemplate
template = """
I want you to act as a naming consultant for new companies.
What is a good name for a company that makes {product}?
"""
prompt = PromptTemplate(
input_variables=["product"],
template=template,
)
prompt.format(product="colorful socks")
# -> I want you to act as a naming consultant for new companies.
# -> What is a good name for a company that makes colorful socks?
Create a prompt template#
You can create simple hardcoded prompts using the PromptTemplate class. Prompt templates can take any number of input variables, and can be formatted to generate a prompt.
from langchain import PromptTemplate
# An example prompt with no input variables
no_input_prompt = PromptTemplate(input_variables=[], template="Tell me a joke.")
no_input_prompt.format()
# -> "Tell me a joke."
|
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/getting_started.html
|
4372bedf6173-1
|
no_input_prompt.format()
# -> "Tell me a joke."
# An example prompt with one input variable
one_input_prompt = PromptTemplate(input_variables=["adjective"], template="Tell me a {adjective} joke.")
one_input_prompt.format(adjective="funny")
# -> "Tell me a funny joke."
# An example prompt with multiple input variables
multiple_input_prompt = PromptTemplate(
input_variables=["adjective", "content"],
template="Tell me a {adjective} joke about {content}."
)
multiple_input_prompt.format(adjective="funny", content="chickens")
# -> "Tell me a funny joke about chickens."
If you do not wish to specify input_variables manually, you can also create a PromptTemplate using the from_template class method. langchain will automatically infer the input_variables based on the template passed in.
template = "Tell me a {adjective} joke about {content}."
prompt_template = PromptTemplate.from_template(template)
prompt_template.input_variables
# -> ['adjective', 'content']
prompt_template.format(adjective="funny", content="chickens")
# -> Tell me a funny joke about chickens.
You can create custom prompt templates that format the prompt in any way you want. For more information, see Custom Prompt Templates.
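As a rough illustration (a hypothetical sketch, not the example from that guide), a custom template can subclass StringPromptTemplate, assuming that class is exported from langchain.prompts in your version, and implement format however it likes:
from langchain.prompts import StringPromptTemplate

class ShoutingPromptTemplate(StringPromptTemplate):
    # Hypothetical example: upper-cases the product name before formatting.
    def format(self, **kwargs) -> str:
        product = kwargs["product"].upper()
        return f"What is a good name for a company that makes {product}?"

shouting_prompt = ShoutingPromptTemplate(input_variables=["product"])
shouting_prompt.format(product="colorful socks")
# -> "What is a good name for a company that makes COLORFUL SOCKS?"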
Template formats#
By default, PromptTemplate will treat the provided template as a Python f-string. You can specify another template format through the template_format argument:
# Make sure jinja2 is installed before running this
jinja2_template = "Tell me a {{ adjective }} joke about {{ content }}"
prompt_template = PromptTemplate.from_template(template=jinja2_template, template_format="jinja2")
prompt_template.format(adjective="funny", content="chickens")
# -> Tell me a funny joke about chickens.
|
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/getting_started.html
|
4372bedf6173-2
|
# -> Tell me a funny joke about chickens.
Currently, PromptTemplate only supports the jinja2 and f-string templating formats. If there is another templating format that you would like to use, feel free to open an issue on the GitHub page.
Validate template#
By default, PromptTemplate will validate the template string by checking whether the input_variables match the variables defined in the template. You can disable this behavior by setting validate_template to False.
template = "I am learning langchain because {reason}."
prompt_template = PromptTemplate(template=template,
input_variables=["reason", "foo"]) # ValueError due to extra variables
prompt_template = PromptTemplate(template=template,
input_variables=["reason", "foo"],
validate_template=False) # No error
Serialize prompt template#
You can save your PromptTemplate into a file in your local filesystem. langchain will automatically infer the file format from the file extension. Currently, langchain supports saving templates to YAML and JSON files.
prompt_template.save("awesome_prompt.json") # Save to JSON file
from langchain.prompts import load_prompt
loaded_prompt = load_prompt("awesome_prompt.json")
assert prompt_template == loaded_prompt
langchain also supports loading prompt templates from LangChainHub, which contains a collection of useful prompts you can use in your project. You can read more about LangChainHub and the prompts available with it here.
from langchain.prompts import load_prompt
prompt = load_prompt("lc://prompts/conversation/prompt.json")
prompt.format(history="", input="What is 1 + 1?")
You can learn more about serializing prompt templates in How to serialize prompts.
Pass few shot examples to a prompt template#
Few shot examples are a set of examples that can be used to help the language model generate a better response.
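As a minimal sketch of the idea (not taken from this page, and assuming the FewShotPromptTemplate class exported by langchain.prompts), each example dict is rendered with an example_prompt and placed between a prefix and a suffix:
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
]
# How each individual example gets rendered.
example_prompt = PromptTemplate(
    input_variables=["word", "antonym"],
    template="Word: {word}\nAntonym: {antonym}",
)
few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input.",
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"],
    example_separator="\n\n",
)
print(few_shot_prompt.format(input="big"))
# The rendered prompt contains the prefix, both examples, and then "Word: big\nAntonym:".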
|
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/getting_started.html
|