[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}),
HumanMessage(content='I love programming.', additional_kwargs={})]
Different types of MessagePromptTemplate#
LangChain provides different types of MessagePromptTemplate. The most commonly used are AIMessagePromptTemplate, SystemMessagePromptTemplate and HumanMessagePromptTemplate, which create an AI message, system message and human message respectively.
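For instance, a minimal sketch using two of these classes (AIMessagePromptTemplate works the same way):
from langchain.prompts import SystemMessagePromptTemplate, HumanMessagePromptTemplate
system_message_prompt = SystemMessagePromptTemplate.from_template(
    "You are a helpful assistant that translates {input_language} to {output_language}."
)
human_message_prompt = HumanMessagePromptTemplate.from_template("{text}")
system_message_prompt.format(input_language="English", output_language="French")
# -> SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={})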
However, in cases where the chat model supports chat messages with an arbitrary role, you can use ChatMessagePromptTemplate, which allows the user to specify the role name.
from langchain.prompts import ChatMessagePromptTemplate
prompt = "May the {subject} be with you"
chat_message_prompt = ChatMessagePromptTemplate.from_template(role="Jedi", template=prompt)
chat_message_prompt.format(subject="force")
ChatMessage(content='May the force be with you', additional_kwargs={}, role='Jedi')
LangChain also provides MessagesPlaceholder, which gives you full control over which messages are rendered during formatting. This can be useful when you are uncertain of what role you should be using for your message prompt templates or when you wish to insert a list of messages during formatting.
from langchain.prompts import MessagesPlaceholder
human_prompt = "Summarize our conversation so far in {word_count} words."
human_message_template = HumanMessagePromptTemplate.from_template(human_prompt)
chat_prompt = ChatPromptTemplate.from_messages([MessagesPlaceholder(variable_name="conversation"), human_message_template])
human_message = HumanMessage(content="What is the best way to learn programming?")
ai_message = AIMessage(content="""\
1. Choose a programming language: Decide on a programming language that you want to learn.
2. Start with the basics: Familiarize yourself with the basic programming concepts such as variables, data types and control structures.
3. Practice, practice, practice: The best way to learn programming is through hands-on experience\
""")
chat_prompt.format_prompt(conversation=[human_message, ai_message], word_count="10").to_messages()
[HumanMessage(content='What is the best way to learn programming?', additional_kwargs={}),
AIMessage(content='1. Choose a programming language: Decide on a programming language that you want to learn. \n\n2. Start with the basics: Familiarize yourself with the basic programming concepts such as variables, data types and control structures.\n\n3. Practice, practice, practice: The best way to learn programming is through hands-on experience', additional_kwargs={}),
HumanMessage(content='Summarize our conversation so far in 10 words.', additional_kwargs={})]
Prompt Templates#
Language models take text as input - that text is commonly referred to as a prompt.
Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input.
LangChain provides several classes and functions to make constructing and working with prompts easy.
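For example, a minimal sketch of a reusable prompt (covered in detail in Getting Started):
from langchain import PromptTemplate
prompt = PromptTemplate.from_template("Suggest a name for a company that makes {product}.")
prompt.format(product="colorful socks")
# -> 'Suggest a name for a company that makes colorful socks.'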
The following sections of documentation are provided:
Getting Started: An overview of all the functionality LangChain provides for working with and constructing prompts.
How-To Guides: A collection of how-to guides. These highlight how to accomplish various objectives with our prompt class.
Reference: API reference documentation for all prompt classes.
Output Parsers#
Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.
Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:
get_format_instructions() -> str: A method which returns a string containing instructions for how the output of a language model should be formatted.
parse(str) -> Any: A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.
And then one optional one:
parse_with_prompt(str) -> Any: A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
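As a minimal sketch, a custom parser implementing this interface might look as follows (a hypothetical illustration, not one of the built-in parsers):
from langchain.schema import BaseOutputParser
class CommaSeparatedParser(BaseOutputParser):
    def get_format_instructions(self) -> str:
        return "Your response should be a list of comma separated values, eg: `foo, bar, baz`"
    def parse(self, text: str) -> list:
        # Split the model's response on commas and strip surrounding whitespace.
        return [item.strip() for item in text.split(",")]
CommaSeparatedParser().parse("happy, sad")
# -> ['happy', 'sad']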
To start, we recommend familiarizing yourself with the Getting Started section.
Output Parsers
After that, we provide deep dives on all the different types of output parsers.
CommaSeparatedListOutputParser
Enum Output Parser
OutputFixingParser
PydanticOutputParser
RetryOutputParser
Structured Output Parser
Example Selectors#
If you have a large number of examples, you may need to select which ones to include in the prompt. The ExampleSelector is the class responsible for doing so.
The base interface is defined as below:
from abc import ABC, abstractmethod
from typing import Dict, List

class BaseExampleSelector(ABC):
    """Interface for selecting examples to include in prompts."""

    @abstractmethod
    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        """Select which examples to use based on the inputs."""
The only method it needs to expose is a select_examples method. This takes in the input variables and then returns a list of examples. It is up to each specific implementation as to how those examples are selected. See below for the list of example selectors LangChain provides.
How to create a custom example selector
LengthBased ExampleSelector
Maximal Marginal Relevance ExampleSelector
NGram Overlap ExampleSelector
Similarity ExampleSelector
Maximal Marginal Relevance ExampleSelector#
The MaxMarginalRelevanceExampleSelector selects examples based on a combination of which examples are most similar to the inputs, while also optimizing for diversity. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs, and then iteratively adding them while penalizing them for closeness to already selected examples.
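The selection loop can be sketched roughly as follows; this is an illustration of the idea only (not the library's internal implementation) and assumes unit-normalized embedding vectors:
import numpy as np
def mmr_select(query_vec, example_vecs, k=2, lambda_mult=0.5):
    # Greedily pick examples, trading off relevance to the query
    # against redundancy with already-selected examples.
    selected = []
    candidates = list(range(len(example_vecs)))
    while candidates and len(selected) < k:
        def score(i):
            relevance = float(np.dot(query_vec, example_vecs[i]))
            redundancy = max(
                (float(np.dot(example_vecs[i], example_vecs[j])) for j in selected),
                default=0.0,
            )
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected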
from langchain.prompts.example_selector import MaxMarginalRelevanceExampleSelector
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
# These are a lot of examples of a pretend task of creating antonyms.
examples = [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
{"input": "energetic", "output": "lethargic"},
{"input": "sunny", "output": "gloomy"},
{"input": "windy", "output": "calm"},
]
example_selector = MaxMarginalRelevanceExampleSelector.from_examples(
# This is the list of examples available to select from.
examples,
# This is the embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(),
# This is the VectorStore class that is used to store the embeddings and do a similarity search over.
FAISS,
# This is the number of examples to produce.
k=2
)
mmr_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
# Input is a feeling, so should select the happy/sad example as the first one
print(mmr_prompt.format(adjective="worried"))
Give the antonym of every input
Input: happy
Output: sad
Input: windy
Output: calm
Input: worried
Output:
# Let's compare this to what we would just get if we went solely off of similarity,
# by using SemanticSimilarityExampleSelector instead of MaxMarginalRelevanceExampleSelector.
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
example_selector = SemanticSimilarityExampleSelector.from_examples(
    # This is the list of examples available to select from.
    examples,
    # This is the embedding class used to produce embeddings which are used to measure semantic similarity.
    OpenAIEmbeddings(),
    # This is the VectorStore class that is used to store the embeddings and do a similarity search over.
    FAISS,
    # This is the number of examples to produce.
    k=2
)
similar_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
similar_prompt.example_selector.k = 2
print(similar_prompt.format(adjective="worried"))
Give the antonym of every input
Input: happy
Output: sad
Input: windy
Output: calm
Input: worried
Output:
NGram Overlap ExampleSelector#
The NGramOverlapExampleSelector selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive.
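As a toy illustration of what an ngram overlap score measures (the library's actual metric may differ in detail), a set-based unigram overlap can be computed like this:
def ngram_overlap(a: str, b: str, n: int = 1) -> float:
    # Jaccard overlap between the n-gram sets of two sentences.
    a_grams = set(zip(*[a.split()[i:] for i in range(n)]))
    b_grams = set(zip(*[b.split()[i:] for i in range(n)]))
    if not a_grams or not b_grams:
        return 0.0
    return len(a_grams & b_grams) / len(a_grams | b_grams)
ngram_overlap("Spot can run fast.", "Spot can run.")  # nonzero overlap
ngram_overlap("Spot can run fast.", "My dog barks.")  # 0.0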
The selector allows a threshold score to be set. Examples with an ngram overlap score less than or equal to the threshold are excluded. By default the threshold is set to -1.0, so no examples are excluded, only reordered. Setting the threshold to 0.0 will exclude examples that have no ngram overlap with the input.
from langchain.prompts.example_selector.ngram_overlap import NGramOverlapExampleSelector
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
# These are examples of a fictional translation task.
examples = [
{"input": "See Spot run.", "output": "Ver correr a Spot."},
{"input": "My dog barks.", "output": "Mi perro ladra."},
{"input": "Spot can run.", "output": "Spot puede correr."}, | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html |
a667b4d3aed2-1 | {"input": "Spot can run.", "output": "Spot puede correr."},
]
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
example_selector = NGramOverlapExampleSelector(
# These are the examples it has available to choose from.
examples=examples,
# This is the PromptTemplate being used to format the examples.
example_prompt=example_prompt,
# This is the threshold at or below which examples are excluded.
# It is set to -1.0 by default.
threshold=-1.0,
# For negative threshold:
# Selector sorts examples by ngram overlap score, and excludes none.
# For threshold greater than 1.0:
# Selector excludes all examples, and returns an empty list.
# For threshold equal to 0.0:
# Selector sorts examples by ngram overlap score,
# and excludes those with no ngram overlap with input.
)
dynamic_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the Spanish translation of every input",
suffix="Input: {sentence}\nOutput:",
input_variables=["sentence"],
)
# An example input with large ngram overlap with "Spot can run."
# and no overlap with "My dog barks."
print(dynamic_prompt.format(sentence="Spot can run fast."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: My dog barks.
Output: Mi perro ladra.
Input: Spot can run fast.
Output:
# You can add examples to NGramOverlapExampleSelector as well.
new_example = {"input": "Spot plays fetch.", "output": "Spot juega a buscar."}
example_selector.add_example(new_example)
print(dynamic_prompt.format(sentence="Spot can run fast."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: My dog barks.
Output: Mi perro ladra.
Input: Spot can run fast.
Output:
# You can set a threshold at which examples are excluded.
# For example, setting threshold equal to 0.0
# excludes examples with no ngram overlaps with input.
# Since "My dog barks." has no ngram overlaps with "Spot can run fast."
# it is excluded.
example_selector.threshold = 0.0
print(dynamic_prompt.format(sentence="Spot can run fast."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: Spot can run fast.
Output:
# Setting small nonzero threshold
example_selector.threshold = 0.09
print(dynamic_prompt.format(sentence="Spot can play fetch."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: Spot can play fetch.
Output:
# Setting threshold greater than 1.0
example_selector.threshold = 1.0 + 1e-9
print(dynamic_prompt.format(sentence="Spot can play fetch."))
Give the Spanish translation of every input
Input: Spot can play fetch.
Output:
LengthBased ExampleSelector#
This ExampleSelector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.
from langchain.prompts import PromptTemplate
from langchain.prompts import FewShotPromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector
# These are a lot of examples of a pretend task of creating antonyms.
examples = [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
{"input": "energetic", "output": "lethargic"},
{"input": "sunny", "output": "gloomy"},
{"input": "windy", "output": "calm"},
]
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
example_selector = LengthBasedExampleSelector(
# These are the examples it has available to choose from.
examples=examples,
# This is the PromptTemplate being used to format the examples.
example_prompt=example_prompt,
# This is the maximum length that the formatted examples should be.
# Length is measured by the get_text_length function below.
max_length=25,
# This is the function used to get the length of a string, which is used
# to determine which examples to include. It is commented out because
# it is provided as a default value if none is specified.
# get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x))
)
dynamic_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
# An example with small input, so it selects all examples.
print(dynamic_prompt.format(adjective="big"))
Give the antonym of every input
Input: happy
Output: sad
Input: tall
Output: short
Input: energetic
Output: lethargic
Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: big
Output:
# An example with long input, so it selects only one example.
long_string = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else"
print(dynamic_prompt.format(adjective=long_string))
Give the antonym of every input
Input: happy
Output: sad
Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else
Output:
# You can add an example to an example selector as well.
new_example = {"input": "big", "output": "small"}
dynamic_prompt.example_selector.add_example(new_example)
print(dynamic_prompt.format(adjective="enthusiastic"))
Give the antonym of every input
Input: happy
Output: sad
Input: tall
Output: short
Input: energetic
Output: lethargic
Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: big
Output: small
Input: enthusiastic
Output:
Similarity ExampleSelector#
The SemanticSimilarityExampleSelector selects examples based on which examples are most similar to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs.
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
# These are a lot of examples of a pretend task of creating antonyms.
examples = [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
{"input": "energetic", "output": "lethargic"},
{"input": "sunny", "output": "gloomy"},
{"input": "windy", "output": "calm"},
]
example_selector = SemanticSimilarityExampleSelector.from_examples(
# This is the list of examples available to select from.
examples,
# This is the embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(),
# This is the VectorStore class that is used to store the embeddings and do a similarity search over.
Chroma,
# This is the number of examples to produce.
k=1
)
similar_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input", | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/similarity.html |
d1141ea1dfa1-1 | example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
# Input is a feeling, so should select the happy/sad example
print(similar_prompt.format(adjective="worried"))
Give the antonym of every input
Input: happy
Output: sad
Input: worried
Output:
# Input is a measurement, so should select the tall/short example
print(similar_prompt.format(adjective="fat"))
Give the antonym of every input
Input: happy
Output: sad
Input: fat
Output:
# You can add new examples to the SemanticSimilarityExampleSelector as well
similar_prompt.example_selector.add_example({"input": "enthusiastic", "output": "apathetic"})
print(similar_prompt.format(adjective="joyful"))
Give the antonym of every input
Input: happy
Output: sad
Input: joyful
Output:
How to create a custom example selector#
In this tutorial, we’ll create a custom example selector that selects two examples at random from a given list of examples.
An ExampleSelector must implement two methods:
An add_example method which takes in an example and adds it into the ExampleSelector
A select_examples method which takes in input variables (which are meant to be user input) and returns a list of examples to use in the few shot prompt.
Let’s implement a custom ExampleSelector that just selects two examples at random.
Note
Take a look at the current set of example selector implementations supported in LangChain here.
Implement custom example selector#
from langchain.prompts.example_selector.base import BaseExampleSelector
from typing import Dict, List
import numpy as np
class CustomExampleSelector(BaseExampleSelector):
    def __init__(self, examples: List[Dict[str, str]]):
        self.examples = examples

    def add_example(self, example: Dict[str, str]) -> None:
        """Add new example to store for a key."""
        self.examples.append(example)

    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        """Select which examples to use based on the inputs."""
        return np.random.choice(self.examples, size=2, replace=False)
Use custom example selector#
examples = [
{"foo": "1"},
{"foo": "2"},
{"foo": "3"}
]
# Initialize example selector.
example_selector = CustomExampleSelector(examples)
# Select examples
example_selector.select_examples({"foo": "foo"})
# -> array([{'foo': '2'}, {'foo': '3'}], dtype=object)
# Add new example to the set of examples
example_selector.add_example({"foo": "4"})
example_selector.examples
# -> [{'foo': '1'}, {'foo': '2'}, {'foo': '3'}, {'foo': '4'}]
# Select examples
example_selector.select_examples({"foo": "foo"})
# -> array([{'foo': '1'}, {'foo': '4'}], dtype=object)
Getting Started#
In this tutorial, we will learn about:
what a prompt template is, and why it is needed,
how to create a prompt template,
how to pass few shot examples to a prompt template,
how to select examples for a prompt template.
What is a prompt template?#
A prompt template refers to a reproducible way to generate a prompt. It contains a text string (“the template”) that can take in a set of parameters from the end user and generate a prompt.
The prompt template may contain:
instructions to the language model,
a set of few shot examples to help the language model generate a better response,
a question to the language model.
The following code snippet contains an example of a prompt template:
from langchain import PromptTemplate
template = """
I want you to act as a naming consultant for new companies.
What is a good name for a company that makes {product}?
"""
prompt = PromptTemplate(
input_variables=["product"],
template=template,
)
prompt.format(product="colorful socks")
# -> I want you to act as a naming consultant for new companies.
# -> What is a good name for a company that makes colorful socks?
Create a prompt template#
You can create simple hardcoded prompts using the PromptTemplate class. Prompt templates can take any number of input variables, and can be formatted to generate a prompt.
from langchain import PromptTemplate
# An example prompt with no input variables
no_input_prompt = PromptTemplate(input_variables=[], template="Tell me a joke.")
no_input_prompt.format()
# -> "Tell me a joke." | https://python.langchain.com/en/latest/modules/prompts/prompt_templates/getting_started.html |
2b0db51fc475-1 | no_input_prompt.format()
# -> "Tell me a joke."
# An example prompt with one input variable
one_input_prompt = PromptTemplate(input_variables=["adjective"], template="Tell me a {adjective} joke.")
one_input_prompt.format(adjective="funny")
# -> "Tell me a funny joke."
# An example prompt with multiple input variables
multiple_input_prompt = PromptTemplate(
input_variables=["adjective", "content"],
template="Tell me a {adjective} joke about {content}."
)
multiple_input_prompt.format(adjective="funny", content="chickens")
# -> "Tell me a funny joke about chickens."
If you do not wish to specify input_variables manually, you can also create a PromptTemplate using the from_template class method. langchain will automatically infer the input_variables based on the template passed.
template = "Tell me a {adjective} joke about {content}."
prompt_template = PromptTemplate.from_template(template)
prompt_template.input_variables
# -> ['adjective', 'content']
prompt_template.format(adjective="funny", content="chickens")
# -> Tell me a funny joke about chickens.
You can create custom prompt templates that format the prompt in any way you want. For more information, see Custom Prompt Templates.
Template formats#
By default, PromptTemplate will treat the provided template as a Python f-string. You can specify other template format through template_format argument:
# Make sure jinja2 is installed before running this
jinja2_template = "Tell me a {{ adjective }} joke about {{ content }}"
prompt_template = PromptTemplate.from_template(template=jinja2_template, template_format="jinja2")
prompt_template.format(adjective="funny", content="chickens")
# -> Tell me a funny joke about chickens.
Currently, PromptTemplate only supports jinja2 and f-string templating formats. If there is another templating format that you would like to use, feel free to open an issue on the GitHub page.
Validate template#
By default, PromptTemplate will validate the template string by checking whether the input_variables match the variables defined in the template. You can disable this behavior by setting validate_template to False.
template = "I am learning langchain because {reason}."
prompt_template = PromptTemplate(template=template,
input_variables=["reason", "foo"]) # ValueError due to extra variables
prompt_template = PromptTemplate(template=template,
input_variables=["reason", "foo"],
validate_template=False) # No error
Serialize prompt template#
You can save your PromptTemplate into a file in your local filesystem. langchain will automatically infer the file format from the file extension. Currently, langchain supports saving templates to YAML and JSON files.
prompt_template.save("awesome_prompt.json") # Save to JSON file
from langchain.prompts import load_prompt
loaded_prompt = load_prompt("awesome_prompt.json")
assert prompt_template == loaded_prompt
langchain also supports loading prompt template from LangChainHub, which contains a collection of useful prompts you can use in your project. You can read more about LangChainHub and the prompts available with it here.
from langchain.prompts import load_prompt
prompt = load_prompt("lc://prompts/conversation/prompt.json")
prompt.format(history="", input="What is 1 + 1?")
You can learn more about serializing prompt template in How to serialize prompts.
Pass few shot examples to a prompt template#
Few shot examples are a set of examples that can be used to help the language model generate a better response.
To generate a prompt with few shot examples, you can use the FewShotPromptTemplate. This class takes in a PromptTemplate and a list of few shot examples. It then formats the prompt template with the few shot examples.
In this example, we’ll create a prompt to generate word antonyms.
from langchain import PromptTemplate, FewShotPromptTemplate
# First, create the list of few shot examples.
examples = [
{"word": "happy", "antonym": "sad"},
{"word": "tall", "antonym": "short"},
]
# Next, we specify the template to format the examples we have provided.
# We use the `PromptTemplate` class for this.
example_formatter_template = """Word: {word}
Antonym: {antonym}
"""
example_prompt = PromptTemplate(
input_variables=["word", "antonym"],
template=example_formatter_template,
)
# Finally, we create the `FewShotPromptTemplate` object.
few_shot_prompt = FewShotPromptTemplate(
# These are the examples we want to insert into the prompt.
examples=examples,
# This is how we want to format the examples when we insert them into the prompt.
example_prompt=example_prompt,
# The prefix is some text that goes before the examples in the prompt.
# Usually, this consists of instructions.
prefix="Give the antonym of every input\n",
# The suffix is some text that goes after the examples in the prompt.
# Usually, this is where the user input will go.
suffix="Word: {input}\nAntonym: ",
# The input variables are the variables that the overall prompt expects.
input_variables=["input"], | https://python.langchain.com/en/latest/modules/prompts/prompt_templates/getting_started.html |
2b0db51fc475-4 | input_variables=["input"],
# The example_separator is the string we will use to join the prefix, examples, and suffix together with.
example_separator="\n",
)
# We can now generate a prompt using the `format` method.
print(few_shot_prompt.format(input="big"))
# -> Give the antonym of every input
# ->
# -> Word: happy
# -> Antonym: sad
# ->
# -> Word: tall
# -> Antonym: short
# ->
# -> Word: big
# -> Antonym:
Select examples for a prompt template#
If you have a large number of examples, you can use the ExampleSelector to select a subset of examples that will be most informative for the Language Model. This will help you generate a prompt that is more likely to generate a good response.
Below, we’ll use the LengthBasedExampleSelector, which selects examples based on the length of the input. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.
We’ll continue with the example from the previous section, but this time we’ll use the LengthBasedExampleSelector to select the examples.
from langchain.prompts.example_selector import LengthBasedExampleSelector
# These are a lot of examples of a pretend task of creating antonyms.
examples = [
{"word": "happy", "antonym": "sad"},
{"word": "tall", "antonym": "short"},
{"word": "energetic", "antonym": "lethargic"},
{"word": "sunny", "antonym": "gloomy"},
{"word": "windy", "antonym": "calm"},
]
# We'll use the `LengthBasedExampleSelector` to select the examples.
example_selector = LengthBasedExampleSelector(
# These are the examples it has available to choose from.
examples=examples,
# This is the PromptTemplate being used to format the examples.
example_prompt=example_prompt,
# This is the maximum length that the formatted examples should be.
# Length is measured by the get_text_length function below.
max_length=25
# This is the function used to get the length of a string, which is used
# to determine which examples to include. It is commented out because
# it is provided as a default value if none is specified.
# get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x))
)
# We can now use the `example_selector` to create a `FewShotPromptTemplate`.
dynamic_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Word: {input}\nAntonym:",
input_variables=["input"],
example_separator="\n\n",
)
# We can now generate a prompt using the `format` method.
print(dynamic_prompt.format(input="big"))
# -> Give the antonym of every input
# ->
# -> Word: happy
# -> Antonym: sad
# ->
# -> Word: tall
# -> Antonym: short
# ->
# -> Word: energetic
# -> Antonym: lethargic
# ->
# -> Word: sunny
# -> Antonym: gloomy
# ->
# -> Word: windy
# -> Antonym: calm
# ->
# -> Word: big
# -> Antonym:
In contrast, if we provide a very long input, the LengthBasedExampleSelector will select fewer examples to include in the prompt.
long_string = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else"
print(dynamic_prompt.format(input=long_string))
# -> Give the antonym of every input
# -> Word: happy
# -> Antonym: sad
# ->
# -> Word: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else
# -> Antonym:
LangChain comes with a few example selectors that you can use. For more details on how to use them, see Example Selectors.
You can create custom example selectors that select examples based on any criteria you want. For more details on how to do this, see Creating a custom example selector.
How-To Guides#
If you’re new to the library, you may want to start with the Quickstart.
The user guide here shows more advanced workflows and how to use the library in different ways.
Connecting to a Feature Store
How to create a custom prompt template
How to create a prompt template that uses few shot examples
How to work with partial Prompt Templates
How to serialize prompts
How to serialize prompts#
It is often preferable to store prompts not as Python code but as files. This can make it easy to share, store, and version prompts. This notebook covers how to do that in LangChain, walking through all the different types of prompts and the different serialization options.
At a high level, the following design principles are applied to serialization:
Both JSON and YAML are supported. We want to support serialization methods that are human readable on disk, and YAML and JSON are two of the most popular methods for that. Note that this rule applies to prompts. For other assets, like Examples, different serialization methods may be supported.
We support specifying everything in one file, or storing different components (templates, examples, etc) in different files and referencing them. For some cases, storing everything in one file makes the most sense, but for others it is preferable to split up some of the assets (long templates, large examples, reusable components). LangChain supports both.
There is also a single entry point to load prompts from disk, making it easy to load any type of prompt.
# All prompts are loaded through the `load_prompt` function.
from langchain.prompts import load_prompt
PromptTemplate#
This section covers examples for loading a PromptTemplate.
Loading from YAML#
This shows an example of loading a PromptTemplate from YAML.
!cat simple_prompt.yaml
_type: prompt
input_variables:
    ["adjective", "content"]
template:
    Tell me a {adjective} joke about {content}.
prompt = load_prompt("simple_prompt.yaml") | https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html |
d878248ba617-1 | prompt = load_prompt("simple_prompt.yaml")
print(prompt.format(adjective="funny", content="chickens"))
Tell me a funny joke about chickens.
Loading from JSON#
This shows an example of loading a PromptTemplate from JSON.
!cat simple_prompt.json
{
"_type": "prompt",
"input_variables": ["adjective", "content"],
"template": "Tell me a {adjective} joke about {content}."
}
prompt = load_prompt("simple_prompt.json")
print(prompt.format(adjective="funny", content="chickens"))
Tell me a funny joke about chickens.
Loading Template from a File#
This shows an example of storing the template in a separate file and then referencing it in the config. Notice that the key changes from template to template_path.
!cat simple_template.txt
Tell me a {adjective} joke about {content}.
!cat simple_prompt_with_template_file.json
{
"_type": "prompt",
"input_variables": ["adjective", "content"],
"template_path": "simple_template.txt"
}
prompt = load_prompt("simple_prompt_with_template_file.json")
print(prompt.format(adjective="funny", content="chickens"))
Tell me a funny joke about chickens.
FewShotPromptTemplate#
This section covers examples for loading few shot prompt templates.
Examples#
This shows an example of what examples stored as json might look like.
!cat examples.json
[
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"}
]
And here is what the same examples stored as yaml might look like.
!cat examples.yaml
- input: happy
  output: sad
- input: tall
  output: short
Loading from YAML#
This shows an example of loading a few shot example from YAML.
!cat few_shot_prompt.yaml
_type: few_shot
input_variables:
    ["adjective"]
prefix:
    Write antonyms for the following words.
example_prompt:
    _type: prompt
    input_variables:
        ["input", "output"]
    template:
        "Input: {input}\nOutput: {output}"
examples:
    examples.json
suffix:
    "Input: {adjective}\nOutput:"
prompt = load_prompt("few_shot_prompt.yaml")
print(prompt.format(adjective="funny"))
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
The same would work if you loaded examples from the yaml file.
!cat few_shot_prompt_yaml_examples.yaml
_type: few_shot
input_variables:
    ["adjective"]
prefix:
    Write antonyms for the following words.
example_prompt:
    _type: prompt
    input_variables:
        ["input", "output"]
    template:
        "Input: {input}\nOutput: {output}"
examples:
    examples.yaml
suffix:
    "Input: {adjective}\nOutput:"
prompt = load_prompt("few_shot_prompt_yaml_examples.yaml")
print(prompt.format(adjective="funny"))
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
Loading from JSON#
This shows an example of loading a few shot example from JSON.
!cat few_shot_prompt.json
{
"_type": "few_shot", | https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html |
d878248ba617-3 | !cat few_shot_prompt.json
{
"_type": "few_shot",
"input_variables": ["adjective"],
"prefix": "Write antonyms for the following words.",
"example_prompt": {
"_type": "prompt",
"input_variables": ["input", "output"],
"template": "Input: {input}\nOutput: {output}"
},
"examples": "examples.json",
"suffix": "Input: {adjective}\nOutput:"
}
prompt = load_prompt("few_shot_prompt.json")
print(prompt.format(adjective="funny"))
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
Examples in the Config#
This shows an example of referencing the examples directly in the config.
!cat few_shot_prompt_examples_in.json
{
"_type": "few_shot",
"input_variables": ["adjective"],
"prefix": "Write antonyms for the following words.",
"example_prompt": {
"_type": "prompt",
"input_variables": ["input", "output"],
"template": "Input: {input}\nOutput: {output}"
},
"examples": [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"}
],
"suffix": "Input: {adjective}\nOutput:"
}
prompt = load_prompt("few_shot_prompt_examples_in.json")
print(prompt.format(adjective="funny"))
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
Example Prompt from a File#
This shows an example of loading the PromptTemplate that is used to format the examples from a separate file. Note that the key changes from example_prompt to example_prompt_path.
!cat example_prompt.json
{
"_type": "prompt",
"input_variables": ["input", "output"],
"template": "Input: {input}\nOutput: {output}"
}
!cat few_shot_prompt_example_prompt.json
{
"_type": "few_shot",
"input_variables": ["adjective"],
"prefix": "Write antonyms for the following words.",
"example_prompt_path": "example_prompt.json",
"examples": "examples.json",
"suffix": "Input: {adjective}\nOutput:"
}
prompt = load_prompt("few_shot_prompt_example_prompt.json")
print(prompt.format(adjective="funny"))
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
PromptTemplate with OutputParser#
This shows an example of loading a prompt along with an OutputParser from a file.
! cat prompt_with_output_parser.json
{
"input_variables": [
"question",
"student_answer"
],
"output_parser": {
"regex": "(.*?)\\nScore: (.*)",
"output_keys": [
"answer",
"score"
],
"default_output_key": null,
"_type": "regex_parser"
},
"partial_variables": {}, | https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html |
d878248ba617-5 | "_type": "regex_parser"
},
"partial_variables": {},
"template": "Given the following question and student answer, provide a correct answer and score the student answer.\nQuestion: {question}\nStudent Answer: {student_answer}\nCorrect Answer:",
"template_format": "f-string",
"validate_template": true,
"_type": "prompt"
}
prompt = load_prompt("prompt_with_output_parser.json")
prompt.output_parser.parse("George Washington was born in 1732 and died in 1799.\nScore: 1/2")
{'answer': 'George Washington was born in 1732 and died in 1799.',
'score': '1/2'}
How to work with partial Prompt Templates#
A prompt template is a class with a .format method which takes in a key-value map and returns a string (a prompt) to pass to the language model. Like other methods, it can make sense to “partial” a prompt template, e.g. pass in a subset of the required values, so as to create a new prompt template which expects only the remaining subset of values.
LangChain supports this in two ways: we allow for partially formatted prompts (1) with string values, (2) with functions that return string values. These two different ways support different use cases. In the documentation below we go over the motivations for both use cases as well as how to do it in LangChain.
Partial With Strings#
One common use case for wanting to partial a prompt template is if you get some of the variables before others. For example, suppose you have a prompt template that requires two variables, foo and baz. If you get the foo value early on in the chain, but the baz value later, it can be annoying to wait until you have both variables in the same place to pass them to the prompt template. Instead, you can partial the prompt template with the foo value, and then pass the partialed prompt template along and just use that. Below is an example of doing this:
from langchain.prompts import PromptTemplate
prompt = PromptTemplate(template="{foo}{bar}", input_variables=["foo", "bar"])
partial_prompt = prompt.partial(foo="foo")
print(partial_prompt.format(bar="baz"))
foobaz
You can also just initialize the prompt with the partialed variables.
prompt = PromptTemplate(template="{foo}{bar}", input_variables=["bar"], partial_variables={"foo": "foo"})
print(prompt.format(bar="baz"))
foobaz
Partial With Functions#
The other common use is to partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can’t hard code it in the prompt, and passing it along with the other input variables is a bit annoying. In this case, it’s very handy to be able to partial the prompt with a function that always returns the current date.
from datetime import datetime
def _get_datetime():
    now = datetime.now()
    return now.strftime("%m/%d/%Y, %H:%M:%S")
prompt = PromptTemplate(
template="Tell me a {adjective} joke about the day {date}",
input_variables=["adjective", "date"]
)
partial_prompt = prompt.partial(date=_get_datetime)
print(partial_prompt.format(adjective="funny"))
Tell me a funny joke about the day 02/27/2023, 22:15:16
You can also just initialize the prompt with the partialed variables, which often makes more sense in this workflow.
prompt = PromptTemplate(
template="Tell me a {adjective} joke about the day {date}",
input_variables=["adjective"],
partial_variables={"date": _get_datetime}
)
print(prompt.format(adjective="funny"))
Tell me a funny joke about the day 02/27/2023, 22:15:16
How to create a custom prompt template#
Let’s suppose we want the LLM to generate English language explanations of a function given its name. To achieve this task, we will create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function.
Why are custom prompt templates needed?#
LangChain provides a set of default prompt templates that can be used to generate prompts for a variety of tasks. However, there may be cases where the default prompt templates do not meet your needs. For example, you may want to create a prompt template with specific dynamic instructions for your language model. In such cases, you can create a custom prompt template.
Take a look at the current set of default prompt templates here.
Creating a Custom Prompt Template#
There are essentially two distinct prompt templates available: string prompt templates and chat prompt templates. String prompt templates provide a simple prompt in string format, while chat prompt templates produce a more structured prompt to be used with a chat API.
In this guide, we will create a custom prompt using a string prompt template.
To create a custom string prompt template, there are two requirements:
It has an input_variables attribute that exposes what input variables the prompt template expects.
It exposes a format method that takes in keyword arguments corresponding to the expected input_variables and returns the formatted prompt.
We will create a custom prompt template that takes in the function name as input and formats the prompt to provide the source code of the function. To achieve this, let’s first create a function that will return the source code of a function given its name.
import inspect
def get_source_code(function_name):
    # Get the source code of the function
    return inspect.getsource(function_name)
Next, we’ll create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function.
from langchain.prompts import StringPromptTemplate
from pydantic import BaseModel, validator
class FunctionExplainerPromptTemplate(StringPromptTemplate, BaseModel):
    """A custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function."""

    @validator("input_variables")
    def validate_input_variables(cls, v):
        """Validate that the input variables are correct."""
        if len(v) != 1 or "function_name" not in v:
            raise ValueError("function_name must be the only input_variable.")
        return v

    def format(self, **kwargs) -> str:
        # Get the source code of the function
        source_code = get_source_code(kwargs["function_name"])
        # Generate the prompt to be sent to the language model
        prompt = f"""
Given the function name and source code, generate an English language explanation of the function.
Function Name: {kwargs["function_name"].__name__}
Source Code:
{source_code}
Explanation:
"""
        return prompt

    def _prompt_type(self):
        return "function-explainer"
Use the custom prompt template#
Now that we have created a custom prompt template, we can use it to generate prompts for our task.
fn_explainer = FunctionExplainerPromptTemplate(input_variables=["function_name"])
# Generate a prompt for the function "get_source_code"
prompt = fn_explainer.format(function_name=get_source_code)
print(prompt)
Given the function name and source code, generate an English language explanation of the function.
Function Name: get_source_code
Source Code:
def get_source_code(function_name):
# Get the source code of the function
return inspect.getsource(function_name)
Explanation:
Connecting to a Feature Store#
Feature stores are a concept from traditional machine learning that make sure data fed into models is up-to-date and relevant. For more on this, see here.
This concept is extremely relevant when considering putting LLM applications in production. In order to personalize LLM applications, you may want to combine LLMs with up-to-date information about particular users. Feature stores can be a great way to keep that data fresh, and LangChain provides an easy way to combine that data with LLMs.
In this notebook we will show how to connect prompt templates to feature stores. The basic idea is to call a feature store from inside a prompt template to retrieve values that are then formatted into the prompt.
Feast#
To start, we will use the popular open source feature store framework Feast.
This assumes you have already run the steps in the README around getting started. We will build off of that getting started example and create an LLMChain to write a note to a specific driver regarding their up-to-date statistics.
Load Feast Store#
Again, this should be set up according to the instructions in the Feast README.
from feast import FeatureStore
# You may need to update the path depending on where you stored it
feast_repo_path = "../../../../../my_feature_repo/feature_repo/"
store = FeatureStore(repo_path=feast_repo_path)
Prompts#
Here we will set up a custom FeastPromptTemplate. This prompt template will take in a driver id, look up their stats, and format those stats into a prompt.
Note that the input to this prompt template is just driver_id, since that is the only user defined piece (all other variables are looked up inside the prompt template).
from langchain.prompts import PromptTemplate, StringPromptTemplate
template = """Given the driver's up to date stats, write them note relaying those stats to them.
If they have a conversation rate above .5, give them a compliment. Otherwise, make a silly joke about chickens at the end to make them feel better
Here are the driver's stats:
Conversation rate: {conv_rate}
Acceptance rate: {acc_rate}
Average Daily Trips: {avg_daily_trips}
Your response:"""
prompt = PromptTemplate.from_template(template)
class FeastPromptTemplate(StringPromptTemplate):
    def format(self, **kwargs) -> str:
        driver_id = kwargs.pop("driver_id")
        feature_vector = store.get_online_features(
            features=[
                'driver_hourly_stats:conv_rate',
                'driver_hourly_stats:acc_rate',
                'driver_hourly_stats:avg_daily_trips'
            ],
            entity_rows=[{"driver_id": driver_id}]
        ).to_dict()
        kwargs["conv_rate"] = feature_vector["conv_rate"][0]
        kwargs["acc_rate"] = feature_vector["acc_rate"][0]
        kwargs["avg_daily_trips"] = feature_vector["avg_daily_trips"][0]
        return prompt.format(**kwargs)
prompt_template = FeastPromptTemplate(input_variables=["driver_id"])
print(prompt_template.format(driver_id=1001))
Given the driver's up to date stats, write them a note relaying those stats to them.
If they have a conversation rate above .5, give them a compliment. Otherwise, make a silly joke about chickens at the end to make them feel better
Here are the driver's stats:
Conversation rate: 0.4745151400566101
Acceptance rate: 0.055561766028404236
Average Daily Trips: 936
Your response:
Use in a chain#
We can now use this in a chain, successfully creating a chain that achieves personalization backed by a feature store.
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
chain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)
chain.run(1001)
"Hi there! I wanted to update you on your current stats. Your acceptance rate is 0.055561766028404236 and your average daily trips are 936. While your conversation rate is currently 0.4745151400566101, I have no doubt that with a little extra effort, you'll be able to exceed that .5 mark! Keep up the great work! And remember, even chickens can't always cross the road, but they still give it their best shot."
Tecton#
Above, we showed how you could use Feast, a popular open source and self-managed feature store, with LangChain. Our examples below will show a similar integration using Tecton. Tecton is a fully managed feature platform built to orchestrate the complete ML feature lifecycle, from transformation to online serving, with enterprise-grade SLAs.
Prerequisites#
Tecton Deployment (sign up at https://tecton.ai)
TECTON_API_KEY environment variable set to a valid Service Account key
Define and Load Features#
We will use the user_transaction_counts Feature View from the Tecton tutorial as part of a Feature Service. For simplicity, we are only using a single Feature View; however, more sophisticated applications may require more feature views to retrieve the features needed for their prompts.
user_transaction_metrics = FeatureService(
name = "user_transaction_metrics",
features = [user_transaction_counts]
)
The above Feature Service is expected to be applied to a live workspace. For this example, we will be using the “prod” workspace.
import tecton
workspace = tecton.get_workspace("prod")
feature_service = workspace.get_feature_service("user_transaction_metrics")
Prompts#
Here we will set up a custom TectonPromptTemplate. This prompt template will take in a user_id, look up their stats, and format those stats into a prompt.
Note that the input to this prompt template is just user_id, since that is the only user-defined piece (all other variables are looked up inside the prompt template).
from langchain.prompts import PromptTemplate, StringPromptTemplate
template = """Given the vendor's up to date transaction stats, write them a note based on the following rules:
1. If they had a transaction in the last day, write a short congratulations message on their recent sales
2. If no transaction in the last day, but they had a transaction in the last 30 days, playfully encourage them to sell more.
3. Always add a silly joke about chickens at the end
Here are the vendor's stats:
Number of Transactions Last Day: {transaction_count_1d}
Number of Transactions Last 30 Days: {transaction_count_30d}
Your response:"""
prompt = PromptTemplate.from_template(template)
class TectonPromptTemplate(StringPromptTemplate):
def format(self, **kwargs) -> str:
user_id = kwargs.pop("user_id")
feature_vector = feature_service.get_online_features(join_keys={"user_id": user_id}).to_dict()
kwargs["transaction_count_1d"] = feature_vector["user_transaction_counts.transaction_count_1d_1d"] | https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html |
f2b5fe25e8d4-4 | kwargs["transaction_count_30d"] = feature_vector["user_transaction_counts.transaction_count_30d_1d"]
return prompt.format(**kwargs)
prompt_template = TectonPromptTemplate(input_variables=["user_id"])
print(prompt_template.format(user_id="user_469998441571"))
Given the vendor's up to date transaction stats, write them a note based on the following rules:
1. If they had a transaction in the last day, write a short congratulations message on their recent sales
2. If no transaction in the last day, but they had a transaction in the last 30 days, playfully encourage them to sell more.
3. Always add a silly joke about chickens at the end
Here are the vendor's stats:
Number of Transactions Last Day: 657
Number of Transactions Last 30 Days: 20326
Your response:
Use in a chain#
We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Tecton Feature Platform.
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
chain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)
chain.run("user_469998441571")
'Wow, congratulations on your recent sales! Your business is really soaring like a chicken on a hot air balloon! Keep up the great work!'
Featureform#
Finally, we will use Featureform, an open-source and enterprise-grade feature store, to run the same example. Featureform allows you to define your feature transformations on your existing infrastructure, like Spark, or locally.
Initialize Featureform#
You can follow the instructions in the README to initialize your transformations and features in Featureform.
import featureform as ff
client = ff.Client(host="demo.featureform.com")
Prompts#
Here we will set up a custom FeatureformPromptTemplate. This prompt template will take in a user_id, look up the average amount that user pays per transaction, and format it into a prompt.
Note that the input to this prompt template is just user_id, since that is the only user-defined piece (all other variables are looked up inside the prompt template).
from langchain.prompts import PromptTemplate, StringPromptTemplate
template = """Given the amount a user spends on average per transaction, let them know if they are a high roller. Otherwise, make a silly joke about chickens at the end to make them feel better
Here are the user's stats:
Average Amount per Transaction: ${avg_transaction}
Your response:"""
prompt = PromptTemplate.from_template(template)
class FeatureformPromptTemplate(StringPromptTemplate):
def format(self, **kwargs) -> str:
user_id = kwargs.pop("user_id")
        # Look up the user's average transaction amount from Featureform.
        # Assumption: the client returns the requested feature values as a list.
        fpf = client.features([("avg_transactions", "quickstart")], {"user": user_id})
        kwargs["avg_transaction"] = fpf[0]
        return prompt.format(**kwargs)
prompt_template = FeatureformPromptTemplate(input_variables=["user_id"])
print(prompt_template.format(user_id="C1410926"))
Use in a chain#
We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Featureform Feature Platform.
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
chain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)
chain.run("C1410926")
.ipynb
.pdf
How to create a prompt template that uses few shot examples
Contents
Use Case
Using an example set
Create the example set
Create a formatter for the few shot examples
Feed examples and formatter to FewShotPromptTemplate
Using an example selector
Feed examples into ExampleSelector
Feed example selector into FewShotPromptTemplate
How to create a prompt template that uses few shot examples#
In this tutorial, we’ll learn how to create a prompt template that uses few shot examples.
We’ll use the FewShotPromptTemplate class to create a prompt template that uses few shot examples. This class either takes in a set of examples, or an ExampleSelector object. In this tutorial, we’ll go over both options.
Use Case#
In this tutorial, we’ll configure few shot examples for self-ask with search.
Using an example set#
Create the example set#
To get started, create a list of few shot examples. Each example should be a dictionary with the keys being the input variables and the values being the values for those input variables.
from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.prompt import PromptTemplate
examples = [
{
"question": "Who lived longer, Muhammad Ali or Alan Turing?",
"answer":
"""
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
"""
},
{
"question": "When was the founder of craigslist born?",
"answer":
"""
Are follow up questions needed here: Yes.
Follow up: Who was the founder of craigslist?
Intermediate answer: Craigslist was founded by Craig Newmark.
Follow up: When was Craig Newmark born?
Intermediate answer: Craig Newmark was born on December 6, 1952.
So the final answer is: December 6, 1952
"""
},
{
"question": "Who was the maternal grandfather of George Washington?",
"answer":
"""
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
"""
},
{
"question": "Are both the directors of Jaws and Casino Royale from the same country?",
"answer":
"""
Are follow up questions needed here: Yes.
Follow up: Who is the director of Jaws?
Intermediate Answer: The director of Jaws is Steven Spielberg.
Follow up: Where is Steven Spielberg from?
Intermediate Answer: The United States.
Follow up: Who is the director of Casino Royale?
Intermediate Answer: The director of Casino Royale is Martin Campbell.
Follow up: Where is Martin Campbell from?
Intermediate Answer: New Zealand.
So the final answer is: No
"""
}
]
Create a formatter for the few shot examples#
Configure a formatter that will format the few shot examples into a string. This formatter should be a PromptTemplate object.
example_prompt = PromptTemplate(input_variables=["question", "answer"], template="Question: {question}\n{answer}")
print(example_prompt.format(**examples[0]))
Question: Who lived longer, Muhammad Ali or Alan Turing?
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
Feed examples and formatter to FewShotPromptTemplate#
Finally, create a FewShotPromptTemplate object. This object takes in the few shot examples and the formatter for the few shot examples.
prompt = FewShotPromptTemplate(
examples=examples,
example_prompt=example_prompt,
suffix="Question: {input}",
input_variables=["input"]
)
print(prompt.format(input="Who was the father of Mary Ball Washington?"))
Question: Who lived longer, Muhammad Ali or Alan Turing?
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
Question: When was the founder of craigslist born?
Are follow up questions needed here: Yes.
Follow up: Who was the founder of craigslist?
Intermediate answer: Craigslist was founded by Craig Newmark.
Follow up: When was Craig Newmark born?
Intermediate answer: Craig Newmark was born on December 6, 1952.
So the final answer is: December 6, 1952
Question: Who was the maternal grandfather of George Washington?
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
Question: Are both the directors of Jaws and Casino Royale from the same country?
Are follow up questions needed here: Yes.
Follow up: Who is the director of Jaws?
Intermediate Answer: The director of Jaws is Steven Spielberg.
Follow up: Where is Steven Spielberg from?
Intermediate Answer: The United States.
Follow up: Who is the director of Casino Royale?
Intermediate Answer: The director of Casino Royale is Martin Campbell.
Follow up: Where is Martin Campbell from?
Intermediate Answer: New Zealand.
So the final answer is: No
Question: Who was the father of Mary Ball Washington?
Using an example selector#
Feed examples into ExampleSelector#
We will reuse the example set and the formatter from the previous section. However, instead of feeding the examples directly into the FewShotPromptTemplate object, we will feed them into an ExampleSelector object.
In this tutorial, we will use the SemanticSimilarityExampleSelector class. This class selects few shot examples based on their similarity to the input. It uses an embedding model to compute the similarity between the input and the few shot examples, as well as a vector store to perform the nearest neighbor search.
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
example_selector = SemanticSimilarityExampleSelector.from_examples(
# This is the list of examples available to select from.
    examples,
# This is the embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(),
# This is the VectorStore class that is used to store the embeddings and do a similarity search over.
Chroma,
# This is the number of examples to produce.
k=1
)
# Select the most similar example to the input.
question = "Who was the father of Mary Ball Washington?"
selected_examples = example_selector.select_examples({"question": question})
print(f"Examples most similar to the input: {question}")
for example in selected_examples:
print("\n")
for k, v in example.items():
print(f"{k}: {v}")
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
Examples most similar to the input: Who was the father of Mary Ball Washington?
question: Who was the maternal grandfather of George Washington?
answer:
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
Feed example selector into FewShotPromptTemplate#
Finally, create a FewShotPromptTemplate object. This object takes in the example selector and the formatter for the few shot examples.
prompt = FewShotPromptTemplate(
example_selector=example_selector,
example_prompt=example_prompt,
suffix="Question: {input}",
input_variables=["input"]
)
print(prompt.format(input="Who was the father of Mary Ball Washington?"))
Question: Who was the maternal grandfather of George Washington?
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
Question: Who was the father of Mary Ball Washington?
.ipynb
.pdf
Output Parsers
Output Parsers#
Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.
Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:
get_format_instructions() -> str: A method which returns a string containing instructions for how the output of a language model should be formatted.
parse(str) -> Any: A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.
And then one optional one:
parse_with_prompt(str, PromptValue) -> Any: A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
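To make the interface concrete, here is a minimal sketch of a custom parser. The class name and format instructions are illustrative, not part of LangChain; it is similar in spirit to the built-in CommaSeparatedListOutputParser covered later.
from typing import List
from langchain.schema import BaseOutputParser
class SimpleListOutputParser(BaseOutputParser):
    """Hypothetical parser that expects a comma-separated list."""
    def get_format_instructions(self) -> str:
        # Instructions injected into the prompt so the model formats its answer correctly.
        return "Your response should be a comma-separated list, e.g. `foo, bar, baz`"
    def parse(self, text: str) -> List[str]:
        # Turn the raw model output into a Python list of strings.
        return [item.strip() for item in text.strip().split(",")]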
Below we go over the main type of output parser, the PydanticOutputParser. See the examples folder for other options.
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List
model_name = 'text-davinci-003'
temperature = 0.0
model = OpenAI(model_name=model_name, temperature=temperature)
# Define your desired data structure.
class Joke(BaseModel):
setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")
# You can add custom validation logic easily with Pydantic.
@validator('setup')
def question_ends_with_question_mark(cls, field):
if field[-1] != '?':
raise ValueError("Badly formed question!")
return field
# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."
_input = prompt.format_prompt(query=joke_query)
output = model(_input.to_string())
parser.parse(output)
Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')
.ipynb
.pdf
RetryOutputParser
RetryOutputParser#
While in some cases it is possible to fix a parsing mistake by looking only at the output, in other cases it isn't. An example of this is when the output is not just in the incorrect format, but is only partially complete. Consider the example below.
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser, OutputFixingParser, RetryOutputParser
from pydantic import BaseModel, Field, validator
from typing import List
template = """Based on the user question, provide an Action and Action Input for what step should be taken.
{format_instructions}
Question: {query}
Response:"""
class Action(BaseModel):
action: str = Field(description="action to take")
action_input: str = Field(description="input to the action")
parser = PydanticOutputParser(pydantic_object=Action)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
prompt_value = prompt.format_prompt(query="who is leo di caprios gf?")
bad_response = '{"action": "search"}'
If we try to parse this response as is, we will get an error
parser.parse(bad_response)
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:24, in PydanticOutputParser.parse(self, text)
     23 json_object = json.loads(json_str)
---> 24 return self.pydantic_object.parse_obj(json_object)
26 except (json.JSONDecodeError, ValidationError) as e:
File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:527, in pydantic.main.BaseModel.parse_obj()
File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:342, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for Action
action_input
field required (type=value_error.missing)
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
Cell In[6], line 1
----> 1 parser.parse(bad_response)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text)
27 name = self.pydantic_object.__name__
28 msg = f"Failed to parse {name} from completion {text}. Got: {e}"
---> 29 raise OutputParserException(msg)
OutputParserException: Failed to parse Action from completion {"action": "search"}. Got: 1 validation error for Action
action_input
field required (type=value_error.missing)
If we try to use the OutputFixingParser to fix this error, it will be confused - namely, it doesn’t know what to actually put for action input.
fix_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())
fix_parser.parse(bad_response)
Action(action='search', action_input='')
Instead, we can use the RetryOutputParser (in its RetryWithErrorOutputParser variant below), which passes the prompt, as well as the original output, back to the LLM to try again and get a better response.
from langchain.output_parsers import RetryWithErrorOutputParser
retry_parser = RetryWithErrorOutputParser.from_llm(parser=parser, llm=OpenAI(temperature=0))
retry_parser.parse_with_prompt(bad_response, prompt_value)
Action(action='search', action_input='who is leo di caprios gf?')
.ipynb
.pdf
OutputFixingParser
OutputFixingParser#
This output parser wraps another output parser and tries to fix any mistakes.
The Pydantic guardrail simply tries to parse the LLM response. If it does not parse correctly, then it errors.
But we can do other things besides throw errors. Specifically, we can pass the misformatted output, along with the format instructions, to the model and ask it to fix it.
For this example, we'll use a PydanticOutputParser. Here's what happens if we pass it a result that does not comply with the schema:
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List
class Actor(BaseModel):
name: str = Field(description="name of an actor")
film_names: List[str] = Field(description="list of names of films they starred in")
actor_query = "Generate the filmography for a random actor."
parser = PydanticOutputParser(pydantic_object=Actor)
misformatted = "{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}"
parser.parse(misformatted)
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:23, in PydanticOutputParser.parse(self, text)
22 json_str = match.group()
---> 23 json_object = json.loads(json_str)
     24 return self.pydantic_object.parse_obj(json_object)
File ~/.pyenv/versions/3.9.1/lib/python3.9/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
343 if (cls is None and object_hook is None and
344 parse_int is None and parse_float is None and
345 parse_constant is None and object_pairs_hook is None and not kw):
--> 346 return _default_decoder.decode(s)
347 if cls is None:
File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
333 """Return the Python representation of ``s`` (a ``str`` instance
334 containing a JSON document).
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:353, in JSONDecoder.raw_decode(self, s, idx)
352 try:
--> 353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
Cell In[6], line 1
----> 1 parser.parse(misformatted)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text)
27 name = self.pydantic_object.__name__
28 msg = f"Failed to parse {name} from completion {text}. Got: {e}"
---> 29 raise OutputParserException(msg)
OutputParserException: Failed to parse Actor from completion {'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}. Got: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
Now we can construct and use a OutputFixingParser. This output parser takes as an argument another output parser but also an LLM with which to try to correct any formatting mistakes.
from langchain.output_parsers import OutputFixingParser
new_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())
new_parser.parse(misformatted)
Actor(name='Tom Hanks', film_names=['Forrest Gump'])
.ipynb
.pdf
CommaSeparatedListOutputParser
CommaSeparatedListOutputParser#
Here's another parser that is strictly less powerful than Pydantic/JSON parsing.
from langchain.output_parsers import CommaSeparatedListOutputParser
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
output_parser = CommaSeparatedListOutputParser()
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(
template="List five {subject}.\n{format_instructions}",
input_variables=["subject"],
partial_variables={"format_instructions": format_instructions}
)
model = OpenAI(temperature=0)
_input = prompt.format(subject="ice cream flavors")
output = model(_input)
output_parser.parse(output)
['Vanilla',
'Chocolate',
'Strawberry',
'Mint Chocolate Chip',
'Cookies and Cream']
.ipynb
.pdf
Enum Output Parser
Enum Output Parser#
This notebook shows how to use an Enum output parser
from langchain.output_parsers.enum import EnumOutputParser
from enum import Enum
class Colors(Enum):
RED = "red"
GREEN = "green"
BLUE = "blue"
parser = EnumOutputParser(enum=Colors)
parser.parse("red")
<Colors.RED: 'red'>
# Can handle spaces
parser.parse(" green")
<Colors.GREEN: 'green'>
# And new lines
parser.parse("blue\n")
<Colors.BLUE: 'blue'>
# And raises errors when appropriate
parser.parse("yellow")
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File ~/workplace/langchain/langchain/output_parsers/enum.py:25, in EnumOutputParser.parse(self, response)
24 try:
---> 25 return self.enum(response.strip())
26 except ValueError:
File ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:315, in EnumMeta.__call__(cls, value, names, module, qualname, type, start)
314 if names is None: # simple value lookup
--> 315 return cls.__new__(cls, value)
316 # otherwise, functional API: we're creating a new Enum type
File ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:611, in Enum.__new__(cls, value)
610 if result is None and exc is None:
--> 611 raise ve_exc
612 elif exc is None:
ValueError: 'yellow' is not a valid Colors
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
Cell In[8], line 2
1 # And raises errors when appropriate
----> 2 parser.parse("yellow")
File ~/workplace/langchain/langchain/output_parsers/enum.py:27, in EnumOutputParser.parse(self, response)
25 return self.enum(response.strip())
26 except ValueError:
---> 27 raise OutputParserException(
28 f"Response '{response}' is not one of the "
29 f"expected values: {self._valid_values}"
30 )
OutputParserException: Response 'yellow' is not one of the expected values: ['red', 'green', 'blue']
.ipynb
.pdf
Structured Output Parser
Structured Output Parser#
While the Pydantic/JSON parser is more powerful, we initially experimented with data structures having text fields only.
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
Here we define the response schema we want to receive.
response_schemas = [
ResponseSchema(name="answer", description="answer to the user's question"),
ResponseSchema(name="source", description="source used to answer the user's question, should be a website.")
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
We now get a string that contains instructions for how the response should be formatted, and we then insert that into our prompt.
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(
template="answer the users question as best as possible.\n{format_instructions}\n{question}",
input_variables=["question"],
partial_variables={"format_instructions": format_instructions}
)
We can now use this to format a prompt to send to the language model, and then parse the returned result.
model = OpenAI(temperature=0)
_input = prompt.format_prompt(question="what's the capital of france?")
output = model(_input.to_string())
output_parser.parse(output)
{'answer': 'Paris',
'source': 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html'}
And here’s an example of using this in a chat model
chat_model = ChatOpenAI(temperature=0)
prompt = ChatPromptTemplate(
    messages=[
HumanMessagePromptTemplate.from_template("answer the users question as best as possible.\n{format_instructions}\n{question}")
],
input_variables=["question"],
partial_variables={"format_instructions": format_instructions}
)
_input = prompt.format_prompt(question="what's the capital of france?")
output = chat_model(_input.to_messages())
output_parser.parse(output.content)
{'answer': 'Paris', 'source': 'https://en.wikipedia.org/wiki/Paris'}
.ipynb
.pdf
PydanticOutputParser
PydanticOutputParser#
This output parser allows users to specify an arbitrary JSON schema and query LLMs for JSON outputs that conform to that schema.
Keep in mind that large language models are leaky abstractions! You'll have to use an LLM with sufficient capacity to generate well-formed JSON. In the OpenAI family, DaVinci can do this reliably, but Curie's ability already drops off dramatically.
Use Pydantic to declare your data model. Pydantic's BaseModel is like a Python dataclass, but with actual type checking + coercion.
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List
model_name = 'text-davinci-003'
temperature = 0.0
model = OpenAI(model_name=model_name, temperature=temperature)
# Define your desired data structure.
class Joke(BaseModel):
setup: str = Field(description="question to set up a joke")
punchline: str = Field(description="answer to resolve the joke")
# You can add custom validation logic easily with Pydantic.
@validator('setup')
def question_ends_with_question_mark(cls, field):
if field[-1] != '?':
raise ValueError("Badly formed question!")
return field
# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."
# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
_input = prompt.format_prompt(query=joke_query)
output = model(_input.to_string())
parser.parse(output)
Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')
# Here's another example, but with a compound typed field.
class Actor(BaseModel):
name: str = Field(description="name of an actor")
film_names: List[str] = Field(description="list of names of films they starred in")
actor_query = "Generate the filmography for a random actor."
parser = PydanticOutputParser(pydantic_object=Actor)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
_input = prompt.format_prompt(query=actor_query)
output = model(_input.to_string())
parser.parse(output)
Actor(name='Tom Hanks', film_names=['Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Cast Away', 'Toy Story'])
.ipynb
.pdf
Getting Started
Contents
One Line Index Creation
Walkthrough
Getting Started#
LangChain primarily focuses on constructing indexes with the goal of using them as a Retriever. In order to best understand what this means, it’s worth highlighting what the base Retriever interface is. The BaseRetriever class in LangChain is as follows:
from abc import ABC, abstractmethod
from typing import List
from langchain.schema import Document
class BaseRetriever(ABC):
@abstractmethod
def get_relevant_documents(self, query: str) -> List[Document]:
"""Get texts relevant for a query.
Args:
query: string to find relevant texts for
Returns:
List of relevant documents
"""
It’s that simple! The get_relevant_documents method can be implemented however you see fit.
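For example, here is a minimal sketch of a hypothetical retriever that does naive keyword matching over an in-memory list of documents (illustrative only; a real retriever would typically use embeddings or an index):
class KeywordRetriever(BaseRetriever):
    """Hypothetical retriever returning documents whose text contains the query."""
    def __init__(self, documents: List[Document]):
        self.documents = documents
    def get_relevant_documents(self, query: str) -> List[Document]:
        # Naive substring match against each document's page_content.
        return [doc for doc in self.documents if query.lower() in doc.page_content.lower()]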
Of course, we also help construct what we think are useful Retrievers. The main type of Retriever that we focus on is a Vectorstore retriever. We will focus on that for the rest of this guide.
In order to understand what a vectorstore retriever is, it’s important to understand what a Vectorstore is. So let’s look at that.
By default, LangChain uses Chroma as the vectorstore to index and search embeddings. To walk through this tutorial, we’ll first need to install chromadb.
pip install chromadb
This example showcases question answering over documents.
We have chosen this as the example for getting started because it nicely combines a lot of different elements (Text splitters, embeddings, vectorstores) and then also shows how to use them in a chain.
Question answering over documents consists of four steps:
Create an index
Create a Retriever from that index
Create a question answering chain
Ask questions!
Each of the steps has multiple sub-steps and potential configurations. In this notebook we will primarily focus on (1). We will start by showing the one-liner for doing so, but then break down what is actually going on.
First, let’s import some common classes we’ll use no matter what.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
Next in the generic setup, let’s specify the document loader we want to use. You can download the state_of_the_union.txt file here
from langchain.document_loaders import TextLoader
loader = TextLoader('../state_of_the_union.txt', encoding='utf8')
One Line Index Creation#
To get started as quickly as possible, we can use the VectorstoreIndexCreator.
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
Now that the index is created, we can use it to ask questions of the data! Note that under the hood this is actually doing a few steps as well, which we will cover later in this guide.
query = "What did the president say about Ketanji Brown Jackson"
index.query(query)
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query)
{'question': 'What did the president say about Ketanji Brown Jackson',
'answer': " The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, one of the nation's top legal minds, to continue Justice Breyer's legacy of excellence, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\n",
'sources': '../state_of_the_union.txt'}
What is returned from the VectorstoreIndexCreator is a VectorStoreIndexWrapper, which provides the nice query and query_with_sources functionality. If we just wanted to access the vectorstore directly, we can also do that.
index.vectorstore
<langchain.vectorstores.chroma.Chroma at 0x119aa5940>
If we then want to access the VectorstoreRetriever, we can do that with:
index.vectorstore.as_retriever()
VectorStoreRetriever(vectorstore=<langchain.vectorstores.chroma.Chroma object at 0x119aa5940>, search_kwargs={})
Walkthrough#
Okay, so what’s actually going on? How is this index getting created?
A lot of the magic is being hidden in this VectorstoreIndexCreator. What is it doing?
There are three main steps going on after the documents are loaded:
Splitting documents into chunks
Creating embeddings for each document
Storing documents and embeddings in a vectorstore
Let’s walk through this in code
documents = loader.load()
Next, we will split the documents into chunks.
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
We will then select which embeddings we want to use.
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
We now create the vectorstore to use as the index.
from langchain.vectorstores import Chroma
db = Chroma.from_documents(texts, embeddings)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
So that’s creating the index. Then, we expose this index in a retriever interface.
retriever = db.as_retriever()
Then, as before, we create a chain and use it to answer questions!
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
" The President said that Judge Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He said she is a consensus builder and has received a broad range of support from organizations such as the Fraternal Order of Police and former judges appointed by Democrats and Republicans."
VectorstoreIndexCreator is just a wrapper around all this logic. It is configurable in the text splitter it uses, the embeddings it uses, and the vectorstore it uses. For example, you can configure it as below:
index_creator = VectorstoreIndexCreator(
vectorstore_cls=Chroma,
embedding=OpenAIEmbeddings(),
text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
)
Hopefully this highlights what is going on under the hood of VectorstoreIndexCreator. While we think it’s important to have a simple way to create indexes, we also think it’s important to understand what’s going on under the hood.
.rst
.pdf
Document Loaders
Contents
Transform loaders
Public dataset or service loaders
Proprietary dataset or service loaders
Document Loaders#
Note
Conceptual Guide
Combining language models with your own text data is a powerful way to differentiate them.
The first step in doing this is to load the data into “Documents” - a fancy way of saying some pieces of text.
The document loader is aimed at making this easy.
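As a minimal sketch (the file name is a placeholder), loading a plain text file into Documents looks like this:
from langchain.document_loaders import TextLoader
loader = TextLoader("my_notes.txt")  # placeholder path
documents = loader.load()  # returns a list of Document objects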
The following document loaders are provided:
Transform loaders#
These transform loaders transform data from a specific format into the Document format.
For example, there are transformers for CSV and SQL.
Mostly, these loaders input data from files but sometimes from URLs.
A primary driver of a lot of these transformers is the Unstructured python package.
This package transforms many types of files - text, powerpoint, images, html, pdf, etc - into text data.
For detailed instructions on how to get set up with Unstructured, see installation guidelines here.
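For instance, a minimal sketch using the Unstructured-backed loader (the file name is a placeholder; many file types are supported):
from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader("example.pdf")
docs = loader.load()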
CoNLL-U
Copy Paste
CSV
Email
EPub
EverNote
Facebook Chat
File Directory
HTML
Images
Jupyter Notebook
JSON
Markdown
Microsoft PowerPoint
Microsoft Word
Open Document Format (ODT)
Pandas DataFrame
PDF
Sitemap
Subtitle
Telegram
TOML
Unstructured File
URL
Selenium URL Loader
Playwright URL Loader
WebBaseLoader
Weather
WhatsApp Chat
Public dataset or service loaders#
These datasets and sources are in the public domain, and we use queries to search them
and download the necessary documents.
For example, Hacker News service.
We don’t need any access permissions to these datasets and services.
Arxiv
AZLyrics
BiliBili
College Confidential
Gutenberg
Hacker News
HuggingFace dataset
iFixit
IMSDb
MediaWikiDump
Wikipedia
YouTube transcripts
Proprietary dataset or service loaders#
These datasets and services are not from the public domain.
These loaders mostly transform data from specific formats of applications or cloud services,
for example Google Drive.
We need access tokens and sometimes other parameters to get access to these datasets and services.
Airbyte JSON
Apify Dataset
AWS S3 Directory
AWS S3 File
Azure Blob Storage Container
Azure Blob Storage File
Blackboard
Blockchain
ChatGPT Data
Confluence
Diffbot
Discord
Docugami
DuckDB
Figma
GitBook
Git
Google BigQuery
Google Cloud Storage Directory
Google Cloud Storage File
Google Drive
Image captions
Iugu
Joplin
Microsoft OneDrive
Modern Treasury
Notion DB 2/2
Notion DB 1/2
Obsidian
Psychic
ReadTheDocs Documentation
Reddit
Roam
Slack
Spreedly
Stripe
2Markdown
Twitter
.rst
.pdf
Retrievers
Retrievers#
Note
Conceptual Guide
The retriever interface is a generic interface that makes it easy to combine documents with
language models. This interface exposes a get_relevant_documents method which takes in a query
(a string) and returns a list of documents.
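As a minimal usage sketch (assuming retriever is an instance of any retriever listed below):
docs = retriever.get_relevant_documents("my query")
for doc in docs:
    print(doc.page_content)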
Please see below for a list of all the retrievers supported.
Arxiv
Azure Cognitive Search Retriever
ChatGPT Plugin
Self-querying with Chroma
Cohere Reranker
Contextual Compression
Stringing compressors and document transformers together
Databerry
ElasticSearch BM25
kNN
Metal
Pinecone Hybrid Search
Self-querying
SVM
TF-IDF
Time Weighted VectorStore
VectorStore
Vespa
Weaviate Hybrid Search
Self-querying with Weaviate
Wikipedia
Zep Memory
.rst
.pdf
Vectorstores
Vectorstores#
Note
Conceptual Guide
Vectorstores are one of the most important components of building indexes.
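As a minimal sketch of the interface vectorstores share (assuming docs is a list of Documents you have already loaded and split, and using Chroma with OpenAI embeddings):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
db = Chroma.from_documents(docs, OpenAIEmbeddings())
results = db.similarity_search("my query", k=4)  # the 4 most similar chunks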
For an introduction to vectorstores and generic functionality see:
Getting Started
We also have documentation for all the types of vectorstores that are supported.
Please see below for that list.
AnalyticDB
Annoy
Atlas
Chroma
Deep Lake
DocArrayHnswSearch
DocArrayInMemorySearch
ElasticSearch
FAISS
LanceDB
Milvus
MyScale
OpenSearch
PGVector
Pinecone
Qdrant
Redis
Supabase (Postgres)
Tair
Typesense
Vectara
Weaviate
Persistence
Retriever options
Zilliz
.rst
.pdf
Text Splitters
Text Splitters#
Note
Conceptual Guide
When you want to deal with long pieces of text, it is necessary to split up that text into chunks.
As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What “semantically related” means could depend on the type of text.
This notebook showcases several ways to do that.
At a high level, text splitters work as follows:
Split the text up into small, semantically meaningful chunks (often sentences).
Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).
Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).
That means there are two different axes along which you can customize your text splitter (see the sketch after this list):
How the text is split
How the chunk size is measured
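For example, a minimal sketch with the character splitter (the sizes are arbitrary) makes both axes visible:
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(
    separator="\n\n",   # how the text is split
    chunk_size=1000,    # target chunk size, measured here in characters
    chunk_overlap=200,  # overlap to keep context between chunks
)
chunks = text_splitter.split_text(some_long_text)  # assumes some_long_text is a string you provide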
For an introduction to the default text splitter and generic functionality see:
Getting Started
Usage examples for the text splitters:
Character
LaTeX
Markdown
NLTK
Python code
Recursive Character
spaCy
tiktoken (OpenAI)
Most LLMs are constrained by the number of tokens that you can pass in, which is not the same as the number of characters.
In order to get a more accurate estimate, we can use tokenizers to count the number of tokens in the text.
We use this number inside the TextSplitter classes.
This is implemented as the from_<tokenizer> methods of the TextSplitter classes:
Hugging Face tokenizer
tiktoken (OpenAI) tokenizer
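A minimal sketch of the tiktoken-based constructor (the sizes are arbitrary):
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=100,  # measured in tokens rather than characters
    chunk_overlap=0,
)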
.ipynb
.pdf
Contextual Compression
Contents
Contextual Compression
Using a vanilla vector store retriever
Adding contextual compression with an LLMChainExtractor
More built-in compressors: filters
LLMChainFilter
EmbeddingsFilter
Stringing compressors and document transformers together
Contextual Compression#
This notebook introduces the concept of DocumentCompressors and the ContextualCompressionRetriever. The core idea is simple: given a specific query, we should be able to return only the documents relevant to that query, and only the parts of those documents that are relevant. The ContextualCompressionRetriever is a wrapper for another retriever that iterates over the initial output of the base retriever and filters and compresses those initial documents, so that only the most relevant information is returned.
# Helper function for printing docs
def pretty_print_docs(docs):
print(f"\n{'-' * 100}\n".join([f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)]))
Using a vanilla vector store retriever#
Let’s start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can see that given an example question our retriever returns one or two relevant docs and a few irrelevant docs. And even the relevant docs have a lot of irrelevant information in them.
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_loaders import TextLoader
from langchain.vectorstores import FAISS
documents = TextLoader('../../../state_of_the_union.txt').load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever()
docs = retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
pretty_print_docs(docs)
Document 1:
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
----------------------------------------------------------------------------------------------------
Document 3:
And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong.
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.
And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things.
So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together.
First, beat the opioid epidemic.
----------------------------------------------------------------------------------------------------
Document 4:
Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers.
And as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up.
That ends on my watch.
Medicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect.
We’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees.
Let’s pass the Paycheck Fairness Act and paid leave.
Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty.
Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.
Adding contextual compression with an LLMChainExtractor#
Now let’s wrap our base retriever with a ContextualCompressionRetriever. We’ll add an LLMChainExtractor, which will iterate over the initially returned documents and extract from each only the content that is relevant to the query.
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
llm = OpenAI(temperature=0)
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
pretty_print_docs(compressed_docs)
Document 1:
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence."
----------------------------------------------------------------------------------------------------
Document 2:
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
More built-in compressors: filters#
LLMChainFilter#
The LLMChainFilter is a slightly simpler but more robust compressor that uses an LLM chain to decide which of the initially retrieved documents to filter out and which ones to return, without manipulating the document contents.
from langchain.retrievers.document_compressors import LLMChainFilter
_filter = LLMChainFilter.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(base_compressor=_filter, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
pretty_print_docs(compressed_docs)
Document 1:
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
EmbeddingsFilter#
Making an extra LLM call over each retrieved document is expensive and slow. The EmbeddingsFilter provides a cheaper and faster option by embedding the documents and query and only returning those documents which have sufficiently similar embeddings to the query.
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.document_compressors import EmbeddingsFilter
embeddings = OpenAIEmbeddings()
embeddings_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)
compression_retriever = ContextualCompressionRetriever(base_compressor=embeddings_filter, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
pretty_print_docs(compressed_docs)
Document 1:
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
----------------------------------------------------------------------------------------------------
Document 3:
And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong.
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.
And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things.
So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together.
First, beat the opioid epidemic.
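Besides similarity_threshold, the EmbeddingsFilter can alternatively keep a fixed number of the most similar documents. A minimal sketch, assuming the k parameter is supported in your installed version (treat k as an assumption and check the API reference):
# Hypothetical variant: keep only the 2 most similar documents instead of
# applying a similarity threshold (k is assumed to be supported).
top_k_filter = EmbeddingsFilter(embeddings=embeddings, k=2, similarity_threshold=None)
compression_retriever = ContextualCompressionRetriever(base_compressor=top_k_filter, base_retriever=retriever)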
Stringing compressors and document transformers together#
Using the DocumentCompressorPipeline we can also easily combine multiple compressors in sequence. Along with compressors we can add BaseDocumentTransformers to our pipeline, which don’t perform any contextual compression but simply perform some transformation on a set of documents. For example, TextSplitters can be used as document transformers to split documents into smaller pieces, and the EmbeddingsRedundantFilter can be used to filter out redundant documents based on embedding similarity between documents.
Below we create a compressor pipeline by first splitting our docs into smaller chunks, then removing redundant documents, and then filtering based on relevance to the query.
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.retrievers.document_compressors import DocumentCompressorPipeline
from langchain.text_splitter import CharacterTextSplitter
splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=". ")
redundant_filter = EmbeddingsRedundantFilter(embeddings=embeddings)
relevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)
pipeline_compressor = DocumentCompressorPipeline(
    transformers=[splitter, redundant_filter, relevant_filter]
)
compression_retriever = ContextualCompressionRetriever(base_compressor=pipeline_compressor, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
pretty_print_docs(compressed_docs)
Document 1:
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson
----------------------------------------------------------------------------------------------------
Document 2:
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year
----------------------------------------------------------------------------------------------------
Document 3:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder
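A compression retriever can be dropped in anywhere a plain retriever is expected. As a minimal sketch (RetrievalQA is used here purely for illustration; any retriever-consuming chain would work the same way), question answering over the compressed results might look like:
from langchain.chains import RetrievalQA

# Reuses the llm and the pipeline-based compression_retriever defined above.
qa = RetrievalQA.from_chain_type(llm=llm, retriever=compression_retriever)
qa.run("What did the president say about Ketanji Brown Jackson?")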
SVM#
Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection.
This notebook goes over how to use a retriever that, under the hood, uses an SVM from the scikit-learn package.
Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb
#!pip install scikit-learn
#!pip install lark
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.retrievers import SVMRetriever
from langchain.embeddings import OpenAIEmbeddings
Create New Retriever with Texts#
retriever = SVMRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings())
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={}),
Document(page_content='hello', metadata={}),
Document(page_content='world', metadata={})]
Arxiv#
arXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.
This notebook shows how to retrieve scientific articles from Arxiv.org into the Document format that is used downstream.
Installation#
First, you need to install the arxiv python package.
#!pip install arxiv
ArxivRetriever has these arguments:
optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.
optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (the date the document was published or last updated), Title, Authors, Summary. If True, other fields are also downloaded (see the sketch below).
get_relevant_documents() has one argument, query: free text used to find documents on Arxiv.org.
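For example, a retriever that fetches at most two documents along with all of their available metadata might be constructed as follows; this is a minimal sketch using only the arguments described above:
from langchain.retrievers import ArxivRetriever

# Keep load_max_docs small for experiments; downloading is slow.
retriever = ArxivRetriever(load_max_docs=2, load_all_available_meta=True)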
Examples#
Running retriever#
from langchain.retrievers import ArxivRetriever
retriever = ArxivRetriever(load_max_docs=2)
docs = retriever.get_relevant_documents(query='1605.08386')
docs[0].metadata # meta-information of the Document
{'Published': '2016-05-26',
'Title': 'Heat-bath random walks with Markov bases',
'Authors': 'Caprice Stanley, Tobias Windisch',
'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}
docs[0].page_content[:400] # a content of the Document
'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on fibers of a\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\nbehaviour of heat-b'
Question Answering on facts#
# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass
OPENAI_API_KEY = getpass()
import os
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
model = ChatOpenAI(model_name='gpt-3.5-turbo') # switch to 'gpt-4'
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)
questions = [
    "What are Heat-bath random walks with Markov base?",
    "What is the ImageBind model?",
    "How does Compositional Reasoning with Large Language Models work?",
]
chat_history = []
for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result['answer']))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
-> **Question**: What are Heat-bath random walks with Markov base?
**Answer**: I'm not sure, as I don't have enough context to provide a definitive answer. The term "Heat-bath random walks with Markov base" is not mentioned in the given text. Could you provide more information or context about where you encountered this term?
-> **Question**: What is the ImageBind model?
**Answer**: ImageBind is an approach developed by Facebook AI Research to learn a joint embedding across six different modalities, including images, text, audio, depth, thermal, and IMU data. The approach uses the binding property of images to align each modality's embedding to image embeddings and achieve an emergent alignment across all modalities. This enables novel multimodal capabilities, including cross-modal retrieval, embedding-space arithmetic, and audio-to-image generation, among others. The approach sets a new state-of-the-art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Additionally, it shows strong few-shot recognition results and serves as a new way to evaluate vision models for visual and non-visual tasks.
-> **Question**: How does Compositional Reasoning with Large Language Models work?
**Answer**: Compositional reasoning with large language models refers to the ability of these models to correctly identify and represent complex concepts by breaking them down into smaller, more basic parts and combining them in a structured way. This involves understanding the syntax and semantics of language and using that understanding to build up more complex meanings from simpler ones.
In the context of the paper "Does CLIP Bind Concepts? Probing Compositionality in Large Image Models", the authors focus specifically on the ability of a large pretrained vision and language model (CLIP) to encode compositional concepts and to bind variables in a structure-sensitive way. They examine CLIP's ability to compose concepts in a single-object setting, as well as in situations where concept binding is needed.
The authors situate their work within the tradition of research on compositional distributional semantics models (CDSMs), which seek to bridge the gap between distributional models and formal semantics by building architectures which operate over vectors yet still obey traditional theories of linguistic composition. They compare the performance of CLIP with several architectures from research on CDSMs to evaluate its ability to encode and reason about compositional concepts.
questions = [
    "What are Heat-bath random walks with Markov base? Include references to answer.",
]
chat_history = []
for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result['answer']))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
-> **Question**: What are Heat-bath random walks with Markov base? Include references to answer.
**Answer**: Heat-bath random walks with Markov base (HB-MB) is a class of stochastic processes that have been studied in the field of statistical mechanics and condensed matter physics. In these processes, a particle moves in a lattice by making a transition to a neighboring site, which is chosen according to a probability distribution that depends on the energy of the particle and the energy of its surroundings.
The HB-MB process was introduced by Bortz, Kalos, and Lebowitz in 1975 as a way to simulate the dynamics of interacting particles in a lattice at thermal equilibrium. The method has been used to study a variety of physical phenomena, including phase transitions, critical behavior, and transport properties.
References:
Bortz, A. B., Kalos, M. H., & Lebowitz, J. L. (1975). A new algorithm for Monte Carlo simulation of Ising spin systems. Journal of Computational Physics, 17(1), 10-18.
Binder, K., & Heermann, D. W. (2010). Monte Carlo simulation in statistical physics: an introduction. Springer Science & Business Media.
VectorStore#
The index - and therefore the retriever - that LangChain has the most support for is the VectorStoreRetriever. As the name suggests, this retriever is backed heavily by a VectorStore.
Once you construct a VectorStore, it’s very easy to construct a retriever. Let’s walk through an example.
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(texts, embeddings)
retriever = db.as_retriever()
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
Maximum Marginal Relevance Retrieval#
By default, the vectorstore retriever uses similarity search. If the underlying vectorstore supports maximum marginal relevance (MMR) search, you can specify that as the search type.
retriever = db.as_retriever(search_type="mmr")
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
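Depending on the vectorstore, additional MMR parameters can usually be passed through search_kwargs. A minimal sketch, assuming FAISS’s MMR signature (fetch_k, the size of the candidate pool considered before the diverse subset is selected, is an assumption; check your vectorstore’s API):
# Hypothetical tuning: consider 20 candidates, return the 4 that best balance
# relevance and diversity (fetch_k support depends on the vectorstore).
retriever = db.as_retriever(search_type="mmr", search_kwargs={"k": 4, "fetch_k": 20})
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")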
Similarity Score Threshold Retrieval#
You can also use a retrieval method that sets a similarity score threshold and only returns documents with a score above that threshold.
retriever = db.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": .5})
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
Specifying top k#
You can also specify search kwargs like k to use when doing retrieval.
retriever = db.as_retriever(search_kwargs={"k": 1})
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
len(docs)
1
Self-querying#
In this notebook we’ll demo the SelfQueryRetriever, which, as the name suggests, has the ability to query itself. Specifically, given any natural language query, the retriever uses a query-constructing LLM chain to write a structured query and then applies that structured query to its underlying VectorStore. This allows the retriever not only to use the user-input query for semantic similarity comparison with the contents of stored documents, but also to extract filters from the user query on the metadata of stored documents and to execute those filters.
Creating a Pinecone index#
First we’ll want to create a Pinecone VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
To use Pinecone, you need to have the pinecone package installed, and you must have an API key and an environment. Here are the installation instructions.
NOTE: The self-query retriever requires you to have the lark package installed.
# !pip install lark
#!pip install pinecone-client
import os
import pinecone
pinecone.init(api_key=os.environ["PINECONE_API_KEY"], environment=os.environ["PINECONE_ENV"])
/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pinecone/index.py:4: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
from tqdm.autonotebook import tqdm
from langchain.schema import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
embeddings = OpenAIEmbeddings()
# create new index
pinecone.create_index("langchain-self-retriever-demo", dimension=1536)  # 1536 matches the OpenAI embedding dimensionality
docs = [
    Document(page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": ["action", "science fiction"]}),
    Document(page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}),
    Document(page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}),
    Document(page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}),
    Document(page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}),
    Document(page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={"year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": ["science fiction", "thriller"]}),
]
vectorstore = Pinecone.from_documents(
    docs, embeddings, index_name="langchain-self-retriever-demo"
)
Creating our self-querying retriever#
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating",
        description="A 1-10 rating for the movie",
        type="float",
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)
Testing it out#
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaur' filter=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': ['action', 'science fiction'], 'rating': 7.7, 'year': 1993.0}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}),
Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'director': 'Christopher Nolan', 'rating': 8.2, 'year': 2010.0})]
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig')
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'director': 'Greta Gerwig', 'rating': 8.3, 'year': 2019.0})]
# This example specifies a composite filter
retriever.get_relevant_documents("What's a highly rated (above 8.5) science fiction film?")
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)])
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})]
# This example specifies a query and composite filter
retriever.get_relevant_documents("What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated")
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990.0), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005.0), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')])
[Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0})]
Filter k#
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
# This example only specifies a relevant query
retriever.get_relevant_documents("What are two movies about dinosaurs")