To generate a prompt with few shot examples, you can use the FewShotPromptTemplate. This class takes in a PromptTemplate and a list of few shot examples. It then formats the prompt template with the few shot examples.
In this example, we’ll create a prompt to generate word antonyms.
from langchain import PromptTemplate, FewShotPromptTemplate
# First, create the list of few shot examples.
examples = [
{"word": "happy", "antonym": "sad"},
{"word": "tall", "antonym": "short"},
]
# Next, we specify the template to format the examples we have provided.
# We use the `PromptTemplate` class for this.
example_formatter_template = """Word: {word}
Antonym: {antonym}
"""
example_prompt = PromptTemplate(
input_variables=["word", "antonym"],
template=example_formatter_template,
)
# Finally, we create the `FewShotPromptTemplate` object.
few_shot_prompt = FewShotPromptTemplate(
# These are the examples we want to insert into the prompt.
examples=examples,
# This is how we want to format the examples when we insert them into the prompt.
example_prompt=example_prompt,
# The prefix is some text that goes before the examples in the prompt.
# Usually, this consists of instructions.
prefix="Give the antonym of every input\n",
# The suffix is some text that goes after the examples in the prompt.
# Usually, this is where the user input will go
suffix="Word: {input}\nAntonym: ",
# The input variables are the variables that the overall prompt expects.
input_variables=["input"],
# The example_separator is the string we will use to join the prefix, examples, and suffix together with.
example_separator="\n",
)
# We can now generate a prompt using the `format` method.
print(few_shot_prompt.format(input="big"))
# -> Give the antonym of every input
# ->
# -> Word: happy
# -> Antonym: sad
# ->
# -> Word: tall
# -> Antonym: short
# ->
# -> Word: big
# -> Antonym:
Select examples for a prompt template#
If you have a large number of examples, you can use the ExampleSelector to select a subset of examples that will be most informative for the Language Model. This will help you generate a prompt that is more likely to generate a good response.
Below, we’ll use the LengthBasedExampleSelector, which selects examples based on the length of the input. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.
We’ll continue with the example from the previous section, but this time we’ll use the LengthBasedExampleSelector to select the examples.
from langchain.prompts.example_selector import LengthBasedExampleSelector
# These are a lot of examples of a pretend task of creating antonyms.
examples = [
{"word": "happy", "antonym": "sad"},
{"word": "tall", "antonym": "short"},
{"word": "energetic", "antonym": "lethargic"},
{"word": "sunny", "antonym": "gloomy"},
{"word": "windy", "antonym": "calm"},
]
# We'll use the `LengthBasedExampleSelector` to select the examples.
example_selector = LengthBasedExampleSelector(
# These are the examples it has available to choose from.
examples=examples,
# This is the PromptTemplate being used to format the examples.
example_prompt=example_prompt,
# This is the maximum length that the formatted examples should be.
# Length is measured by the get_text_length function below.
max_length=25
# This is the function used to get the length of a string, which is used
# to determine which examples to include. It is commented out because
# it is provided as a default value if none is specified.
# get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x))
)
# We can now use the `example_selector` to create a `FewShotPromptTemplate`.
dynamic_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Word: {input}\nAntonym:",
input_variables=["input"],
example_separator="\n\n",
)
# We can now generate a prompt using the `format` method.
print(dynamic_prompt.format(input="big"))
# -> Give the antonym of every input
# ->
# -> Word: happy
# -> Antonym: sad
# ->
# -> Word: tall
# -> Antonym: short
# ->
# -> Word: energetic
# -> Antonym: lethargic
# ->
# -> Word: sunny
# -> Antonym: gloomy
# ->
# -> Word: windy
# -> Antonym: calm
# ->
# -> Word: big
# -> Antonym:
In contrast, if we provide a very long input, the LengthBasedExampleSelector will select fewer examples to include in the prompt.
long_string = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else"
print(dynamic_prompt.format(input=long_string))
# -> Give the antonym of every input
# -> Word: happy
# -> Antonym: sad
# ->
# -> Word: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else
# -> Antonym:
LangChain comes with a few example selectors that you can use. For more details on how to use them, see Example Selectors.
You can create custom example selectors that select examples based on any criteria you want. For more details on how to do this, see Creating a custom example selector.
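For a rough sense of what that involves, the sketch below implements the BaseExampleSelector interface (its add_example and select_examples methods) with a toy selector that picks examples at random. The class name and selection rule are illustrative assumptions, not part of LangChain.
from typing import Dict, List
import random
from langchain.prompts.example_selector.base import BaseExampleSelector

class RandomExampleSelector(BaseExampleSelector):
    """Toy selector: ignores the input and returns up to two random examples."""

    def __init__(self, examples: List[Dict[str, str]]):
        self.examples = examples

    def add_example(self, example: Dict[str, str]) -> None:
        # Store a new example so it can be selected later.
        self.examples.append(example)

    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        # Select based on any criteria you like; here we simply sample at random.
        return random.sample(self.examples, min(2, len(self.examples)))
An instance of such a selector could then be passed to FewShotPromptTemplate via the example_selector argument, just like the LengthBasedExampleSelector above.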
How to serialize prompts#
It is often preferable to store prompts not as Python code but as files. This can make it easy to share, store, and version prompts. This notebook covers how to do that in LangChain, walking through all the different types of prompts and the different serialization options.
At a high level, the following design principles are applied to serialization:
Both JSON and YAML are supported. We want to support serialization methods that are human readable on disk, and YAML and JSON are two of the most popular methods for that. Note that this rule applies to prompts. For other assets, like Examples, different serialization methods may be supported.
We support specifying everything in one file, or storing different components (templates, examples, etc.) in different files and referencing them. For some cases, storing everything in one file makes the most sense, but for others it is preferable to split up some of the assets (long templates, large examples, reusable components). LangChain supports both.
There is also a single entry point to load prompts from disk, making it easy to load any type of prompt.
# All prompts are loaded through the `load_prompt` function.
from langchain.prompts import load_prompt
PromptTemplate#
This section covers examples for loading a PromptTemplate.
Loading from YAML#
This shows an example of loading a PromptTemplate from YAML.
!cat simple_prompt.yaml
_type: prompt
input_variables:
["adjective", "content"]
template:
Tell me a {adjective} joke about {content}.
prompt = load_prompt("simple_prompt.yaml")
print(prompt.format(adjective="funny", content="chickens"))
Tell me a funny joke about chickens.
Loading from JSON#
This shows an example of loading a PromptTemplate from JSON.
!cat simple_prompt.json
{
"_type": "prompt",
"input_variables": ["adjective", "content"],
"template": "Tell me a {adjective} joke about {content}."
}
prompt = load_prompt("simple_prompt.json")
print(prompt.format(adjective="funny", content="chickens"))
Tell me a funny joke about chickens.
Loading Template from a File#
This shows an example of storing the template in a separate file and then referencing it in the config. Notice that the key changes from template to template_path.
!cat simple_template.txt
Tell me a {adjective} joke about {content}.
!cat simple_prompt_with_template_file.json
{
"_type": "prompt",
"input_variables": ["adjective", "content"],
"template_path": "simple_template.txt"
}
prompt = load_prompt("simple_prompt_with_template_file.json")
print(prompt.format(adjective="funny", content="chickens"))
Tell me a funny joke about chickens.
FewShotPromptTemplate#
This section covers examples for loading few shot prompt templates.
Examples#
This shows an example of what examples stored as JSON might look like.
!cat examples.json
[
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"}
]
And here is what the same examples stored as YAML might look like.
!cat examples.yaml
- input: happy
output: sad
- input: tall
output: short
Loading from YAML#
This shows an example of loading a few shot example from YAML.
!cat few_shot_prompt.yaml
_type: few_shot
input_variables:
["adjective"]
prefix:
Write antonyms for the following words.
example_prompt:
_type: prompt
input_variables:
["input", "output"]
template:
"Input: {input}\nOutput: {output}"
examples:
examples.json
suffix:
"Input: {adjective}\nOutput:"
prompt = load_prompt("few_shot_prompt.yaml")
print(prompt.format(adjective="funny"))
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
The same would work if you loaded the examples from the YAML file.
!cat few_shot_prompt_yaml_examples.yaml
_type: few_shot
input_variables:
["adjective"]
prefix:
Write antonyms for the following words.
example_prompt:
_type: prompt
input_variables:
["input", "output"]
template:
"Input: {input}\nOutput: {output}"
examples:
examples.yaml
suffix:
"Input: {adjective}\nOutput:"
prompt = load_prompt("few_shot_prompt_yaml_examples.yaml")
print(prompt.format(adjective="funny"))
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
Loading from JSON#
This shows an example of loading a few shot example from JSON.
!cat few_shot_prompt.json
{
"_type": "few_shot",
"input_variables": ["adjective"],
"prefix": "Write antonyms for the following words.",
"example_prompt": {
"_type": "prompt",
"input_variables": ["input", "output"],
"template": "Input: {input}\nOutput: {output}"
},
"examples": "examples.json",
"suffix": "Input: {adjective}\nOutput:"
}
prompt = load_prompt("few_shot_prompt.json")
print(prompt.format(adjective="funny"))
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
Examples in the Config#
This shows an example of referencing the examples directly in the config.
!cat few_shot_prompt_examples_in.json
{
"_type": "few_shot",
"input_variables": ["adjective"],
"prefix": "Write antonyms for the following words.",
"example_prompt": {
"_type": "prompt",
"input_variables": ["input", "output"],
"template": "Input: {input}\nOutput: {output}"
},
"examples": [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"}
],
"suffix": "Input: {adjective}\nOutput:"
}
prompt = load_prompt("few_shot_prompt_examples_in.json")
print(prompt.format(adjective="funny"))
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
Example Prompt from a File#
This shows an example of loading the PromptTemplate that is used to format the examples from a separate file. Note that the key changes from example_prompt to example_prompt_path.
!cat example_prompt.json
{
"_type": "prompt",
"input_variables": ["input", "output"],
"template": "Input: {input}\nOutput: {output}"
}
!cat few_shot_prompt_example_prompt.json
{
"_type": "few_shot",
"input_variables": ["adjective"],
"prefix": "Write antonyms for the following words.",
"example_prompt_path": "example_prompt.json",
"examples": "examples.json",
"suffix": "Input: {adjective}\nOutput:"
}
prompt = load_prompt("few_shot_prompt_example_prompt.json")
print(prompt.format(adjective="funny"))
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
PromptTemplate with OutputParser#
This shows an example of loading a prompt along with an OutputParser from a file.
! cat prompt_with_output_parser.json
{
"input_variables": [
"question",
"student_answer"
],
"output_parser": {
"regex": "(.*?)\\nScore: (.*)",
"output_keys": [
"answer",
"score"
],
"default_output_key": null,
"_type": "regex_parser"
},
"partial_variables": {},
"template": "Given the following question and student answer, provide a correct answer and score the student answer.\nQuestion: {question}\nStudent Answer: {student_answer}\nCorrect Answer:",
"template_format": "f-string",
"validate_template": true,
"_type": "prompt"
}
prompt = load_prompt("prompt_with_output_parser.json")
prompt.output_parser.parse("George Washington was born in 1732 and died in 1799.\nScore: 1/2")
{'answer': 'George Washington was born in 1732 and died in 1799.',
'score': '1/2'}
How to work with partial Prompt Templates#
A prompt template is a class with a .format method which takes in a key-value map and returns a string (a prompt) to pass to the language model. Like other methods, it can make sense to “partial” a prompt template - e.g. pass in a subset of the required values, so as to create a new prompt template which expects only the remaining subset of values.
LangChain supports this in two ways: we allow for partially formatted prompts (1) with string values, (2) with functions that return string values. These two different ways support different use cases. In the documentation below we go over the motivations for both use cases as well as how to do it in LangChain.
Partial With Strings#
One common use case for wanting to partial a prompt template is if you get some of the variables before others. For example, suppose you have a prompt template that requires two variables, foo and baz. If you get the foo value early on in the chain, but the baz value later, it can be annoying to wait until you have both variables in the same place to pass them to the prompt template. Instead, you can partial the prompt template with the foo value, and then pass the partialed prompt template along and just use that. Below is an example of doing this:
from langchain.prompts import PromptTemplate
prompt = PromptTemplate(template="{foo}{bar}", input_variables=["foo", "bar"])
partial_prompt = prompt.partial(foo="foo");
print(partial_prompt.format(bar="baz"))
foobaz
You can also just initialize the prompt with the partialed variables.
prompt = PromptTemplate(template="{foo}{bar}", input_variables=["bar"], partial_variables={"foo": "foo"})
print(prompt.format(bar="baz"))
foobaz
Partial With Functions#
The other common use is to partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can’t hard code it in the prompt, and passing it along with the other input variables is a bit annoying. In this case, it’s very handy to be able to partial the prompt with a function that always returns the current date.
from datetime import datetime
def _get_datetime():
    now = datetime.now()
    return now.strftime("%m/%d/%Y, %H:%M:%S")
prompt = PromptTemplate(
template="Tell me a {adjective} joke about the day {date}",
input_variables=["adjective", "date"]
)
partial_prompt = prompt.partial(date=_get_datetime)
print(partial_prompt.format(adjective="funny"))
Tell me a funny joke about the day 02/27/2023, 22:15:16
You can also just initialize the prompt with the partialed variables, which often makes more sense in this workflow.
prompt = PromptTemplate(
template="Tell me a {adjective} joke about the day {date}",
input_variables=["adjective"],
partial_variables={"date": _get_datetime}
)
print(prompt.format(adjective="funny"))
Tell me a funny joke about the day 02/27/2023, 22:15:16
How to create a prompt template that uses few shot examples#
In this tutorial, we’ll learn how to create a prompt template that uses few shot examples.
We’ll use the FewShotPromptTemplate class to create a prompt template that uses few shot examples. This class either takes in a set of examples, or an ExampleSelector object. In this tutorial, we’ll go over both options.
Use Case#
In this tutorial, we’ll configure few shot examples for self-ask with search.
Using an example set#
Create the example set#
To get started, create a list of few shot examples. Each example should be a dictionary with the keys being the input variables and the values being the values for those input variables.
from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.prompt import PromptTemplate
examples = [
{
"question": "Who lived longer, Muhammad Ali or Alan Turing?",
"answer":
"""
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
"""
},
{
"question": "When was the founder of craigslist born?",
"answer":
"""
Are follow up questions needed here: Yes.
Follow up: Who was the founder of craigslist?
Intermediate answer: Craigslist was founded by Craig Newmark.
Follow up: When was Craig Newmark born?
Intermediate answer: Craig Newmark was born on December 6, 1952.
So the final answer is: December 6, 1952
"""
},
{
"question": "Who was the maternal grandfather of George Washington?",
"answer":
"""
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
"""
},
{
"question": "Are both the directors of Jaws and Casino Royale from the same country?",
"answer":
"""
Are follow up questions needed here: Yes.
Follow up: Who is the director of Jaws?
Intermediate Answer: The director of Jaws is Steven Spielberg.
Follow up: Where is Steven Spielberg from?
Intermediate Answer: The United States.
Follow up: Who is the director of Casino Royale?
Intermediate Answer: The director of Casino Royale is Martin Campbell.
Follow up: Where is Martin Campbell from?
Intermediate Answer: New Zealand.
So the final answer is: No
"""
}
]
Create a formatter for the few shot examples#
Configure a formatter that will format the few shot examples into a string. This formatter should be a PromptTemplate object.
example_prompt = PromptTemplate(input_variables=["question", "answer"], template="Question: {question}\n{answer}")
print(example_prompt.format(**examples[0]))
Question: Who lived longer, Muhammad Ali or Alan Turing?
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
Feed examples and formatter to FewShotPromptTemplate#
Finally, create a FewShotPromptTemplate object. This object takes in the few shot examples and the formatter for the few shot examples.
prompt = FewShotPromptTemplate(
examples=examples,
example_prompt=example_prompt,
suffix="Question: {input}",
input_variables=["input"]
)
print(prompt.format(input="Who was the father of Mary Ball Washington?"))
Question: Who lived longer, Muhammad Ali or Alan Turing?
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
Question: When was the founder of craigslist born?
Are follow up questions needed here: Yes.
Follow up: Who was the founder of craigslist?
Intermediate answer: Craigslist was founded by Craig Newmark.
Follow up: When was Craig Newmark born?
Intermediate answer: Craig Newmark was born on December 6, 1952.
So the final answer is: December 6, 1952
Question: Who was the maternal grandfather of George Washington?
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
Question: Are both the directors of Jaws and Casino Royale from the same country?
Are follow up questions needed here: Yes.
Follow up: Who is the director of Jaws?
Intermediate Answer: The director of Jaws is Steven Spielberg.
Follow up: Where is Steven Spielberg from?
Intermediate Answer: The United States.
Follow up: Who is the director of Casino Royale?
Intermediate Answer: The director of Casino Royale is Martin Campbell.
Follow up: Where is Martin Campbell from?
Intermediate Answer: New Zealand.
So the final answer is: No
Question: Who was the father of Mary Ball Washington?
Using an example selector#
Feed examples into ExampleSelector#
We will reuse the example set and the formatter from the previous section. However, instead of feeding the examples directly into the FewShotPromptTemplate object, we will feed them into an ExampleSelector object.
In this tutorial, we will use the SemanticSimilarityExampleSelector class. This class selects few shot examples based on their similarity to the input. It uses an embedding model to compute the similarity between the input and the few shot examples, as well as a vector store to perform the nearest neighbor search.
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
example_selector = SemanticSimilarityExampleSelector.from_examples(
# This is the list of examples available to select from.
examples,
# This is the embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(),
# This is the VectorStore class that is used to store the embeddings and do a similarity search over.
Chroma,
# This is the number of examples to produce.
k=1
)
# Select the most similar example to the input.
question = "Who was the father of Mary Ball Washington?"
selected_examples = example_selector.select_examples({"question": question})
print(f"Examples most similar to the input: {question}")
for example in selected_examples:
    print("\n")
    for k, v in example.items():
        print(f"{k}: {v}")
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
Examples most similar to the input: Who was the father of Mary Ball Washington?
question: Who was the maternal grandfather of George Washington?
answer:
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
Feed example selector into FewShotPromptTemplate#
Finally, create a FewShotPromptTemplate object. This object takes in the example selector and the formatter for the few shot examples.
prompt = FewShotPromptTemplate(
example_selector=example_selector,
example_prompt=example_prompt,
suffix="Question: {input}",
input_variables=["input"]
)
print(prompt.format(input="Who was the father of Mary Ball Washington?"))
Question: Who was the maternal grandfather of George Washington?
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
Question: Who was the father of Mary Ball Washington?
Connecting to a Feature Store#
Feature stores are a concept from traditional machine learning that make sure data fed into models is up-to-date and relevant. For more on this, see here.
This concept is extremely relevant when considering putting LLM applications in production. In order to personalize LLM applications, you may want to combine LLMs with up-to-date information about particular users. Feature stores can be a great way to keep that data fresh, and LangChain provides an easy way to combine that data with LLMs.
In this notebook we will show how to connect prompt templates to feature stores. The basic idea is to call a feature store from inside a prompt template to retrieve values that are then formatted into the prompt.
Feast#
To start, we will use the popular open source feature store framework Feast.
This assumes you have already run the steps in the README around getting started. We will build off of that example in getting started, and create an LLMChain to write a note to a specific driver regarding their up-to-date statistics.
Load Feast Store#
Again, this should be set up according to the instructions in the Feast README.
from feast import FeatureStore
# You may need to update the path depending on where you stored it
feast_repo_path = "../../../../../my_feature_repo/feature_repo/"
store = FeatureStore(repo_path=feast_repo_path)
Prompts#
Here we will set up a custom FeastPromptTemplate. This prompt template will take in a driver id, look up their stats, and format those stats into a prompt.
Note that the input to this prompt template is just driver_id, since that is the only user defined piece (all other variables are looked up inside the prompt template).
from langchain.prompts import PromptTemplate, StringPromptTemplate
template = """Given the driver's up to date stats, write them note relaying those stats to them.
If they have a conversation rate above .5, give them a compliment. Otherwise, make a silly joke about chickens at the end to make them feel better
Here are the drivers stats:
Conversation rate: {conv_rate}
Acceptance rate: {acc_rate}
Average Daily Trips: {avg_daily_trips}
Your response:"""
prompt = PromptTemplate.from_template(template)
class FeastPromptTemplate(StringPromptTemplate):
    def format(self, **kwargs) -> str:
        driver_id = kwargs.pop("driver_id")
        feature_vector = store.get_online_features(
            features=[
                'driver_hourly_stats:conv_rate',
                'driver_hourly_stats:acc_rate',
                'driver_hourly_stats:avg_daily_trips'
            ],
            entity_rows=[{"driver_id": driver_id}]
        ).to_dict()
        kwargs["conv_rate"] = feature_vector["conv_rate"][0]
        kwargs["acc_rate"] = feature_vector["acc_rate"][0]
        kwargs["avg_daily_trips"] = feature_vector["avg_daily_trips"][0]
        return prompt.format(**kwargs)
prompt_template = FeastPromptTemplate(input_variables=["driver_id"])
print(prompt_template.format(driver_id=1001))
Given the driver's up to date stats, write them a note relaying those stats to them.
If they have a conversation rate above .5, give them a compliment. Otherwise, make a silly joke about chickens at the end to make them feel better.
Here are the driver's stats:
Conversation rate: 0.4745151400566101
Acceptance rate: 0.055561766028404236
Average Daily Trips: 936
Your response:
Use in a chain#
We can now use this in a chain, successfully creating a chain that achieves personalization backed by a feature store.
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
chain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)
chain.run(1001)
"Hi there! I wanted to update you on your current stats. Your acceptance rate is 0.055561766028404236 and your average daily trips are 936. While your conversation rate is currently 0.4745151400566101, I have no doubt that with a little extra effort, you'll be able to exceed that .5 mark! Keep up the great work! And remember, even chickens can't always cross the road, but they still give it their best shot."
Tecton#
Above, we showed how you could use Feast, a popular open source and self-managed feature store, with LangChain. Our examples below will show a similar integration using Tecton. Tecton is a fully managed feature platform built to orchestrate the complete ML feature lifecycle, from transformation to online serving, with enterprise-grade SLAs.
Prerequisites#
Tecton Deployment (sign up at https://tecton.ai)
TECTON_API_KEY environment variable set to a valid Service Account key
Define and Load Features#
We will use the user_transaction_counts Feature View from the Tecton tutorial as part of a Feature Service. For simplicity, we are only using a single Feature View; however, more sophisticated applications may require more feature views to retrieve the features needed for their prompts.
user_transaction_metrics = FeatureService(
name = "user_transaction_metrics",
features = [user_transaction_counts]
)
The above Feature Service is expected to be applied to a live workspace. For this example, we will be using the “prod” workspace.
import tecton
workspace = tecton.get_workspace("prod")
feature_service = workspace.get_feature_service("user_transaction_metrics")
Prompts#
Here we will set up a custom TectonPromptTemplate. This prompt template will take in a user_id, look up their stats, and format those stats into a prompt.
Note that the input to this prompt template is just user_id, since that is the only user defined piece (all other variables are looked up inside the prompt template).
from langchain.prompts import PromptTemplate, StringPromptTemplate
template = """Given the vendor's up to date transaction stats, write them a note based on the following rules:
1. If they had a transaction in the last day, write a short congratulations message on their recent sales
2. If no transaction in the last day, but they had a transaction in the last 30 days, playfully encourage them to sell more.
3. Always add a silly joke about chickens at the end
Here are the vendor's stats:
Number of Transactions Last Day: {transaction_count_1d}
Number of Transactions Last 30 Days: {transaction_count_30d}
Your response:"""
prompt = PromptTemplate.from_template(template)
class TectonPromptTemplate(StringPromptTemplate):
    def format(self, **kwargs) -> str:
        user_id = kwargs.pop("user_id")
        feature_vector = feature_service.get_online_features(join_keys={"user_id": user_id}).to_dict()
        kwargs["transaction_count_1d"] = feature_vector["user_transaction_counts.transaction_count_1d_1d"]
kwargs["transaction_count_30d"] = feature_vector["user_transaction_counts.transaction_count_30d_1d"]
return prompt.format(**kwargs)
prompt_template = TectonPromptTemplate(input_variables=["user_id"])
print(prompt_template.format(user_id="user_469998441571"))
Given the vendor's up to date transaction stats, write them a note based on the following rules:
1. If they had a transaction in the last day, write a short congratulations message on their recent sales
2. If no transaction in the last day, but they had a transaction in the last 30 days, playfully encourage them to sell more.
3. Always add a silly joke about chickens at the end
Here are the vendor's stats:
Number of Transactions Last Day: 657
Number of Transactions Last 30 Days: 20326
Your response:
Use in a chain#
We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Tecton Feature Platform.
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
chain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)
chain.run("user_469998441571")
'Wow, congratulations on your recent sales! Your business is really soaring like a chicken on a hot air balloon! Keep up the great work!'
Featureform#
Finally, we will use Featureform, an open-source and enterprise-grade feature store, to run the same example. Featureform allows you to work with your infrastructure like Spark or locally to define your feature transformations.
Initialize Featureform#
You can follow the instructions in the README to initialize your transformations and features in Featureform.
import featureform as ff
client = ff.Client(host="demo.featureform.com")
Prompts#
Here we will set up a custom FeatureformPromptTemplate. This prompt template will take in a user id, look up the average amount that user pays per transaction, and format that value into the prompt.
Note that the input to this prompt template is just user_id, since that is the only user defined piece (all other variables are looked up inside the prompt template).
from langchain.prompts import PromptTemplate, StringPromptTemplate
template = """Given the amount a user spends on average per transaction, let them know if they are a high roller. Otherwise, make a silly joke about chickens at the end to make them feel better
Here are the user's stats:
Average Amount per Transaction: ${avg_transaction}
Your response:"""
prompt = PromptTemplate.from_template(template)
class FeatureformPromptTemplate(StringPromptTemplate):
    def format(self, **kwargs) -> str:
        user_id = kwargs.pop("user_id")
        # Look up the user's average transaction amount and fill it into the prompt.
        # (Assumes client.features returns the requested feature values as a list.)
        fpf = client.features([("avg_transactions", "quickstart")], {"user": user_id})
        kwargs["avg_transaction"] = fpf[0]
        return prompt.format(**kwargs)
prompt_template = FeatureformPromptTemplate(input_variables=["user_id"])
print(prompt_template.format(user_id="C1410926"))
Use in a chain#
We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Featureform Feature Platform.
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
chain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)
chain.run("C1410926")
How to create a custom prompt template#
Let’s suppose we want the LLM to generate English language explanations of a function given its name. To achieve this task, we will create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function.
Why are custom prompt templates needed?#
LangChain provides a set of default prompt templates that can be used to generate prompts for a variety of tasks. However, there may be cases where the default prompt templates do not meet your needs. For example, you may want to create a prompt template with specific dynamic instructions for your language model. In such cases, you can create a custom prompt template.
Take a look at the current set of default prompt templates here.
Creating a Custom Prompt Template#
There are essentially two distinct prompt templates available - string prompt templates and chat prompt templates. String prompt templates provide a simple prompt in string format, while chat prompt templates produce a more structured prompt to be used with a chat API.
In this guide, we will create a custom prompt using a string prompt template.
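For contrast, a minimal chat prompt template might look like the sketch below; the message contents are made up for illustration, and the rest of this guide sticks with a string prompt template.
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
# A chat prompt template produces a list of messages rather than a single string.
chat_prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("You are a helpful assistant that explains {topic}."),
    HumanMessagePromptTemplate.from_template("{question}"),
])
messages = chat_prompt.format_messages(topic="decorators", question="What is a decorator in Python?")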
To create a custom string prompt template, there are two requirements:
It has an input_variables attribute that exposes what input variables the prompt template expects.
It exposes a format method that takes in keyword arguments corresponding to the expected input_variables and returns the formatted prompt.
We will create a custom prompt template that takes in the function name as input and formats the prompt to provide the source code of the function. To achieve this, let’s first create a function that will return the source code of a function given its name.
import inspect
def get_source_code(function_name):
    # Get the source code of the function
    return inspect.getsource(function_name)
Next, we’ll create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function.
from langchain.prompts import StringPromptTemplate
from pydantic import BaseModel, validator
class FunctionExplainerPromptTemplate(StringPromptTemplate, BaseModel):
    """A custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function."""

    @validator("input_variables")
    def validate_input_variables(cls, v):
        """Validate that the input variables are correct."""
        if len(v) != 1 or "function_name" not in v:
            raise ValueError("function_name must be the only input_variable.")
        return v

    def format(self, **kwargs) -> str:
        # Get the source code of the function
        source_code = get_source_code(kwargs["function_name"])

        # Generate the prompt to be sent to the language model
        prompt = f"""
Given the function name and source code, generate an English language explanation of the function.
Function Name: {kwargs["function_name"].__name__}
Source Code:
{source_code}
Explanation:
"""
        return prompt

    def _prompt_type(self):
        return "function-explainer"
Use the custom prompt template#
Now that we have created a custom prompt template, we can use it to generate prompts for our task.
fn_explainer = FunctionExplainerPromptTemplate(input_variables=["function_name"])
# Generate a prompt for the function "get_source_code"
prompt = fn_explainer.format(function_name=get_source_code)
print(prompt)
Given the function name and source code, generate an English language explanation of the function.
Function Name: get_source_code
Source Code:
def get_source_code(function_name):
# Get the source code of the function
return inspect.getsource(function_name)
Explanation:
How-To Guides#
A chain is made up of links, which can be either primitives or other chains.
Primitives can be either prompts, models, arbitrary functions, or other chains.
The examples here are broken up into three sections:
Generic Functionality
Covers both generic chains (that are useful in a wide variety of applications) as well as generic functionality related to those chains.
Async API for Chain
Creating a custom Chain
Loading from LangChainHub
LLM Chain
Additional ways of running LLM Chain
Parsing the outputs
Initialize from string
Router Chains
Sequential Chains
Serialization
Transformation Chain
Index-related Chains
Chains related to working with indexes.
Analyze Document
Chat Over Documents with Chat History
Graph QA
Hypothetical Document Embeddings
Question Answering with Sources
Question Answering
Summarization
Retrieval Question/Answering
Retrieval Question Answering with Sources
Vector DB Text Generation
All other chains
All other types of chains!
API Chains
Self-Critique Chain with Constitutional AI
FLARE
GraphCypherQAChain
BashChain
LLMCheckerChain
LLM Math
LLMRequestsChain
LLMSummarizationCheckerChain
Moderation
Router Chains: Selecting from multiple prompts with MultiPromptChain
Router Chains: Selecting from multiple prompts with MultiRetrievalQAChain
OpenAPI Chain
PAL
SQL Chain example
Getting Started#
In this tutorial, we will learn about creating simple chains in LangChain. We will learn how to create a chain, add components to it, and run it.
In this tutorial, we will cover:
Using a simple LLM chain
Creating sequential chains
Creating a custom chain
Why do we need chains?#
Chains allow us to combine multiple components together to create a single, coherent application. For example, we can create a chain that takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM. We can build more complex chains by combining multiple chains together, or by combining chains with other components.
Quick start: Using LLMChain#
The LLMChain is a simple chain that takes in a prompt template, formats it with the user input and returns the response from an LLM.
To use the LLMChain, first create a prompt template.
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM.
from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt)
# Run the chain only specifying the input variable.
print(chain.run("colorful socks"))
Colorful Toes Co.
If there are multiple variables, you can input them all at once using a dictionary.
prompt = PromptTemplate(
input_variables=["company", "product"],
template="What is a good name for {company} that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run({
'company': "ABC Startup",
'product': "colorful socks"
}))
Socktopia Colourful Creations.
You can use a chat model in an LLMChain as well:
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
)
human_message_prompt = HumanMessagePromptTemplate(
prompt=PromptTemplate(
template="What is a good name for a company that makes {product}?",
input_variables=["product"],
)
)
chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])
chat = ChatOpenAI(temperature=0.9)
chain = LLMChain(llm=chat, prompt=chat_prompt_template)
print(chain.run("colorful socks"))
Rainbow Socks Co.
Different ways of calling chains#
All classes inherited from Chain offer a few ways of running chain logic. The most direct one is by using __call__:
chat = ChatOpenAI(temperature=0)
prompt_template = "Tell me a {adjective} joke"
llm_chain = LLMChain(
llm=chat,
prompt=PromptTemplate.from_template(prompt_template)
)
llm_chain(inputs={"adjective":"corny"})
{'adjective': 'corny',
'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}
By default, __call__ returns both the input and output key values. You can configure it to only return output key values by setting return_only_outputs to True.
llm_chain("corny", return_only_outputs=True)
{'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}
If the Chain only outputs one output key (i.e. only has one element in its output_keys), you can use the run method. Note that run outputs a string instead of a dictionary.
# llm_chain only has one output key, so we can use run
llm_chain.output_keys
['text']
llm_chain.run({"adjective":"corny"})
'Why did the tomato turn red? Because it saw the salad dressing!'
In the case of one input key, you can input the string directly without specifying the input mapping.
# These two are equivalent
llm_chain.run({"adjective":"corny"})
llm_chain.run("corny")
# These two are also equivalent
llm_chain("corny")
llm_chain({"adjective":"corny"})
{'adjective': 'corny',
'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}
Tips: You can easily integrate a Chain object as a Tool in your Agent via its run method. See an example here.
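As a rough sketch of that tip (the tool name and description here are made up for illustration), wrapping the chain's run method in a Tool might look like this:
from langchain.agents import Tool
# The agent passes its input string to llm_chain.run and uses the returned text.
joke_tool = Tool(
    name="joke-generator",
    func=llm_chain.run,
    description="Useful for telling a joke about a given topic.",
)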
Add memory to chains#
Chain supports taking a BaseMemory object as its memory argument, allowing a Chain object to persist data across multiple calls. In other words, it makes Chain a stateful object.
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
conversation = ConversationChain(
llm=chat,
memory=ConversationBufferMemory()
)
conversation.run("Answer briefly. What are the first 3 colors of a rainbow?")
# -> The first three colors of a rainbow are red, orange, and yellow.
conversation.run("And the next 4?")
# -> The next four colors of a rainbow are green, blue, indigo, and violet.
'The next four colors of a rainbow are green, blue, indigo, and violet.'
Essentially, BaseMemory defines an interface for how LangChain stores memory. It allows reading of stored data through the load_memory_variables method and storing new data through the save_context method. You can learn more about it in the Memory section.
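As a small sketch of that interface (reusing ConversationBufferMemory from above; the exact history string shown is indicative, not guaranteed), saving and reading memory directly might look like this:
memory = ConversationBufferMemory()
# save_context stores one input/output exchange.
memory.save_context({"input": "Hi there"}, {"output": "Hello! How can I help?"})
# load_memory_variables returns the stored data as variables a chain can consume.
print(memory.load_memory_variables({}))
# -> {'history': 'Human: Hi there\nAI: Hello! How can I help?'}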
Debug Chain#
It can be hard to debug a Chain object solely from its output, as most Chain objects involve a fair amount of input prompt preprocessing and LLM output post-processing. Setting verbose to True will print out some internal states of the Chain object while it is being run.
conversation = ConversationChain(
llm=chat,
memory=ConversationBufferMemory(),
verbose=True
)
conversation.run("What is ChatGPT?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: What is ChatGPT?
AI:
> Finished chain.
'ChatGPT is an AI language model developed by OpenAI. It is based on the GPT-3 architecture and is capable of generating human-like responses to text prompts. ChatGPT has been trained on a massive amount of text data and can understand and respond to a wide range of topics. It is often used for chatbots, virtual assistants, and other conversational AI applications.'
Combine chains with the SequentialChain#
The next step after calling a language model is to make a series of calls to a language model. We can do this using sequential chains, which are chains that execute their links in a predefined order. Specifically, we will use the SimpleSequentialChain. This is the simplest type of a sequential chain, where each step has a single input/output, and the output of one step is the input to the next.
In this tutorial, our sequential chain will:
First, create a company name for a product. We will reuse the LLMChain we’d previously initialized to create this company name.
Then, create a catchphrase for the product. We will initialize a new LLMChain to create this catchphrase, as shown below.
second_prompt = PromptTemplate(
input_variables=["company_name"],
template="Write a catchphrase for the following company: {company_name}",
)
chain_two = LLMChain(llm=llm, prompt=second_prompt)
Now we can combine the two LLMChains, so that we can create a company name and a catchphrase in a single step.
from langchain.chains import SimpleSequentialChain
overall_chain = SimpleSequentialChain(chains=[chain, chain_two], verbose=True)
# Run the chain specifying only the input variable for the first chain.
catchphrase = overall_chain.run("colorful socks")
print(catchphrase)
> Entering new SimpleSequentialChain chain...
Rainbow Socks Co.
"Put a little rainbow in your step!"
> Finished chain.
"Put a little rainbow in your step!"
Create a custom chain with the Chain class#
LangChain provides many chains out of the box, but sometimes you may want to create a custom chain for your specific use case. For this example, we will create a custom chain that concatenates the outputs of 2 LLMChains.
In order to create a custom chain:
Start by subclassing the Chain class,
Fill out the input_keys and output_keys properties,
Add the _call method that shows how to execute the chain.
These steps are demonstrated in the example below:
from langchain.chains import LLMChain
from langchain.chains.base import Chain
from typing import Dict, List
class ConcatenateChain(Chain):
    chain_1: LLMChain
    chain_2: LLMChain

    @property
    def input_keys(self) -> List[str]:
        # Union of the input keys of the two chains.
        all_input_vars = set(self.chain_1.input_keys).union(set(self.chain_2.input_keys))
        return list(all_input_vars)

    @property
    def output_keys(self) -> List[str]:
        return ['concat_output']

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        output_1 = self.chain_1.run(inputs)
        output_2 = self.chain_2.run(inputs)
        return {'concat_output': output_1 + output_2}
Now, we can try running the chain that we defined.
prompt_1 = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
chain_1 = LLMChain(llm=llm, prompt=prompt_1)
prompt_2 = PromptTemplate(
input_variables=["product"],
template="What is a good slogan for a company that makes {product}?",
)
chain_2 = LLMChain(llm=llm, prompt=prompt_2)
concat_chain = ConcatenateChain(chain_1=chain_1, chain_2=chain_2)
concat_output = concat_chain.run("colorful socks")
print(f"Concatenated output:\n{concat_output}")
Concatenated output:
Funky Footwear Company
"Brighten Up Your Day with Our Colorful Socks!"
That’s it! For more details about how to do cool things with Chains, check out the how-to guide for chains.
https://python.langchain.com/en/latest/modules/chains/getting_started.html
Graph QA#
This notebook goes over how to do question answering over a graph data structure.
Create the graph#
In this section, we construct an example graph. At the moment, this works best for small pieces of text.
from langchain.indexes import GraphIndexCreator
from langchain.llms import OpenAI
from langchain.document_loaders import TextLoader
index_creator = GraphIndexCreator(llm=OpenAI(temperature=0))
with open("../../state_of_the_union.txt") as f:
all_text = f.read()
We will use just a small snippet, because extracting the knowledge triplets is a bit intensive at the moment.
text = "\n".join(all_text.split("\n\n")[105:108])
text
'It won’t look like much, but if you stop and look closely, you’ll see a “Field of dreams,” the ground on which America’s future will be built. \nThis is where Intel, the American company that helped build Silicon Valley, is going to build its $20 billion semiconductor “mega site”. \nUp to eight state-of-the-art factories in one place. 10,000 new good-paying jobs. '
graph = index_creator.from_text(text)
We can inspect the created graph.
graph.get_triples()
[('Intel', '$20 billion semiconductor "mega site"', 'is going to build'),
('Intel', 'state-of-the-art factories', 'is building'),
('Intel', '10,000 new good-paying jobs', 'is creating'),
('Intel', 'Silicon Valley', 'is helping build'),
('Field of dreams',
"America's future will be built",
'is the ground on which')]
Querying the graph#
We can now use the graph QA chain to ask questions of the graph.
from langchain.chains import GraphQAChain
chain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=graph, verbose=True)
chain.run("what is Intel going to build?")
> Entering new GraphQAChain chain...
Entities Extracted:
Intel
Full Context:
Intel is going to build $20 billion semiconductor "mega site"
Intel is building state-of-the-art factories
Intel is creating 10,000 new good-paying jobs
Intel is helping build Silicon Valley
> Finished chain.
' Intel is going to build a $20 billion semiconductor "mega site" with state-of-the-art factories, creating 10,000 new good-paying jobs and helping to build Silicon Valley.'
Save the graph#
We can also save and load the graph.
graph.write_to_gml("graph.gml")
from langchain.indexes.graph import NetworkxEntityGraph
loaded_graph = NetworkxEntityGraph.from_gml("graph.gml")
loaded_graph.get_triples()
[('Intel', '$20 billion semiconductor "mega site"', 'is going to build'),
('Intel', 'state-of-the-art factories', 'is building'),
('Intel', '10,000 new good-paying jobs', 'is creating'),
('Intel', 'Silicon Valley', 'is helping build'),
('Field of dreams',
"America's future will be built",
'is the ground on which')]
https://python.langchain.com/en/latest/modules/chains/index_examples/graph_qa.html
Retrieval Question Answering with Sources#
This notebook goes over how to do question-answering with sources over an Index. It does this by using the RetrievalQAWithSourcesChain, which looks up the documents from an Index.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings.cohere import CohereEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from langchain.vectorstores import Chroma
with open("../../state_of_the_union.txt") as f:
state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": f"{i}-pl"} for i in range(len(texts))])
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
from langchain.chains import RetrievalQAWithSourcesChain
from langchain import OpenAI
chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever())
chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)
{'answer': ' The president honored Justice Breyer for his service and mentioned his legacy of excellence.\n',
'sources': '31-pl'}
Chain Type#
You can easily specify different chain types to load and use in the RetrievalQAWithSourcesChain chain. For a more detailed walkthrough of these types, please see this notebook.
There are two ways to load different chain types. First, you can specify the chain type argument in the from_chain_type method. This allows you to pass in the name of the chain type you want to use. For example, in the example below we change the chain type to map_reduce.
chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="map_reduce", retriever=docsearch.as_retriever())
chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)
{'answer': ' The president said "Justice Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."\n',
'sources': '31-pl'}
The above way allows you to really simply change the chain_type, but it doesn't provide a ton of flexibility over the parameters of that chain type. If you want to control those parameters, you can load the chain directly (as you did in this notebook) and then pass it to the RetrievalQAWithSourcesChain chain with the combine_documents_chain parameter. For example:
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
qa_chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
qa = RetrievalQAWithSourcesChain(combine_documents_chain=qa_chain, retriever=docsearch.as_retriever())
qa({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)
{'answer': ' The president honored Justice Breyer for his service and mentioned his legacy of excellence.\n',
'sources': '31-pl'}
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa_with_sources.html
Chat Over Documents with Chat History#
This notebook goes over how to set up a chain to chat over documents with chat history using a ConversationalRetrievalChain. The only difference between this chain and the RetrievalQAChain is that this one allows you to pass in a chat history, which can be used to support follow-up questions.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import ConversationalRetrievalChain
Load in the documents. You can replace this with a loader for whatever type of data you want.
from langchain.document_loaders import TextLoader
loader = TextLoader("../../state_of_the_union.txt")
documents = loader.load()
If you had multiple loaders that you wanted to combine, you would do something like the outline below (a concrete sketch follows it):
# loaders = [....]
# docs = []
# for loader in loaders:
# docs.extend(loader.load())
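For instance, a concrete version of that outline might look like the following, reusing the TextLoader imported above; the second file path is a hypothetical placeholder:
loaders = [
    TextLoader("../../state_of_the_union.txt"),
    TextLoader("../../another_document.txt"),  # hypothetical second file, for illustration
]
docs = []
for loader in loaders:
    docs.extend(loader.load())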
We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them.
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents, embeddings)
Using embedded DuckDB without persistence: data will be transient
We can now create a memory object, which is necessary to track the inputs/outputs and hold a conversation.
from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
We now initialize the ConversationalRetrievalChain
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), memory=memory)
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query})
result["answer"]
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
query = "Did he mention who she suceeded"
result = qa({"question": query})
result['answer']
' Ketanji Brown Jackson succeeded Justice Stephen Breyer on the United States Supreme Court.'
Pass in chat history#
In the above example, we used a Memory object to track chat history. We can also just pass it in explicitly. In order to do this, we need to initialize a chain without any memory object.
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever())
Here’s an example of asking a question with no chat history
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result["answer"]
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
Here’s an example of asking a question with some chat history
chat_history = [(query, result["answer"])]
query = "Did he mention who she suceeded"
result = qa({"question": query, "chat_history": chat_history})
result['answer']
' Ketanji Brown Jackson succeeded Justice Stephen Breyer on the United States Supreme Court.'
Return Source Documents#
You can also easily return source documents from the ConversationalRetrievalChain. This is useful when you want to inspect which documents were returned.
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result['source_documents'][0]
Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../state_of_the_union.txt'})
ConversationalRetrievalChain with search_distance#
If you are using a vector store that supports filtering by search distance, you can add a threshold value parameter.
vectordbkwargs = {"search_distance": 0.9}
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history, "vectordbkwargs": vectordbkwargs})
ConversationalRetrievalChain with map_reduce#
We can also use different types of combine document chains with the ConversationalRetrievalChain chain.
from langchain.chains import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT
llm = OpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(llm, chain_type="map_reduce")
chain = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(),
question_generator=question_generator,
combine_docs_chain=doc_chain,
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
ConversationalRetrievalChain with Question Answering with sources#
You can also use this chain with the question answering with sources chain.
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
llm = OpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")
chain = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(),
question_generator=question_generator,
combine_docs_chain=doc_chain,
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \nSOURCES: ../../state_of_the_union.txt"
ConversationalRetrievalChain with streaming to stdout#
Output from the chain will be streamed to stdout token by token in this example.
from langchain.chains.llm import LLMChain
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT
from langchain.chains.question_answering import load_qa_chain
# Construct a ConversationalRetrievalChain with a streaming llm for combine docs
# and a separate, non-streaming llm for question generation
llm = OpenAI(temperature=0)
streaming_llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(streaming_llm, chain_type="stuff", prompt=QA_PROMPT)
qa = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(), combine_docs_chain=doc_chain, question_generator=question_generator)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
chat_history = [(query, result["answer"])]
query = "Did he mention who she suceeded"
result = qa({"question": query, "chat_history": chat_history})
Ketanji Brown Jackson succeeded Justice Stephen Breyer on the United States Supreme Court.
get_chat_history Function#
You can also specify a get_chat_history function, which can be used to format the chat_history string.
def get_chat_history(inputs) -> str:
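    # `inputs` is a list of (human_message, ai_message) tuples; format each turn on its own lines.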
res = []
for human, ai in inputs:
res.append(f"Human:{human}\nAI:{ai}")
return "\n".join(res)
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), get_chat_history=get_chat_history)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result['answer']
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
https://python.langchain.com/en/latest/modules/chains/index_examples/chat_vector_db.html
Question Answering with Sources#
This notebook walks through how to use LangChain for question answering with sources over a list of documents. It covers four different chain types: stuff, map_reduce, refine, and map-rerank. For a more in depth explanation of what these chain types are, see here.
Prepare Data#
First we prepare the data. For this example we do similarity search over a vector database, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do AFTER you fetch the documents).
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings.cohere import CohereEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from langchain.vectorstores import Chroma
from langchain.docstore.document import Document
from langchain.prompts import PromptTemplate
with open("../../state_of_the_union.txt") as f:
state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))])
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
query = "What did the president say about Justice Breyer"
docs = docsearch.similarity_search(query)
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.llms import OpenAI
Quickstart#
If you just want to get started as quickly as possible, this is the recommended way to do it:
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}
If you want more control and understanding over what is happening, please see the information below.
The stuff Chain#
This section shows the results of using the stuff Chain to do question answering with sources.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.
Respond in Italian.
QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER IN ITALIAN:"""
PROMPT = PromptTemplate(template=template, input_variables=["summaries", "question"])
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff", prompt=PROMPT)
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': '\nNon so cosa abbia detto il presidente riguardo a Justice Breyer.\nSOURCES: 30, 31, 33'}
The map_reduce Chain#
This section shows the results of using the map_reduce Chain to do question answering with sources.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}
Intermediate Steps
We can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_intermediate_steps variable.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': [' "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."',
' None',
' None',
' None'],
'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
question_prompt_template = """Use the following portion of a long document to see if any of the text is relevant to answer the question.
Return any relevant text in Italian.
{context}
Question: {question}
Relevant text, if any, in Italian:"""
QUESTION_PROMPT = PromptTemplate(
template=question_prompt_template, input_variables=["context", "question"]
)
combine_prompt_template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.
Respond in Italian.
QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER IN ITALIAN:"""
COMBINE_PROMPT = PromptTemplate(
template=combine_prompt_template, input_variables=["summaries", "question"]
)
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True, question_prompt=QUESTION_PROMPT, combine_prompt=COMBINE_PROMPT)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': ["\nStasera vorrei onorare qualcuno che ha dedicato la sua vita a servire questo paese: il giustizia Stephen Breyer - un veterano dell'esercito, uno studioso costituzionale e un giustizia in uscita della Corte Suprema degli Stati Uniti. Giustizia Breyer, grazie per il tuo servizio.",
' Non pertinente.',
' Non rilevante.',
" Non c'è testo pertinente."],
'output_text': ' Non conosco la risposta. SOURCES: 30, 31, 33, 20.'}
Batch Size
When using the map_reduce chain, one thing to keep in mind is the batch size you are using during the map step. If this is too high, it could cause rate limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies for LLMs with this parameter. Below is an example of doing so:
llm = OpenAI(batch_size=5, temperature=0)
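This batch-limited llm can then be dropped into the chain in place of the default one; a minimal sketch reusing the docs and query from above:
chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")
chain({"input_documents": docs, "question": query}, return_only_outputs=True)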
The refine Chain#
This section shows the results of using the refine Chain to do question answering with sources.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': "\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked him for his service and praised his career as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He noted Justice Breyer's reputation as a consensus builder and the broad range of support he has received from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also highlighted the importance of securing the border and fixing the immigration system in order to advance liberty and justice, and mentioned the new technology, joint patrols, dedicated immigration judges, and commitments to support partners in South and Central America that have been put in place. He also expressed his commitment to the LGBTQ+ community, noting the need for the bipartisan Equality Act and the importance of protecting transgender Americans from state laws targeting them. He also highlighted his commitment to bipartisanship, noting the 80 bipartisan bills he signed into law last year, and his plans to strengthen the Violence Against Women Act. Additionally, he announced that the Justice Department will name a chief prosecutor for pandemic fraud and his plan to lower the deficit by more than one trillion dollars in a"}
Intermediate Steps
We can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_intermediate_steps variable.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': ['\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service.',
'\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. \n\nSource: 31',
'\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. \n\nSource: 31, 33',
'\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. Additionally, he mentioned his plan to lower costs to give families a fair shot, lower the deficit, and go after criminals who stole billions in relief money meant for small businesses and millions of Americans. He also announced that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nSource: 20, 31, 33'],
'output_text': '\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. Additionally, he mentioned his plan to lower costs to give families a fair shot, lower the deficit, and go after criminals who stole billions in relief money meant for small businesses and millions of Americans. He also announced that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nSource: 20, 31, 33'}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
refine_template = (
"The original question is as follows: {question}\n"
"We have provided an existing answer, including sources: {existing_answer}\n"
"We have the opportunity to refine the existing answer"
"(only if needed) with some more context below.\n"
"------------\n"
"{context_str}\n"
"------------\n"
"Given the new context, refine the original answer to better "
"answer the question (in Italian)"
"If you do update it, please update the sources as well. "
"If the context isn't useful, return the original answer."
)
refine_prompt = PromptTemplate(
input_variables=["question", "existing_answer", "context_str"],
template=refine_template,
)
question_template = (
"Context information is below. \n"
"---------------------\n"
"{context_str}"
"\n---------------------\n"
"Given the context information and not prior knowledge, "
"answer the question in Italian: {question}\n"
)
question_prompt = PromptTemplate(
input_variables=["context_str", "question"], template=question_template
)
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True, question_prompt=question_prompt, refine_prompt=refine_prompt)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': ['\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha onorato la sua carriera.',
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per",
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per",
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per"],
'output_text': "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per"}
The map-rerank Chain#
This section shows the results of using the map-rerank Chain to do question answering with sources.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_rerank", metadata_keys=['source'], return_intermediate_steps=True)
query = "What did the president say about Justice Breyer"
result = chain({"input_documents": docs, "question": query}, return_only_outputs=True)
result["output_text"]
' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.'
result["intermediate_steps"]
[{'answer': ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.',
'score': '100'},
{'answer': ' This document does not answer the question', 'score': '0'},
{'answer': ' This document does not answer the question', 'score': '0'},
{'answer': ' This document does not answer the question', 'score': '0'}]
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
from langchain.output_parsers import RegexParser
output_parser = RegexParser(
regex=r"(.*?)\nScore: (.*)",
output_keys=["answer", "score"],
)
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
In addition to giving an answer, also return a score of how fully it answered the user's question. This should be in the following format:
Question: [question here]
Helpful Answer In Italian: [answer here]
Score: [score between 0 and 100]
Begin!
Context:
---------
{context}
---------
Question: {question}
Helpful Answer In Italian:"""
PROMPT = PromptTemplate(
template=prompt_template,
input_variables=["context", "question"],
output_parser=output_parser,
)
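To see what the parser extracts, you can call it directly on a sample completion; the string below is purely hypothetical, and the result shown is what the regex and output_keys above would be expected to produce:
# Hypothetical completion text, for illustration only.
output_parser.parse("La risposta è Roma.\nScore: 95")
# -> {'answer': 'La risposta è Roma.', 'score': '95'}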
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_rerank", metadata_keys=['source'], return_intermediate_steps=True, prompt=PROMPT)
query = "What did the president say about Justice Breyer"
result = chain({"input_documents": docs, "question": query}, return_only_outputs=True)
result
{'source': 30,
'intermediate_steps': [{'answer': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha onorato la sua carriera.',
'score': '100'},
{'answer': ' Il presidente non ha detto nulla sulla Giustizia Breyer.',
'score': '100'},
{'answer': ' Non so.', 'score': '0'},
{'answer': ' Il presidente non ha detto nulla sulla giustizia Breyer.',
'score': '100'}],
'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha onorato la sua carriera.'}
https://python.langchain.com/en/latest/modules/chains/index_examples/qa_with_sources.html
Analyze Document#
The AnalyzeDocumentChain takes in a single document, splits it up, and then runs it through a CombineDocumentsChain. This makes it useful as an end-to-end chain.
with open("../../state_of_the_union.txt") as f:
state_of_the_union = f.read()
Summarize#
Let’s take a look at it in action below, using it to summarize a long document.
from langchain import OpenAI
from langchain.chains.summarize import load_summarize_chain
llm = OpenAI(temperature=0)
summary_chain = load_summarize_chain(llm, chain_type="map_reduce")
from langchain.chains import AnalyzeDocumentChain
summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=summary_chain)
summarize_document_chain.run(state_of_the_union)
" In this speech, President Biden addresses the American people and the world, discussing the recent aggression of Russia's Vladimir Putin in Ukraine and the US response. He outlines economic sanctions and other measures taken to hold Putin accountable, and announces the US Department of Justice's task force to go after the crimes of Russian oligarchs. He also announces plans to fight inflation and lower costs for families, invest in American manufacturing, and provide military, economic, and humanitarian assistance to Ukraine. He calls for immigration reform, protecting the rights of women, and advancing the rights of LGBTQ+ Americans, and pays tribute to military families. He concludes with optimism for the future of America."
Question Answering#
Let’s take a look at this using a question answering chain.
from langchain.chains.question_answering import load_qa_chain
qa_chain = load_qa_chain(llm, chain_type="map_reduce")
qa_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_chain)
qa_document_chain.run(input_document=state_of_the_union, question="what did the president say about justice breyer?")
' The president thanked Justice Breyer for his service.'
https://python.langchain.com/en/latest/modules/chains/index_examples/analyze_document.html
Question Answering#
This notebook walks through how to use LangChain for question answering over a list of documents. It covers four different types of chains: stuff, map_reduce, refine, map_rerank. For a more in depth explanation of what these chain types are, see here.
Prepare Data#
First we prepare the data. For this example we do similarity search over a vector database, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do AFTER you fetch the documents).
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.docstore.document import Document
from langchain.prompts import PromptTemplate
from langchain.indexes.vectorstore import VectorstoreIndexCreator
with open("../../state_of_the_union.txt") as f:
state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))]).as_retriever()
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
query = "What did the president say about Justice Breyer"
docs = docsearch.get_relevant_documents(query)
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
Quickstart#
If you just want to get started as quickly as possible, this is the recommended way to do it:
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain.run(input_documents=docs, question=query)
' The president said that Justice Breyer has dedicated his life to serve the country and thanked him for his service.'
If you want more control and understanding over what is happening, please see the information below.
The stuff Chain#
This section shows the results of using the stuff Chain to do question answering.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' The president said that Justice Breyer has dedicated his life to serve the country and thanked him for his service.'}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
Answer in Italian:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff", prompt=PROMPT)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha ricevuto una vasta gamma di supporto.'}
The map_reduce Chain#
This section shows the results of using the map_reduce Chain to do question answering.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' The president said that Justice Breyer is an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court, and thanked him for his service.'}
Intermediate Steps
We can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_map_steps variable.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce", return_map_steps=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': [' "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."',
' A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.',
' None',
' None'],
'output_text': ' The president said that Justice Breyer is an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court, and thanked him for his service.'}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
question_prompt_template = """Use the following portion of a long document to see if any of the text is relevant to answer the question.
Return any relevant text translated into italian.
{context}
Question: {question}
Relevant text, if any, in Italian:"""
QUESTION_PROMPT = PromptTemplate(
template=question_prompt_template, input_variables=["context", "question"]
)
combine_prompt_template = """Given the following extracted parts of a long document and a question, create a final answer italian.
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
QUESTION: {question}
=========
{summaries}
=========
Answer in Italian:"""
COMBINE_PROMPT = PromptTemplate(
template=combine_prompt_template, input_variables=["summaries", "question"]
)
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce", return_map_steps=True, question_prompt=QUESTION_PROMPT, combine_prompt=COMBINE_PROMPT)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': ["\nStasera vorrei onorare qualcuno che ha dedicato la sua vita a servire questo paese: il giustizia Stephen Breyer - un veterano dell'esercito, uno studioso costituzionale e un giustizia in uscita della Corte Suprema degli Stati Uniti. Giustizia Breyer, grazie per il tuo servizio.",
'\nNessun testo pertinente.',
' Non ha detto nulla riguardo a Justice Breyer.',
" Non c'è testo pertinente."],
'output_text': ' Non ha detto nulla riguardo a Justice Breyer.'}
Batch Size
When using the map_reduce chain, one thing to keep in mind is the batch size you are using during the map step. If this is too high, it could cause rate limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies for LLMs with this parameter. Below is an example of doing so:
llm = OpenAI(batch_size=5, temperature=0)
The refine Chain#
This section shows the results of using the refine Chain to do question answering.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which he said would be the most sweeping investment to rebuild America in history and would help the country compete for the jobs of the 21st Century.'}
Intermediate Steps
We can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_refine_steps variable.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine", return_refine_steps=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': ['\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country and his legacy of excellence.',
'\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice.',
'\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans.',
'\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which is the most sweeping investment to rebuild America in history.'],
'output_text': '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which is the most sweeping investment to rebuild America in history.'}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
refine_prompt_template = (
"The original question is as follows: {question}\n"
"We have provided an existing answer: {existing_answer}\n"
"We have the opportunity to refine the existing answer"
"(only if needed) with some more context below.\n"
"------------\n"
"{context_str}\n"
"------------\n"
"Given the new context, refine the original answer to better "
"answer the question. "
"If the context isn't useful, return the original answer. Reply in Italian."
)
refine_prompt = PromptTemplate(
input_variables=["question", "existing_answer", "context_str"],
template=refine_prompt_template,
)
initial_qa_template = (
"Context information is below. \n"
"---------------------\n"
"{context_str}"
"\n---------------------\n"
"Given the context information and not prior knowledge, "
"answer the question: {question}\nYour answer should be in Italian.\n"
)
initial_qa_prompt = PromptTemplate(
input_variables=["context_str", "question"], template=initial_qa_template
)
chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine", return_refine_steps=True,
question_prompt=initial_qa_prompt, refine_prompt=refine_prompt)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': ['\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha reso omaggio al suo servizio.',
"\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione.",
"\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei.",
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei e per investire in America, educare gli americani, far crescere la forza lavoro e costruire l'economia dal"],
'output_text': "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei e per investire in America, educare gli americani, far crescere la forza lavoro e costruire l'economia dal"}
The map-rerank Chain#
This section shows results of using the map-rerank Chain to do question answering, scoring each document's answer and returning the highest-scoring one.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank", return_intermediate_steps=True)
query = "What did the president say about Justice Breyer"
results = chain({"input_documents": docs, "question": query}, return_only_outputs=True)
results["output_text"]
' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.'
results["intermediate_steps"]
[{'answer': ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.',
'score': '100'},
{'answer': ' This document does not answer the question', 'score': '0'},
{'answer': ' This document does not answer the question', 'score': '0'},
{'answer': ' This document does not answer the question', 'score': '0'}]
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
from langchain.output_parsers import RegexParser
output_parser = RegexParser(
regex=r"(.*?)\nScore: (.*)",
output_keys=["answer", "score"],
)
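As a quick illustration of what this parser does (a made-up response, not output from the notebook), it splits text of the form "answer\nScore: N" into the two keys:
output_parser.parse("Il presidente ha onorato Justice Breyer.\nScore: 95")
# -> {'answer': 'Il presidente ha onorato Justice Breyer.', 'score': '95'}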
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
In addition to giving an answer, also return a score of how fully it answered the user's question. This should be in the following format:
Question: [question here]
Helpful Answer In Italian: [answer here]
Score: [score between 0 and 100]
Begin!
Context:
---------
{context}
---------
Question: {question}
Helpful Answer In Italian:"""
PROMPT = PromptTemplate(
template=prompt_template,
input_variables=["context", "question"],
output_parser=output_parser,
)
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank", return_intermediate_steps=True, prompt=PROMPT)
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': [{'answer': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese.',
'score': '100'},
{'answer': ' Il presidente non ha detto nulla sulla Giustizia Breyer.',
'score': '100'},
{'answer': ' Non so.', 'score': '0'},
{'answer': ' Non so.', 'score': '0'}],
'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese.'}
Hypothetical Document Embeddings#
This notebook goes over how to use Hypothetical Document Embeddings (HyDE), as described in this paper.
At a high level, HyDE is an embedding technique that takes a query, generates a hypothetical answer document, embeds that generated document, and uses that embedding in place of the raw query embedding.
In order to use HyDE, we therefore need to provide a base embedding model, as well as an LLMChain that can be used to generate those documents. By default, the HyDE class comes with some default prompts to use (see the paper for more details on them), but we can also create our own.
from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import LLMChain, HypotheticalDocumentEmbedder
from langchain.prompts import PromptTemplate
base_embeddings = OpenAIEmbeddings()
llm = OpenAI()
# Load with `web_search` prompt
embeddings = HypotheticalDocumentEmbedder.from_llm(llm, base_embeddings, "web_search")
# Now we can use it as any embedding class!
result = embeddings.embed_query("Where is the Taj Mahal?")
Multiple generations#
We can also generate multiple documents and then combine the embeddings for those. By default, we combine them by taking the average. We can do this by configuring the LLM we use to generate documents to return multiple completions.
multi_llm = OpenAI(n=4, best_of=4)
embeddings = HypotheticalDocumentEmbedder.from_llm(multi_llm, base_embeddings, "web_search")
result = embeddings.embed_query("Where is the Taj Mahal?")
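Conceptually, combining by average amounts to the following (an illustrative sketch with made-up vectors, not the library's internal code):
import numpy as np
# Each hypothetical document is embedded separately, then the vectors are averaged element-wise.
doc_embeddings = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]]
combined = np.mean(np.array(doc_embeddings), axis=0).tolist()  # [0.4, 0.5]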
Using our own prompts#
Besides using preconfigured prompts, we can also easily construct our own prompts and use those in the LLMChain that is generating the documents. This can be useful if we know the domain our queries will be in, as we can condition the prompt to generate text more similar to that.
In the example below, let’s condition it to generate text about a state of the union address (because we will use that in the next example).
prompt_template = """Please answer the user's question about the most recent state of the union address
Question: {question}
Answer:"""
prompt = PromptTemplate(input_variables=["question"], template=prompt_template)
llm_chain = LLMChain(llm=llm, prompt=prompt)
embeddings = HypotheticalDocumentEmbedder(llm_chain=llm_chain, base_embeddings=base_embeddings)
result = embeddings.embed_query("What did the president say about Ketanji Brown Jackson")
Using HyDE#
Now that we have HyDE, we can use it as we would any other embedding class! Here is using it to find similar passages in the state of the union example.
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
with open("../../state_of_the_union.txt") as f:
state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
docsearch = Chroma.from_texts(texts, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
print(docs[0].page_content)
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Summarization#
This notebook walks through how to use LangChain for summarization over a list of documents. It covers three different chain types: stuff, map_reduce, and refine. For a more in-depth explanation of what these chain types are, see here.
Prepare Data#
First we prepare the data. For this example we create multiple documents from one long one, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do AFTER you fetch the documents).
from langchain import OpenAI, PromptTemplate, LLMChain
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
from langchain.prompts import PromptTemplate
llm = OpenAI(temperature=0)
text_splitter = CharacterTextSplitter()
with open("../../state_of_the_union.txt") as f:
state_of_the_union = f.read()
texts = text_splitter.split_text(state_of_the_union)
from langchain.docstore.document import Document
docs = [Document(page_content=t) for t in texts[:3]]
Quickstart#
If you just want to get started as quickly as possible, this is the recommended way to do it:
from langchain.chains.summarize import load_summarize_chain
chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(docs)
' In response to Russian aggression in Ukraine, the United States and its allies are taking action to hold Putin accountable, including economic sanctions, asset seizures, and military assistance. The US is also providing economic and humanitarian aid to Ukraine, and has passed the American Rescue Plan and the Bipartisan Infrastructure Law to help struggling families and create jobs. The US remains unified and determined to protect Ukraine and the free world.'
If you want more control and understanding over what is happening, please see the information below.
The stuff Chain#
This section shows results of using the stuff Chain to do summarization.
chain = load_summarize_chain(llm, chain_type="stuff")
chain.run(docs)
' In his speech, President Biden addressed the crisis in Ukraine, the American Rescue Plan, and the Bipartisan Infrastructure Law. He discussed the need to invest in America, educate Americans, and build the economy from the bottom up. He also announced the release of 60 million barrels of oil from reserves around the world, and the creation of a dedicated task force to go after the crimes of Russian oligarchs. He concluded by emphasizing the need to Buy American and use taxpayer dollars to rebuild America.'
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
prompt_template = """Write a concise summary of the following:
{text}
CONCISE SUMMARY IN ITALIAN:"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
chain = load_summarize_chain(llm, chain_type="stuff", prompt=PROMPT)
chain.run(docs)
"\n\nIn questa serata, il Presidente degli Stati Uniti ha annunciato una serie di misure per affrontare la crisi in Ucraina, causata dall'aggressione di Putin. Ha anche annunciato l'invio di aiuti economici, militari e umanitari all'Ucraina. Ha anche annunciato che gli Stati Uniti e i loro alleati stanno imponendo sanzioni economiche a Putin e stanno rilasciando 60 milioni di barili di petrolio dalle riserve di tutto il mondo. Inoltre, ha annunciato che il Dipartimento di Giustizia degli Stati Uniti sta creando una task force dedicata ai crimini degli oligarchi russi. Il Presidente ha anche annunciato l'approvazione della legge bipartitica sull'infrastruttura, che prevede investimenti per la ricostruzione dell'America. Questo porterà a creare posti"
The map_reduce Chain#
This section shows results of using the map_reduce Chain to do summarization.
chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(docs)
" In response to Russia's aggression in Ukraine, the United States and its allies have imposed economic sanctions and are taking other measures to hold Putin accountable. The US is also providing economic and military assistance to Ukraine, protecting NATO countries, and releasing oil from its Strategic Petroleum Reserve. President Biden and Vice President Harris have passed legislation to help struggling families and rebuild America's infrastructure."
Intermediate Steps
We can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_intermediate_steps variable.
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True)
chain({"input_documents": docs}, return_only_outputs=True)
{'map_steps': [" In response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains.",
' The United States and its European allies are taking action to punish Russia for its invasion of Ukraine, including seizing assets, closing off airspace, and providing economic and military assistance to Ukraine. The US is also mobilizing forces to protect NATO countries and has released 30 million barrels of oil from its Strategic Petroleum Reserve to help blunt gas prices. The world is uniting in support of Ukraine and democracy, and the US stands with its Ukrainian-American citizens.',
" President Biden and Vice President Harris ran for office with a new economic vision for America, and have since passed the American Rescue Plan and the Bipartisan Infrastructure Law to help struggling families and rebuild America's infrastructure. This includes creating jobs, modernizing roads, airports, ports, and waterways, replacing lead pipes, providing affordable high-speed internet, and investing in American products to support American jobs."],
'output_text': " In response to Russia's aggression in Ukraine, the United States and its allies have imposed economic sanctions and are taking other measures to hold Putin accountable. The US is also providing economic and military assistance to Ukraine, protecting NATO countries, and passing legislation to help struggling families and rebuild America's infrastructure. The world is uniting in support of Ukraine and democracy, and the US stands with its Ukrainian-American citizens."}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
prompt_template = """Write a concise summary of the following:
{text}
CONCISE SUMMARY IN ITALIAN:"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True, map_prompt=PROMPT, combine_prompt=PROMPT)
chain({"input_documents": docs}, return_only_outputs=True)
{'intermediate_steps': ["\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Gli Stati Uniti e i loro alleati stanno ora imponendo sanzioni economiche a Putin e stanno tagliando l'accesso della Russia alla tecnologia. Il Dipartimento di Giustizia degli Stati Uniti sta anche creando una task force dedicata per andare dopo i crimini degli oligarchi russi.",
"\n\nStiamo unendo le nostre forze con quelle dei nostri alleati europei per sequestrare yacht, appartamenti di lusso e jet privati di Putin. Abbiamo chiuso lo spazio aereo americano ai voli russi e stiamo fornendo più di un miliardo di dollari in assistenza all'Ucraina. Abbiamo anche mobilitato le nostre forze terrestri, aeree e navali per proteggere i paesi della NATO. Abbiamo anche rilasciato 60 milioni di barili di petrolio dalle riserve di tutto il mondo, di cui 30 milioni dalla nostra riserva strategica di petrolio. Stiamo affrontando una prova reale e ci vorrà del tempo, ma alla fine Putin non riuscirà a spegnere l'amore dei popoli per la libertà.",
"\n\nIl Presidente Biden ha lottato per passare l'American Rescue Plan per aiutare le persone che soffrivano a causa della pandemia. Il piano ha fornito sollievo economico immediato a milioni di americani, ha aiutato a mettere cibo sulla loro tavola, a mantenere un tetto sopra le loro teste e a ridurre il costo dell'assicurazione sanitaria. Il piano ha anche creato più di 6,5 milioni di nuovi posti di lavoro, il più alto numero di posti di lavoro creati in un anno nella storia degli Stati Uniti. Il Presidente Biden ha anche firmato la legge bipartitica sull'infrastruttura, la più ampia iniziativa di ricostruzione della storia degli Stati Uniti. Il piano prevede di modernizzare le strade, gli aeroporti, i porti e le vie navigabili in"],
'output_text': "\n\nIl Presidente Biden sta lavorando per aiutare le persone che soffrono a causa della pandemia attraverso l'American Rescue Plan e la legge bipartitica sull'infrastruttura. Gli Stati Uniti e i loro alleati stanno anche imponendo sanzioni economiche a Putin e tagliando l'accesso della Russia alla tecnologia. Stanno anche sequestrando yacht, appartamenti di lusso e jet privati di Putin e fornendo più di un miliardo di dollari in assistenza all'Ucraina. Alla fine, Putin non riuscirà a spegnere l'amore dei popoli per la libertà."}
The refine Chain#
This section shows results of using the refine Chain to do summarization.
chain = load_summarize_chain(llm, chain_type="refine")
chain.run(docs)
"\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. In addition, the U.S. has passed the American Rescue Plan to provide immediate economic relief for tens of millions of Americans, and the Bipartisan Infrastructure Law to rebuild America and create jobs. This investment will"
Intermediate Steps
We can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_intermediate_steps variable.
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True)
chain({"input_documents": docs}, return_only_outputs=True)
{'refine_steps': [" In response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains.",
"\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. Putin's war on Ukraine has left Russia weaker and the rest of the world stronger, with the world uniting in support of democracy and peace.",
"\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. In addition, the U.S. has passed the American Rescue Plan to provide immediate economic relief for tens of millions of Americans, and the Bipartisan Infrastructure Law to rebuild America and create jobs. This includes investing"],
'output_text': "\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. In addition, the U.S. has passed the American Rescue Plan to provide immediate economic relief for tens of millions of Americans, and the Bipartisan Infrastructure Law to rebuild America and create jobs. This includes investing"}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
prompt_template = """Write a concise summary of the following:
{text}
CONCISE SUMMARY IN ITALIAN:"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
refine_template = (
"Your job is to produce a final summary\n"
"We have provided an existing summary up to a certain point: {existing_answer}\n"
"We have the opportunity to refine the existing summary"
"(only if needed) with some more context below.\n"
"------------\n"
"{text}\n"
"------------\n"
"Given the new context, refine the original summary in Italian"
"If the context isn't useful, return the original summary."
)
refine_prompt = PromptTemplate(
input_variables=["existing_answer", "text"],
template=refine_template,
)
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True, question_prompt=PROMPT, refine_prompt=refine_prompt)
chain({"input_documents": docs}, return_only_outputs=True)
{'intermediate_steps': ["\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia e bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi.",
"\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia, bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale e chiudendo lo spazio aereo americano a tutti i voli russi. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi. Stiamo fornendo più di un miliardo di dollari in assistenza diretta all'Ucraina e fornendo assistenza militare,",
"\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia, bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale e chiudendo lo spazio aereo americano a tutti i voli russi. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi. Stiamo fornendo più di un miliardo di dollari in assistenza diretta all'Ucraina e fornendo assistenza militare."],
'output_text': "\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia, bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale e chiudendo lo spazio aereo americano a tutti i voli russi. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi. Stiamo fornendo più di un miliardo di dollari in assistenza diretta all'Ucraina e fornendo assistenza militare."}