If the fact is false, explain why.
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.
Checked Assertions:
"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
- It is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. False - The Greenland Sea is not an ocean, it is an arm of the Arctic Ocean.
- It is the smallest of the five oceans. False - The Greenland Sea is not an ocean, it is an arm of the Arctic Ocean.
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- The sea is named after the island of Greenland. True
- It is the Arctic Ocean's main outlet to the Atlantic. True
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Norwegian Sea. True
"""
Original Summary:
"""
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. It is the smallest of the five oceans and is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea.
"""
Using these checked assertions, rewrite the original summary to be completely true.
The output should have the same structure and formatting as the original summary.
Summary:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false.
If all of the assertions are true, return "True". If any of the assertions are false, return "False".
Here are some examples:
===
Checked Assertions: """
- The sky is red: False
- Water is made of lava: False
- The sun is a star: True
"""
Result: False
===
Checked Assertions: """
- The sky is blue: True
- Water is wet: True
- The sun is a star: True
"""
Result: True
===
Checked Assertions: """
- The sky is blue - True
- Water is made of lava- False
- The sun is a star - True
"""
Result: False
===
Checked Assertions:"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
- It is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. False - The Greenland Sea is not an ocean, it is an arm of the Arctic Ocean.
- It is the smallest of the five oceans. False - The Greenland Sea is not an ocean, it is an arm of the Arctic Ocean.
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- The sea is named after the island of Greenland. True
- It is the Arctic Ocean's main outlet to the Atlantic. True
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Norwegian Sea. True
"""
Result:
> Finished chain.
> Finished chain.
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea.
> Entering new SequentialChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
Given some text, extract a list of facts from the text.
Format your output as a bulleted list.
Text:
"""
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea.
"""
Facts:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.
Here is a bullet point list of facts:
"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland.
- It has an area of 465,000 square miles.
- It is an arm of the Arctic Ocean.
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs.
- It is named after the island of Greenland.
- It is the Arctic Ocean's main outlet to the Atlantic.
- It is often frozen over so navigation is limited.
- It is considered the northern branch of the Norwegian Sea.
"""
For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".
If the fact is false, explain why.
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.
Checked Assertions:
"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
- It is an arm of the Arctic Ocean. True
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- It is named after the island of Greenland. False - It is named after the country of Greenland.
- It is the Arctic Ocean's main outlet to the Atlantic. True
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Norwegian Sea. False - It is considered the northern branch of the Atlantic Ocean.
"""
Original Summary:
"""
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea.
"""
Using these checked assertions, rewrite the original summary to be completely true.
The output should have the same structure and formatting as the original summary.
Summary:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false.
If all of the assertions are true, return "True". If any of the assertions are false, return "False".
Here are some examples:
===
Checked Assertions: """
- The sky is red: False
- Water is made of lava: False
- The sun is a star: True
"""
Result: False
===
Checked Assertions: """
- The sky is blue: True
- Water is wet: True
- The sun is a star: True
"""
Result: True
===
Checked Assertions: """
- The sky is blue - True
- Water is made of lava- False
- The sun is a star - True
"""
Result: False
===
Checked Assertions:""" | https://langchain.readthedocs.io/en/latest/modules/chains/examples/llm_summarization_checker.html |
cb1ecfa06586-6 | """
Result: False
===
Checked Assertions:"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
- It is an arm of the Arctic Ocean. True
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- It is named after the island of Greenland. False - It is named after the country of Greenland.
- It is the Arctic Ocean's main outlet to the Atlantic. True
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Norwegian Sea. False - It is considered the northern branch of the Atlantic Ocean.
"""
Result:
> Finished chain.
> Finished chain.
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Atlantic Ocean.
> Entering new SequentialChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
Given some text, extract a list of facts from the text.
Format your output as a bulleted list.
Text:
"""
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Atlantic Ocean.
"""
Facts:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.
Here is a bullet point list of facts:
"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland.
- It has an area of 465,000 square miles.
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs.
- The sea is named after the country of Greenland.
- It is the Arctic Ocean's main outlet to the Atlantic.
- It is often frozen over so navigation is limited.
- It is considered the northern branch of the Atlantic Ocean.
"""
For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".
If the fact is false, explain why.
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.
Checked Assertions:
"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- The sea is named after the country of Greenland. True
- It is the Arctic Ocean's main outlet to the Atlantic. False - The Arctic Ocean's main outlet to the Atlantic is the Barents Sea.
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Atlantic Ocean. False - The Greenland Sea is considered part of the Arctic Ocean, not the Atlantic Ocean.
"""
Original Summary:
""" | https://langchain.readthedocs.io/en/latest/modules/chains/examples/llm_summarization_checker.html |
cb1ecfa06586-7 | """
Original Summary:
"""
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Atlantic Ocean.
"""
Using these checked assertions, rewrite the original summary to be completely true.
The output should have the same structure and formatting as the original summary.
Summary:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false.
If all of the assertions are true, return "True". If any of the assertions are false, return "False".
Here are some examples:
===
Checked Assertions: """
- The sky is red: False
- Water is made of lava: False
- The sun is a star: True
"""
Result: False
===
Checked Assertions: """
- The sky is blue: True
- Water is wet: True
- The sun is a star: True
"""
Result: True
===
Checked Assertions: """
- The sky is blue - True
- Water is made of lava- False
- The sun is a star - True
"""
Result: False
===
Checked Assertions:"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- The sea is named after the country of Greenland. True
- It is the Arctic Ocean's main outlet to the Atlantic. False - The Arctic Ocean's main outlet to the Atlantic is the Barents Sea.
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Atlantic Ocean. False - The Greenland Sea is considered part of the Arctic Ocean, not the Atlantic Ocean.
"""
Result:
> Finished chain.
> Finished chain.
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Barents Sea. It is often frozen over so navigation is limited, and is considered part of the Arctic Ocean.
> Finished chain.
"The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Barents Sea. It is often frozen over so navigation is limited, and is considered part of the Arctic Ocean."
from langchain.chains import LLMSummarizationCheckerChain
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
# max_checks bounds how many create/check/revise passes the chain will run.
checker_chain = LLMSummarizationCheckerChain.from_llm(llm, max_checks=3, verbose=True)
text = "Mammals can lay eggs, birds can lay eggs, therefore birds are mammals."
checker_chain.run(text)
> Entering new LLMSummarizationCheckerChain chain...
> Entering new SequentialChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
Given some text, extract a list of facts from the text.
Format your output as a bulleted list.
Text:
"""
Mammals can lay eggs, birds can lay eggs, therefore birds are mammals.
"""
Facts:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.
Here is a bullet point list of facts:
"""
- Mammals can lay eggs
- Birds can lay eggs
- Birds are mammals
""" | https://langchain.readthedocs.io/en/latest/modules/chains/examples/llm_summarization_checker.html |
cb1ecfa06586-8 | - Birds can lay eggs
- Birds are mammals
"""
For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".
If the fact is false, explain why.
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.
Checked Assertions:
"""
- Mammals can lay eggs: False. Mammals are not capable of laying eggs, as they give birth to live young.
- Birds can lay eggs: True. Birds are capable of laying eggs.
- Birds are mammals: False. Birds are not mammals, they are a class of their own.
"""
Original Summary:
"""
Mammals can lay eggs, birds can lay eggs, therefore birds are mammals.
"""
Using these checked assertions, rewrite the original summary to be completely true.
The output should have the same structure and formatting as the original summary.
Summary:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false.
If all of the assertions are true, return "True". If any of the assertions are false, return "False".
Here are some examples:
===
Checked Assertions: """
- The sky is red: False
- Water is made of lava: False
- The sun is a star: True
"""
Result: False
===
Checked Assertions: """
- The sky is blue: True
- Water is wet: True
- The sun is a star: True
"""
Result: True
===
Checked Assertions: """
- The sky is blue - True
- Water is made of lava- False
- The sun is a star - True
"""
Result: False
===
Checked Assertions:"""
- Mammals can lay eggs: False. Mammals are not capable of laying eggs, as they give birth to live young.
- Birds can lay eggs: True. Birds are capable of laying eggs.
- Birds are mammals: False. Birds are not mammals, they are a class of their own.
"""
Result:
> Finished chain.
> Finished chain.
Birds and mammals are both capable of laying eggs, however birds are not mammals, they are a class of their own.
> Entering new SequentialChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
Given some text, extract a list of facts from the text.
Format your output as a bulleted list.
Text:
"""
Birds and mammals are both capable of laying eggs, however birds are not mammals, they are a class of their own.
"""
Facts:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.
Here is a bullet point list of facts:
"""
- Birds and mammals are both capable of laying eggs.
- Birds are not mammals.
- Birds are a class of their own.
"""
For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".
If the fact is false, explain why.
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.
Checked Assertions:
"""
- Birds and mammals are both capable of laying eggs: False. Mammals give birth to live young, while birds lay eggs.
- Birds are not mammals: True. Birds are a class of their own, separate from mammals.
- Birds are a class of their own: True. Birds are a class of their own, separate from mammals.
"""
Original Summary:
"""
Birds and mammals are both capable of laying eggs, however birds are not mammals, they are a class of their own.
"""
Using these checked assertions, rewrite the original summary to be completely true.
The output should have the same structure and formatting as the original summary.
Summary:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false.
If all of the assertions are true, return "True". If any of the assertions are false, return "False".
Here are some examples:
===
Checked Assertions: """
- The sky is red: False
- Water is made of lava: False
- The sun is a star: True
"""
Result: False
===
Checked Assertions: """
- The sky is blue: True
- Water is wet: True
- The sun is a star: True
"""
Result: True
===
Checked Assertions: """
- The sky is blue - True
- Water is made of lava- False
- The sun is a star - True
"""
Result: False
===
Checked Assertions:"""
- Birds and mammals are both capable of laying eggs: False. Mammals give birth to live young, while birds lay eggs.
- Birds are not mammals: True. Birds are a class of their own, separate from mammals.
- Birds are a class of their own: True. Birds are a class of their own, separate from mammals.
"""
Result:
> Finished chain.
> Finished chain.
> Finished chain.
'Birds are not mammals, but they are a class of their own. They lay eggs, unlike mammals which give birth to live young.'
Moderation#
This notebook walks through examples of how to use a moderation chain, and several common ways of doing so. Moderation chains are useful for detecting text that could be hateful, violent, etc. This can be useful to apply not only to user input, but also to the output of a language model. Some API providers, like OpenAI, specifically prohibit you, or your end users, from generating some types of harmful content. To comply with this (and to generally prevent your application from being harmful), you may often want to append a moderation chain to any LLMChain, in order to make sure any output the LLM generates is not harmful.
If the content passed into the moderation chain is harmful, there is no single best way to handle it; the right choice probably depends on your application. Sometimes you may want to throw an error in the chain (and have your application handle that). Other times, you may want to return something to the user explaining that the text was harmful. There could even be other ways to handle it. We will cover all of these approaches in this notebook.
In this notebook, we will show:
How to run any piece of text through a moderation chain.
How to append a Moderation chain to an LLMChain.
from langchain.llms import OpenAI
from langchain.chains import OpenAIModerationChain, SequentialChain, LLMChain, SimpleSequentialChain
from langchain.prompts import PromptTemplate
How to use the moderation chain#
Here’s an example of using the moderation chain with default settings (will return a string explaining stuff was flagged).
moderation_chain = OpenAIModerationChain()
moderation_chain.run("This is okay")
'This is okay'
moderation_chain.run("I will kill you")
"Text was found that violates OpenAI's content policy."
Here’s an example of using the moderation chain to throw an error.
moderation_chain_error = OpenAIModerationChain(error=True)
moderation_chain_error.run("This is okay")
'This is okay'
moderation_chain_error.run("I will kill you")
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[7], line 1
----> 1 moderation_chain_error.run("I will kill you")
File ~/workplace/langchain/langchain/chains/base.py:138, in Chain.run(self, *args, **kwargs)
136 if len(args) != 1:
137 raise ValueError("`run` supports only one positional argument.")
--> 138 return self(args[0])[self.output_keys[0]]
140 if kwargs and not args:
141 return self(kwargs)[self.output_keys[0]]
File ~/workplace/langchain/langchain/chains/base.py:112, in Chain.__call__(self, inputs, return_only_outputs)
108 if self.verbose:
109 print(
110 f"\n\n\033[1m> Entering new {self.__class__.__name__} chain...\033[0m"
111 )
--> 112 outputs = self._call(inputs)
113 if self.verbose:
114 print(f"\n\033[1m> Finished {self.__class__.__name__} chain.\033[0m")
File ~/workplace/langchain/langchain/chains/moderation.py:81, in OpenAIModerationChain._call(self, inputs)
79 text = inputs[self.input_key]
80 results = self.client.create(text)
---> 81 output = self._moderate(text, results["results"][0])
82 return {self.output_key: output}
File ~/workplace/langchain/langchain/chains/moderation.py:73, in OpenAIModerationChain._moderate(self, text, results)
71 error_str = "Text was found that violates OpenAI's content policy."
72 if self.error:
---> 73 raise ValueError(error_str)
74 else:
75 return error_str
ValueError: Text was found that violates OpenAI's content policy.
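When the chain is configured to raise, application code can catch the error and decide what to do. A minimal sketch, assuming the moderation_chain_error defined above (the fallback message is illustrative):
try:
    result = moderation_chain_error.run("I will kill you")
except ValueError:
    # Flagged content: substitute an application-specific response.
    result = "Sorry, that request can't be processed."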
Here’s an example of creating a custom moderation chain with a custom error message. It requires some knowledge of OpenAI’s moderation endpoint results (see docs here).
class CustomModeration(OpenAIModerationChain):
    def _moderate(self, text: str, results: dict) -> str:
        if results["flagged"]:
            error_str = f"The following text was found that violates OpenAI's content policy: {text}"
            return error_str
        return text
custom_moderation = CustomModeration()
custom_moderation.run("This is okay")
'This is okay'
custom_moderation.run("I will kill you")
"The following text was found that violates OpenAI's content policy: I will kill you"
How to append a Moderation chain to an LLMChain#
To easily combine a moderation chain with an LLMChain, you can use the SequentialChain abstraction.
Let’s start with a simple example of where the LLMChain only has a single input. For this purpose, we will prompt the model so it says something harmful.
prompt = PromptTemplate(template="{text}", input_variables=["text"])
llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="text-davinci-002"), prompt=prompt)
text = """We are playing a game of repeat after me.
Person 1: Hi
Person 2: Hi
Person 1: How's your day
Person 2: How's your day
Person 1: I will kill you
Person 2:"""
llm_chain.run(text)
' I will kill you'
chain = SimpleSequentialChain(chains=[llm_chain, moderation_chain])
chain.run(text)
"Text was found that violates OpenAI's content policy."
Now let’s walk through an example of using it with an LLMChain which has multiple inputs (this is a bit more tricky because we can’t use the SimpleSequentialChain).
prompt = PromptTemplate(template="{setup}{new_input}Person2:", input_variables=["setup", "new_input"])
llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="text-davinci-002"), prompt=prompt)
setup = """We are playing a game of repeat after me.
Person 1: Hi
Person 2: Hi
Person 1: How's your day
Person 2: How's your day
Person 1:"""
new_input = "I will kill you"
inputs = {"setup": setup, "new_input": new_input}
llm_chain(inputs, return_only_outputs=True)
{'text': ' I will kill you'}
# Setting the input/output keys so it lines up
moderation_chain.input_key = "text"
moderation_chain.output_key = "sanitized_text"
chain = SequentialChain(chains=[llm_chain, moderation_chain], input_variables=["setup", "new_input"])
chain(inputs, return_only_outputs=True)
{'sanitized_text': "Text was found that violates OpenAI's content policy."}
Summarization#
This notebook walks through how to use LangChain for summarization over a list of documents. It covers three different chain types: stuff, map_reduce, and refine. For a more in-depth explanation of what these chain types are, see here.
Prepare Data#
First we prepare the data. For this example we create multiple documents from one long one, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do AFTER you fetch the documents).
from langchain import OpenAI, PromptTemplate, LLMChain
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
from langchain.prompts import PromptTemplate
llm = OpenAI(temperature=0)
text_splitter = CharacterTextSplitter()
with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
texts = text_splitter.split_text(state_of_the_union)
from langchain.docstore.document import Document
docs = [Document(page_content=t) for t in texts[:3]]
Quickstart#
If you just want to get started as quickly as possible, this is the recommended way to do it:
from langchain.chains.summarize import load_summarize_chain
chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(docs)
' In response to Russian aggression in Ukraine, the United States and its allies are taking action to hold Putin accountable, including economic sanctions, asset seizures, and military assistance. The US is also providing economic and humanitarian aid to Ukraine, and has passed the American Rescue Plan and the Bipartisan Infrastructure Law to help struggling families and create jobs. The US remains unified and determined to protect Ukraine and the free world.'
If you want more control and understanding over what is happening, please see the information below.
The stuff Chain#
This section shows the results of using the stuff Chain to do summarization.
chain = load_summarize_chain(llm, chain_type="stuff")
chain.run(docs)
' In his speech, President Biden addressed the crisis in Ukraine, the American Rescue Plan, and the Bipartisan Infrastructure Law. He discussed the need to invest in America, educate Americans, and build the economy from the bottom up. He also announced the release of 60 million barrels of oil from reserves around the world, and the creation of a dedicated task force to go after the crimes of Russian oligarchs. He concluded by emphasizing the need to Buy American and use taxpayer dollars to rebuild America.'
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
prompt_template = """Write a concise summary of the following:
{text}
CONCISE SUMMARY IN ITALIAN:"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
chain = load_summarize_chain(llm, chain_type="stuff", prompt=PROMPT)
chain.run(docs)
"\n\nIn questa serata, il Presidente degli Stati Uniti ha annunciato una serie di misure per affrontare la crisi in Ucraina, causata dall'aggressione di Putin. Ha anche annunciato l'invio di aiuti economici, militari e umanitari all'Ucraina. Ha anche annunciato che gli Stati Uniti e i loro alleati stanno imponendo sanzioni economiche a Putin e stanno rilasciando 60 milioni di barili di petrolio dalle riserve di tutto il mondo. Inoltre, ha annunciato che il Dipartimento di Giustizia degli Stati Uniti sta creando una task force dedicata ai crimini degli oligarchi russi. Il Presidente ha anche annunciato l'approvazione della legge bipartitica sull'infrastruttura, che prevede investimenti per la ricostruzione dell'America. Questo porterà a creare posti"
The map_reduce Chain#
This section shows the results of using the map_reduce Chain to do summarization.
chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(docs)
" In response to Russia's aggression in Ukraine, the United States and its allies have imposed economic sanctions and are taking other measures to hold Putin accountable. The US is also providing economic and military assistance to Ukraine, protecting NATO countries, and releasing oil from its Strategic Petroleum Reserve. President Biden and Vice President Harris have passed legislation to help struggling families and rebuild America's infrastructure."
Intermediate Steps
We can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_intermediate_steps variable.
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True)
chain({"input_documents": docs}, return_only_outputs=True)
{'map_steps': [" In response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains.",
' The United States and its European allies are taking action to punish Russia for its invasion of Ukraine, including seizing assets, closing off airspace, and providing economic and military assistance to Ukraine. The US is also mobilizing forces to protect NATO countries and has released 30 million barrels of oil from its Strategic Petroleum Reserve to help blunt gas prices. The world is uniting in support of Ukraine and democracy, and the US stands with its Ukrainian-American citizens.',
" President Biden and Vice President Harris ran for office with a new economic vision for America, and have since passed the American Rescue Plan and the Bipartisan Infrastructure Law to help struggling families and rebuild America's infrastructure. This includes creating jobs, modernizing roads, airports, ports, and waterways, replacing lead pipes, providing affordable high-speed internet, and investing in American products to support American jobs."],
'output_text': " In response to Russia's aggression in Ukraine, the United States and its allies have imposed economic sanctions and are taking other measures to hold Putin accountable. The US is also providing economic and military assistance to Ukraine, protecting NATO countries, and passing legislation to help struggling families and rebuild America's infrastructure. The world is uniting in support of Ukraine and democracy, and the US stands with its Ukrainian-American citizens."}
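Since the intermediate outputs come back under the map_steps key shown above, they can be inspected programmatically; a small sketch (the result variable name is illustrative):
result = chain({"input_documents": docs}, return_only_outputs=True)
for step in result["map_steps"]:
    print(step)  # one partial summary per mapped document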
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
prompt_template = """Write a concise summary of the following:
{text}
CONCISE SUMMARY IN ITALIAN:"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True, map_prompt=PROMPT, combine_prompt=PROMPT)
chain({"input_documents": docs}, return_only_outputs=True)
{'intermediate_steps': ["\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Gli Stati Uniti e i loro alleati stanno ora imponendo sanzioni economiche a Putin e stanno tagliando l'accesso della Russia alla tecnologia. Il Dipartimento di Giustizia degli Stati Uniti sta anche creando una task force dedicata per andare dopo i crimini degli oligarchi russi.",
"\n\nStiamo unendo le nostre forze con quelle dei nostri alleati europei per sequestrare yacht, appartamenti di lusso e jet privati di Putin. Abbiamo chiuso lo spazio aereo americano ai voli russi e stiamo fornendo più di un miliardo di dollari in assistenza all'Ucraina. Abbiamo anche mobilitato le nostre forze terrestri, aeree e navali per proteggere i paesi della NATO. Abbiamo anche rilasciato 60 milioni di barili di petrolio dalle riserve di tutto il mondo, di cui 30 milioni dalla nostra riserva strategica di petrolio. Stiamo affrontando una prova reale e ci vorrà del tempo, ma alla fine Putin non riuscirà a spegnere l'amore dei popoli per la libertà.", | https://langchain.readthedocs.io/en/latest/modules/chains/index_examples/summarize.html |
"\n\nIl Presidente Biden ha lottato per passare l'American Rescue Plan per aiutare le persone che soffrivano a causa della pandemia. Il piano ha fornito sollievo economico immediato a milioni di americani, ha aiutato a mettere cibo sulla loro tavola, a mantenere un tetto sopra le loro teste e a ridurre il costo dell'assicurazione sanitaria. Il piano ha anche creato più di 6,5 milioni di nuovi posti di lavoro, il più alto numero di posti di lavoro creati in un anno nella storia degli Stati Uniti. Il Presidente Biden ha anche firmato la legge bipartitica sull'infrastruttura, la più ampia iniziativa di ricostruzione della storia degli Stati Uniti. Il piano prevede di modernizzare le strade, gli aeroporti, i porti e le vie navigabili in"],
'output_text': "\n\nIl Presidente Biden sta lavorando per aiutare le persone che soffrono a causa della pandemia attraverso l'American Rescue Plan e la legge bipartitica sull'infrastruttura. Gli Stati Uniti e i loro alleati stanno anche imponendo sanzioni economiche a Putin e tagliando l'accesso della Russia alla tecnologia. Stanno anche sequestrando yacht, appartamenti di lusso e jet privati di Putin e fornendo più di un miliardo di dollari in assistenza all'Ucraina. Alla fine, Putin non riuscirà a spegnere l'amore dei popoli per la libertà."}
The custom MapReduceChain#
Multi input prompt
You can also use prompts with multiple inputs. In this example, we will use a MapReduce chain to answer a specific question about our code.
from langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
map_template_string = """Give the following python code information, generate a description that explains what the code does and also mention the time complexity.
Code:
{code}
Return the description in the following format:
name of the function: description of the function
"""
reduce_template_string = """Give the following following python fuctions name and their descritpion, answer the following question
{code_description}
Question: {question}
Answer:
"""
MAP_PROMPT = PromptTemplate(input_variables=["code"], template=map_template_string)
REDUCE_PROMPT = PromptTemplate(input_variables=["code_description", "question"], template=reduce_template_string)
llm = OpenAI()
map_llm_chain = LLMChain(llm=llm, prompt=MAP_PROMPT)
reduce_llm_chain = LLMChain(llm=llm, prompt=REDUCE_PROMPT)
generative_result_reduce_chain = StuffDocumentsChain(
    llm_chain=reduce_llm_chain,
    document_variable_name="code_description",
)
combine_documents = MapReduceDocumentsChain(
    llm_chain=map_llm_chain,
    combine_document_chain=generative_result_reduce_chain,
    document_variable_name="code",
)
# The splitter's separator matches the "##" delimiters between the functions
# below, so each function is mapped over individually.
map_reduce = MapReduceChain(
    combine_documents_chain=combine_documents,
    text_splitter=CharacterTextSplitter(separator="\n##\n", chunk_size=100, chunk_overlap=0),
)
code = """
def bubblesort(list):
for iter_num in range(len(list)-1,0,-1):
for idx in range(iter_num):
if list[idx]>list[idx+1]:
temp = list[idx]
list[idx] = list[idx+1]
list[idx+1] = temp
return list
##
def insertion_sort(InputList):
for i in range(1, len(InputList)):
j = i-1
nxt_element = InputList[i]
while (InputList[j] > nxt_element) and (j >= 0):
InputList[j+1] = InputList[j]
j=j-1
InputList[j+1] = nxt_element
return InputList
##
def shellSort(input_list):
gap = len(input_list) // 2
while gap > 0:
for i in range(gap, len(input_list)):
temp = input_list[i]
j = i
while j >= gap and input_list[j - gap] > temp: | https://langchain.readthedocs.io/en/latest/modules/chains/index_examples/summarize.html |
5f4c30e4fd45-3 | while j >= gap and input_list[j - gap] > temp:
input_list[j] = input_list[j - gap]
j = j-gap
input_list[j] = temp
gap = gap//2
return input_list
"""
map_reduce.run(input_text=code, question="Which function has a better time complexity?")
Created a chunk of size 247, which is longer than the specified 100
Created a chunk of size 267, which is longer than the specified 100
'shellSort has a better time complexity than both bubblesort and insertion_sort, as it has a time complexity of O(n^2), while the other two have a time complexity of O(n^2).'
The refine Chain#
This section shows the results of using the refine Chain to do summarization.
chain = load_summarize_chain(llm, chain_type="refine")
chain.run(docs)
"\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. In addition, the U.S. has passed the American Rescue Plan to provide immediate economic relief for tens of millions of Americans, and the Bipartisan Infrastructure Law to rebuild America and create jobs. This investment will"
Intermediate Steps
We can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_intermediate_steps variable.
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True)
chain({"input_documents": docs}, return_only_outputs=True)
{'refine_steps': [" In response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains.",
"\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. Putin's war on Ukraine has left Russia weaker and the rest of the world stronger, with the world uniting in support of democracy and peace.", | https://langchain.readthedocs.io/en/latest/modules/chains/index_examples/summarize.html |
"\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. In addition, the U.S. has passed the American Rescue Plan to provide immediate economic relief for tens of millions of Americans, and the Bipartisan Infrastructure Law to rebuild America and create jobs. This includes investing"],
'output_text': "\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. In addition, the U.S. has passed the American Rescue Plan to provide immediate economic relief for tens of millions of Americans, and the Bipartisan Infrastructure Law to rebuild America and create jobs. This includes investing"}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
prompt_template = """Write a concise summary of the following:
{text}
CONCISE SUMMARY IN ITALIAN:"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
refine_template = (
    "Your job is to produce a final summary\n"
    "We have provided an existing summary up to a certain point: {existing_answer}\n"
    "We have the opportunity to refine the existing summary "
    "(only if needed) with some more context below.\n"
    "------------\n"
    "{text}\n"
    "------------\n"
    "Given the new context, refine the original summary in Italian.\n"
    "If the context isn't useful, return the original summary."
)
refine_prompt = PromptTemplate(
    input_variables=["existing_answer", "text"],
    template=refine_template,
)
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True, question_prompt=PROMPT, refine_prompt=refine_prompt)
chain({"input_documents": docs}, return_only_outputs=True)
{'intermediate_steps': ["\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia e bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi.", | https://langchain.readthedocs.io/en/latest/modules/chains/index_examples/summarize.html |
"\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia, bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale e chiudendo lo spazio aereo americano a tutti i voli russi. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi. Stiamo fornendo più di un miliardo di dollari in assistenza diretta all'Ucraina e fornendo assistenza militare,",
"\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia, bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale e chiudendo lo spazio aereo americano a tutti i voli russi. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi. Stiamo fornendo più di un miliardo di dollari in assistenza diretta all'Ucraina e fornendo assistenza militare."],
'output_text': "\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia, bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale e chiudendo lo spazio aereo americano a tutti i voli russi. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi. Stiamo fornendo più di un miliardo di dollari in assistenza diretta all'Ucraina e fornendo assistenza militare."}
Question Answering with Sources#
This notebook walks through how to use LangChain for question answering with sources over a list of documents. It covers four different chain types: stuff, map_reduce, refine, and map-rerank. For a more in-depth explanation of what these chain types are, see here.
Prepare Data#
First we prepare the data. For this example we do similarity search over a vector database, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do AFTER you fetch the documents).
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings.cohere import CohereEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from langchain.vectorstores import Chroma
from langchain.docstore.document import Document
from langchain.prompts import PromptTemplate
with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
embeddings = OpenAIEmbeddings()
# Attach a "source" id to each chunk; the chain cites these ids in its SOURCES line.
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))])
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
query = "What did the president say about Justice Breyer"
docs = docsearch.similarity_search(query)
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.llms import OpenAI
Quickstart#
If you just want to get started as quickly as possible, this is the recommended way to do it:
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}
If you want more control and understanding over what is happening, please see the information below.
The stuff Chain#
This section shows the results of using the stuff Chain to do question answering with sources.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.
Respond in Italian.
QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER IN ITALIAN:"""
PROMPT = PromptTemplate(template=template, input_variables=["summaries", "question"])
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff", prompt=PROMPT)
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': '\nNon so cosa abbia detto il presidente riguardo a Justice Breyer.\nSOURCES: 30, 31, 33'}
The map_reduce Chain#
This section shows the results of using the map_reduce Chain to do question answering with sources.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}
Intermediate Steps
We can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_intermediate_steps variable.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': [' "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."',
' None',
' None',
' None'],
'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
question_prompt_template = """Use the following portion of a long document to see if any of the text is relevant to answer the question.
Return any relevant text in Italian.
{context}
Question: {question}
Relevant text, if any, in Italian:"""
QUESTION_PROMPT = PromptTemplate(
template=question_prompt_template, input_variables=["context", "question"]
)
combine_prompt_template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.
Respond in Italian.
QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER IN ITALIAN:"""
COMBINE_PROMPT = PromptTemplate(
template=combine_prompt_template, input_variables=["summaries", "question"]
)
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True, question_prompt=QUESTION_PROMPT, combine_prompt=COMBINE_PROMPT)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': ["\nStasera vorrei onorare qualcuno che ha dedicato la sua vita a servire questo paese: il giustizia Stephen Breyer - un veterano dell'esercito, uno studioso costituzionale e un giustizia in uscita della Corte Suprema degli Stati Uniti. Giustizia Breyer, grazie per il tuo servizio.",
' Non pertinente.',
' Non rilevante.',
" Non c'è testo pertinente."],
'output_text': ' Non conosco la risposta. SOURCES: 30, 31, 33, 20.'}
Batch Size
When using the map_reduce chain, one thing to keep in mind is the batch size you are using during the map step. If this is too high, it could cause rate limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies to LLMs with this parameter. Below is an example of doing so:
llm = OpenAI(batch_size=5, temperature=0)
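As a minimal sketch (assuming the docs and query defined earlier in this notebook), the rate-limited LLM is then passed to the chain loader like any other LLM:
# Sketch assuming `docs` and `query` from the cells above; with batch_size=5,
# at most 5 map-step prompts are sent to the API per batch.
chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")
chain({"input_documents": docs, "question": query}, return_only_outputs=True)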
The refine Chain#
This section shows results of using the refine Chain to do question answering with sources.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True) | https://langchain.readthedocs.io/en/latest/modules/chains/index_examples/qa_with_sources.html |
62e631e7e688-2 | chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': "\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked him for his service and praised his career as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He noted Justice Breyer's reputation as a consensus builder and the broad range of support he has received from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also highlighted the importance of securing the border and fixing the immigration system in order to advance liberty and justice, and mentioned the new technology, joint patrols, dedicated immigration judges, and commitments to support partners in South and Central America that have been put in place. He also expressed his commitment to the LGBTQ+ community, noting the need for the bipartisan Equality Act and the importance of protecting transgender Americans from state laws targeting them. He also highlighted his commitment to bipartisanship, noting the 80 bipartisan bills he signed into law last year, and his plans to strengthen the Violence Against Women Act. Additionally, he announced that the Justice Department will name a chief prosecutor for pandemic fraud and his plan to lower the deficit by more than one trillion dollars in a"}
Intermediate Steps
We can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_intermediate_steps variable.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': ['\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service.',
'\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. \n\nSource: 31',
'\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. \n\nSource: 31, 33',
'\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. Additionally, he mentioned his plan to lower costs to give families a fair shot, lower the deficit, and go after criminals who stole billions in relief money meant for small businesses and millions of Americans. He also announced that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nSource: 20, 31, 33'],
'output_text': '\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. Additionally, he mentioned his plan to lower costs to give families a fair shot, lower the deficit, and go after criminals who stole billions in relief money meant for small businesses and millions of Americans. He also announced that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nSource: 20, 31, 33'}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
refine_template = (
"The original question is as follows: {question}\n"
"We have provided an existing answer, including sources: {existing_answer}\n"
"We have the opportunity to refine the existing answer"
"(only if needed) with some more context below.\n"
"------------\n"
"{context_str}\n"
"------------\n"
"Given the new context, refine the original answer to better "
"answer the question (in Italian)"
"If you do update it, please update the sources as well. "
"If the context isn't useful, return the original answer."
)
refine_prompt = PromptTemplate(
input_variables=["question", "existing_answer", "context_str"],
template=refine_template,
)
question_template = (
"Context information is below. \n"
"---------------------\n"
"{context_str}"
"\n---------------------\n"
"Given the context information and not prior knowledge, "
"answer the question in Italian: {question}\n"
)
question_prompt = PromptTemplate(
input_variables=["context_str", "question"], template=question_template
)
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True, question_prompt=question_prompt, refine_prompt=refine_prompt)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': ['\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha onorato la sua carriera.',
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per",
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per",
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per"],
'output_text': "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per"}
The map-rerank Chain#
This section shows results of using the map-rerank Chain to do question answering with sources.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_rerank", metadata_keys=['source'], return_intermediate_steps=True)
query = "What did the president say about Justice Breyer"
result = chain({"input_documents": docs, "question": query}, return_only_outputs=True)
result["output_text"] | https://langchain.readthedocs.io/en/latest/modules/chains/index_examples/qa_with_sources.html |
62e631e7e688-5 | result["output_text"]
' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.'
result["intermediate_steps"]
[{'answer': ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.',
'score': '100'},
{'answer': ' This document does not answer the question', 'score': '0'},
{'answer': ' This document does not answer the question', 'score': '0'},
{'answer': ' This document does not answer the question', 'score': '0'}]
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
from langchain.output_parsers import RegexParser
output_parser = RegexParser(
regex=r"(.*?)\nScore: (.*)",
output_keys=["answer", "score"],
)
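To see what this parser extracts, here is a quick check against a hypothetical completion (the string below is illustrative, not real model output):
output_parser.parse("Il presidente ha ringraziato Justice Breyer.\nScore: 100")
{'answer': 'Il presidente ha ringraziato Justice Breyer.', 'score': '100'}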
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
In addition to giving an answer, also return a score of how fully it answered the user's question. This should be in the following format:
Question: [question here]
Helpful Answer In Italian: [answer here]
Score: [score between 0 and 100]
Begin!
Context:
---------
{context}
---------
Question: {question}
Helpful Answer In Italian:"""
PROMPT = PromptTemplate(
template=prompt_template,
input_variables=["context", "question"],
output_parser=output_parser,
)
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_rerank", metadata_keys=['source'], return_intermediate_steps=True, prompt=PROMPT)
query = "What did the president say about Justice Breyer"
result = chain({"input_documents": docs, "question": query}, return_only_outputs=True)
result
{'source': 30,
'intermediate_steps': [{'answer': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha onorato la sua carriera.',
'score': '100'},
{'answer': ' Il presidente non ha detto nulla sulla Giustizia Breyer.',
'score': '100'},
{'answer': ' Non so.', 'score': '0'},
{'answer': ' Il presidente non ha detto nulla sulla giustizia Breyer.',
'score': '100'}],
'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha onorato la sua carriera.'}
Question Answering#
This notebook walks through how to use LangChain for question answering over a list of documents. It covers four different types of chains: stuff, map_reduce, refine, map_rerank. For a more in-depth explanation of what these chain types are, see here.
Prepare Data#
First we prepare the data. For this example we do similarity search over a vector database, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do AFTER you fetch the documents).
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.docstore.document import Document
from langchain.prompts import PromptTemplate
from langchain.indexes.vectorstore import VectorstoreIndexCreator
with open("../../state_of_the_union.txt") as f:
state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))]).as_retriever()
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
query = "What did the president say about Justice Breyer"
docs = docsearch.get_relevant_documents(query)
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
Quickstart#
If you just want to get started as quickly as possible, this is the recommended way to do it:
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain.run(input_documents=docs, question=query)
' The president said that Justice Breyer has dedicated his life to serve the country and thanked him for his service.'
If you want more control and understanding over what is happening, please see the information below.
The stuff Chain#
This section shows results of using the stuff Chain to do question answering.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' The president said that Justice Breyer has dedicated his life to serve the country and thanked him for his service.'}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
Answer in Italian:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff", prompt=PROMPT)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha ricevuto una vasta gamma di supporto.'}
The map_reduce Chain#
This section shows results of using the map_reduce Chain to do question answering.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' The president said that Justice Breyer is an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court, and thanked him for his service.'}
Intermediate Steps
We can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_map_steps variable.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce", return_map_steps=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True) | https://langchain.readthedocs.io/en/latest/modules/chains/index_examples/question_answering.html |
b6d4ef9794e8-1 | chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': [' "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."',
' A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.',
' None',
' None'],
'output_text': ' The president said that Justice Breyer is an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court, and thanked him for his service.'}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
question_prompt_template = """Use the following portion of a long document to see if any of the text is relevant to answer the question.
Return any relevant text translated into Italian.
{context}
Question: {question}
Relevant text, if any, in Italian:"""
QUESTION_PROMPT = PromptTemplate(
template=question_prompt_template, input_variables=["context", "question"]
)
combine_prompt_template = """Given the following extracted parts of a long document and a question, create a final answer in Italian.
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
QUESTION: {question}
=========
{summaries}
=========
Answer in Italian:"""
COMBINE_PROMPT = PromptTemplate(
template=combine_prompt_template, input_variables=["summaries", "question"]
)
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce", return_map_steps=True, question_prompt=QUESTION_PROMPT, combine_prompt=COMBINE_PROMPT)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': ["\nStasera vorrei onorare qualcuno che ha dedicato la sua vita a servire questo paese: il giustizia Stephen Breyer - un veterano dell'esercito, uno studioso costituzionale e un giustizia in uscita della Corte Suprema degli Stati Uniti. Giustizia Breyer, grazie per il tuo servizio.",
'\nNessun testo pertinente.',
' Non ha detto nulla riguardo a Justice Breyer.',
" Non c'è testo pertinente."],
'output_text': ' Non ha detto nulla riguardo a Justice Breyer.'}
Batch Size
When using the map_reduce chain, one thing to keep in mind is the batch size you are using during the map step. If this is too high, it could cause rate limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies to LLMs with this parameter. Below is an example of doing so:
llm = OpenAI(batch_size=5, temperature=0)
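As a minimal sketch (assuming the docs and query from the Prepare Data step above), the rate-limited LLM is then wired into the chain like any other LLM:
# Sketch assuming `docs` and `query` from the cells above.
chain = load_qa_chain(llm, chain_type="map_reduce")
chain.run(input_documents=docs, question=query)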
The refine Chain#
This section shows results of using the refine Chain to do question answering.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which he said would be the most sweeping investment to rebuild America in history and would help the country compete for the jobs of the 21st Century.'}
Intermediate Steps
We can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_refine_steps variable.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine", return_refine_steps=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True) | https://langchain.readthedocs.io/en/latest/modules/chains/index_examples/question_answering.html |
b6d4ef9794e8-2 | chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': ['\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country and his legacy of excellence.',
'\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice.',
'\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans.',
'\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which is the most sweeping investment to rebuild America in history.'],
'output_text': '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which is the most sweeping investment to rebuild America in history.'}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
refine_prompt_template = (
"The original question is as follows: {question}\n"
"We have provided an existing answer: {existing_answer}\n"
"We have the opportunity to refine the existing answer"
"(only if needed) with some more context below.\n"
"------------\n"
"{context_str}\n"
"------------\n"
"Given the new context, refine the original answer to better "
"answer the question. "
"If the context isn't useful, return the original answer. Reply in Italian."
)
refine_prompt = PromptTemplate(
input_variables=["question", "existing_answer", "context_str"],
template=refine_prompt_template,
)
initial_qa_template = (
"Context information is below. \n"
"---------------------\n"
"{context_str}"
"\n---------------------\n"
"Given the context information and not prior knowledge, "
"answer the question: {question}\nYour answer should be in Italian.\n"
)
initial_qa_prompt = PromptTemplate(
input_variables=["context_str", "question"], template=initial_qa_template
)
chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine", return_refine_steps=True,
question_prompt=initial_qa_prompt, refine_prompt=refine_prompt)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': ['\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha reso omaggio al suo servizio.',
"\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione.", | https://langchain.readthedocs.io/en/latest/modules/chains/index_examples/question_answering.html |
b6d4ef9794e8-3 | "\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei.",
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei e per investire in America, educare gli americani, far crescere la forza lavoro e costruire l'economia dal"],
'output_text': "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei e per investire in America, educare gli americani, far crescere la forza lavoro e costruire l'economia dal"}
The map-rerank Chain#
This section shows results of using the map-rerank Chain to do question answering.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank", return_intermediate_steps=True)
query = "What did the president say about Justice Breyer"
results = chain({"input_documents": docs, "question": query}, return_only_outputs=True)
results["output_text"]
' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.'
results["intermediate_steps"]
[{'answer': ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.',
'score': '100'},
{'answer': ' This document does not answer the question', 'score': '0'},
{'answer': ' This document does not answer the question', 'score': '0'},
{'answer': ' This document does not answer the question', 'score': '0'}]
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
from langchain.output_parsers import RegexParser
output_parser = RegexParser(
regex=r"(.*?)\nScore: (.*)",
output_keys=["answer", "score"],
)
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
In addition to giving an answer, also return a score of how fully it answered the user's question. This should be in the following format:
Question: [question here]
Helpful Answer In Italian: [answer here]
Score: [score between 0 and 100]
Begin!
Context:
---------
{context}
---------
Question: {question}
Helpful Answer In Italian:"""
PROMPT = PromptTemplate(
template=prompt_template,
input_variables=["context", "question"],
output_parser=output_parser,
)
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank", return_intermediate_steps=True, prompt=PROMPT)
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': [{'answer': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese.',
'score': '100'},
{'answer': ' Il presidente non ha detto nulla sulla Giustizia Breyer.',
'score': '100'},
{'answer': ' Non so.', 'score': '0'},
{'answer': ' Non so.', 'score': '0'}],
'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese.'}
Hypothetical Document Embeddings#
This notebook goes over how to use Hypothetical Document Embeddings (HyDE), as described in this paper.
At a high level, HyDE is an embedding technique that takes a query, generates a hypothetical answer, embeds that generated document, and uses that embedding to retrieve real documents.
In order to use HyDE, we therefore need to provide a base embedding model, as well as an LLMChain that can be used to generate those documents. By default, the HyDE class comes with some default prompts to use (see the paper for more details on them), but we can also create our own.
from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import LLMChain, HypotheticalDocumentEmbedder
from langchain.prompts import PromptTemplate
base_embeddings = OpenAIEmbeddings()
llm = OpenAI()
# Load with `web_search` prompt
embeddings = HypotheticalDocumentEmbedder.from_llm(llm, base_embeddings, "web_search")
# Now we can use it as any embedding class!
result = embeddings.embed_query("Where is the Taj Mahal?")
Multiple generations#
We can also generate multiple documents and then combine the embeddings for those. By default, we combine those by taking the average. We can do this by changing the LLM we use to generate documents so that it returns multiple completions.
multi_llm = OpenAI(n=4, best_of=4)
embeddings = HypotheticalDocumentEmbedder.from_llm(multi_llm, base_embeddings, "web_search")
result = embeddings.embed_query("Where is the Taj Mahal?")
Using our own prompts#
Besides using preconfigured prompts, we can also easily construct our own prompts and use those in the LLMChain that is generating the documents. This can be useful if we know the domain our queries will be in, as we can condition the prompt to generate text more similar to that.
In the example below, let’s condition it to generate text about a state of the union address (because we will use that in the next example).
prompt_template = """Please answer the user's question about the most recent state of the union address
Question: {question}
Answer:"""
prompt = PromptTemplate(input_variables=["question"], template=prompt_template)
llm_chain = LLMChain(llm=llm, prompt=prompt)
embeddings = HypotheticalDocumentEmbedder(llm_chain=llm_chain, base_embeddings=base_embeddings)
result = embeddings.embed_query("What did the president say about Ketanji Brown Jackson")
Using HyDE#
Now that we have HyDE, we can use it as we would any other embedding class! Here we use it to find similar passages in the state of the union example.
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
with open("../../state_of_the_union.txt") as f:
state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
docsearch = Chroma.from_texts(texts, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
print(docs[0].page_content)
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Chat Over Documents with Chat History#
This notebook goes over how to set up a chain to chat over documents with chat history using a ConversationalRetrievalChain. The only difference between this chain and the RetrievalQAChain is that this one allows passing in a chat history, which can be used to ask follow-up questions.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import ConversationalRetrievalChain
Load in documents. You can replace this with a loader for whatever type of data you want.
from langchain.document_loaders import TextLoader
loader = TextLoader("../../state_of_the_union.txt")
documents = loader.load()
If you had multiple loaders that you wanted to combine, you can do something like the following (a concrete sketch follows the commented placeholder):
# loaders = [....]
# docs = []
# for loader in loaders:
# docs.extend(loader.load())
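For example, a minimal runnable sketch combining two TextLoaders (the second file path is hypothetical, purely for illustration):
loaders = [
    TextLoader("../../state_of_the_union.txt"),
    TextLoader("../../another_speech.txt"),  # hypothetical second file
]
docs = []
for loader in loaders:
    # Each loader returns a list of Documents; extend() flattens them into one list.
    docs.extend(loader.load())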
We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them.
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents, embeddings)
Using embedded DuckDB without persistence: data will be transient
We can now create a memory object, which is necessary to track the inputs/outputs and hold a conversation.
from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
We now initialize the ConversationalRetrievalChain
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), memory=memory)
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query})
result["answer"]
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
query = "Did he mention who she suceeded"
result = qa({"question": query})
result['answer']
' Ketanji Brown Jackson succeeded Justice Stephen Breyer on the United States Supreme Court.'
Pass in chat history#
In the above example, we used a Memory object to track chat history. We can also just pass it in explicitly. In order to do this, we need to initialize a chain without any memory object.
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever())
Here’s an example of asking a question with no chat history
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result["answer"]
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
Here’s an example of asking a question with some chat history
chat_history = [(query, result["answer"])]
query = "Did he mention who she suceeded"
result = qa({"question": query, "chat_history": chat_history})
result['answer']
' Ketanji Brown Jackson succeeded Justice Stephen Breyer on the United States Supreme Court.'
Using a different model for condensing the question#
This chain has two steps. First, it condenses the current question and the chat history into a standalone question. This is necessary to create a standalone vector to use for retrieval. After that, it does retrieval and then answers the question using retrieval augmented generation with a separate model. Part of the power of the declarative nature of LangChain is that you can easily use a separate language model for each call. This can be useful to use a cheaper and faster model for the simpler task of condensing the question, and then a more expensive model for answering the question. Here is an example of doing so.
from langchain.chat_models import ChatOpenAI
qa = ConversationalRetrievalChain.from_llm(
ChatOpenAI(temperature=0, model="gpt-4"),
vectorstore.as_retriever(),
condense_question_llm = ChatOpenAI(temperature=0, model='gpt-3.5-turbo'),
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
chat_history = [(query, result["answer"])]
query = "Did he mention who she suceeded"
result = qa({"question": query, "chat_history": chat_history})
Return Source Documents#
You can also easily return source documents from the ConversationalRetrievalChain. This is useful for when you want to inspect what documents were returned.
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result['source_documents'][0]
Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../state_of_the_union.txt'})
ConversationalRetrievalChain with search_distance#
If you are using a vector store that supports filtering by search distance, you can add a threshold value parameter.
vectordbkwargs = {"search_distance": 0.9}
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history, "vectordbkwargs": vectordbkwargs})
ConversationalRetrievalChain with map_reduce#
We can also use different types of combine document chains with the ConversationalRetrievalChain.
from langchain.chains import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT
llm = OpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(llm, chain_type="map_reduce")
chain = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(),
question_generator=question_generator,
combine_docs_chain=doc_chain,
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
ConversationalRetrievalChain with Question Answering with sources#
You can also use this chain with the question answering with sources chain.
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
llm = OpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")
chain = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(),
question_generator=question_generator,
combine_docs_chain=doc_chain,
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \nSOURCES: ../../state_of_the_union.txt"
ConversationalRetrievalChain with streaming to stdout#
Output from the chain will be streamed to stdout token by token in this example.
from langchain.chains.llm import LLMChain
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT
from langchain.chains.question_answering import load_qa_chain
# Construct a ConversationalRetrievalChain with a streaming llm for combine docs
# and a separate, non-streaming llm for question generation
llm = OpenAI(temperature=0)
streaming_llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(streaming_llm, chain_type="stuff", prompt=QA_PROMPT)
qa = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(), combine_docs_chain=doc_chain, question_generator=question_generator)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
chat_history = [(query, result["answer"])]
query = "Did he mention who she suceeded"
result = qa({"question": query, "chat_history": chat_history})
Ketanji Brown Jackson succeeded Justice Stephen Breyer on the United States Supreme Court.
get_chat_history Function#
You can also specify a get_chat_history function, which can be used to format the chat_history string.
def get_chat_history(inputs) -> str:
res = []
for human, ai in inputs:
res.append(f"Human:{human}\nAI:{ai}")
return "\n".join(res)
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), get_chat_history=get_chat_history)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result['answer']
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
Retrieval Question/Answering#
This example showcases question answering over an index.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
loader = TextLoader("../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever())
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
" The president said that she is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support, from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
Chain Type#
You can easily specify different chain types to load and use in the RetrievalQA chain. For a more detailed walkthrough of these types, please see this notebook.
There are two ways to load different chain types. First, you can specify the chain type argument in the from_chain_type method. This allows you to pass in the name of the chain type you want to use. For example, below we change the chain type to map_reduce.
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="map_reduce", retriever=docsearch.as_retriever())
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
" The president said that Judge Ketanji Brown Jackson is one of our nation's top legal minds, a former top litigator in private practice and a former federal public defender, from a family of public school educators and police officers, a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
The above way allows you to really simply change the chain_type, but it does not provide much flexibility over the parameters to that chain type. If you want to control those parameters, you can load the chain directly (as you did in this notebook) and then pass it to the RetrievalQA chain with the combine_documents_chain parameter. For example:
from langchain.chains.question_answering import load_qa_chain
qa_chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
qa = RetrievalQA(combine_documents_chain=qa_chain, retriever=docsearch.as_retriever())
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
Custom Prompts#
You can pass in custom prompts to do question answering. These prompts are the same prompts as you can pass into the base question answering chain.
from langchain.prompts import PromptTemplate
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
Answer in Italian:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
chain_type_kwargs = {"prompt": PROMPT}
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever(), chain_type_kwargs=chain_type_kwargs)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
" Il presidente ha detto che Ketanji Brown Jackson è una delle menti legali più importanti del paese, che continuerà l'eccellenza di Justice Breyer e che ha ricevuto un ampio sostegno, da Fraternal Order of Police a ex giudici nominati da democratici e repubblicani."
Return Source Documents#
Additionally, we can return the source documents used to answer the question by specifying an optional parameter when constructing the chain.
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever(), return_source_documents=True)
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"query": query})
result["result"]
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice and a former federal public defender from a family of public school educators and police officers, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
result["source_documents"]
[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), | https://langchain.readthedocs.io/en/latest/modules/chains/index_examples/vector_db_qa.html |
Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)]
Graph QA
Contents
Create the graph
Querying the graph
Save the graph
Graph QA#
This notebook goes over how to do question answering over a graph data structure.
Create the graph#
In this section, we construct an example graph. At the moment, this works best for small pieces of text.
from langchain.indexes import GraphIndexCreator
from langchain.llms import OpenAI
from langchain.document_loaders import TextLoader
index_creator = GraphIndexCreator(llm=OpenAI(temperature=0))
with open("../../state_of_the_union.txt") as f:
all_text = f.read()
We will use just a small snippet, because extracting the knowledge triplets is a bit intensive at the moment.
text = "\n".join(all_text.split("\n\n")[105:108])
text
'It won’t look like much, but if you stop and look closely, you’ll see a “Field of dreams,” the ground on which America’s future will be built. \nThis is where Intel, the American company that helped build Silicon Valley, is going to build its $20 billion semiconductor “mega site”. \nUp to eight state-of-the-art factories in one place. 10,000 new good-paying jobs. '
graph = index_creator.from_text(text)
We can inspect the created graph.
graph.get_triples()
[('Intel', '$20 billion semiconductor "mega site"', 'is going to build'),
('Intel', 'state-of-the-art factories', 'is building'),
('Intel', '10,000 new good-paying jobs', 'is creating'),
('Intel', 'Silicon Valley', 'is helping build'),
('Field of dreams',
"America's future will be built",
'is the ground on which')]
Querying the graph#
We can now use the graph QA chain to ask questions of the graph.
from langchain.chains import GraphQAChain
chain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=graph, verbose=True)
chain.run("what is Intel going to build?")
> Entering new GraphQAChain chain...
Entities Extracted:
Intel
Full Context:
Intel is going to build $20 billion semiconductor "mega site"
Intel is building state-of-the-art factories
Intel is creating 10,000 new good-paying jobs
Intel is helping build Silicon Valley
> Finished chain.
' Intel is going to build a $20 billion semiconductor "mega site" with state-of-the-art factories, creating 10,000 new good-paying jobs and helping to build Silicon Valley.'
Save the graph#
We can also save and load the graph.
graph.write_to_gml("graph.gml")
from langchain.indexes.graph import NetworkxEntityGraph
loaded_graph = NetworkxEntityGraph.from_gml("graph.gml")
loaded_graph.get_triples()
[('Intel', '$20 billion semiconductor "mega site"', 'is going to build'),
('Intel', 'state-of-the-art factories', 'is building'),
('Intel', '10,000 new good-paying jobs', 'is creating'),
('Intel', 'Silicon Valley', 'is helping build'),
('Field of dreams',
"America's future will be built",
'is the ground on which')]
Vector DB Text Generation
Contents
Prepare Data
Set Up Vector DB
Set Up LLM Chain with Custom Prompt
Generate Text
Vector DB Text Generation#
This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation.
Prepare Data#
First, we prepare the data. For this example, we fetch a documentation site that consists of markdown files hosted on Github and split them into small enough Documents.
from langchain.llms import OpenAI
from langchain.docstore.document import Document
import requests
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.prompts import PromptTemplate
import pathlib
import subprocess
import tempfile
def get_github_docs(repo_owner, repo_name):
with tempfile.TemporaryDirectory() as d:
subprocess.check_call(
f"git clone --depth 1 https://github.com/{repo_owner}/{repo_name}.git .",
cwd=d,
shell=True,
)
git_sha = (
subprocess.check_output("git rev-parse HEAD", shell=True, cwd=d)
.decode("utf-8")
.strip()
)
repo_path = pathlib.Path(d)
markdown_files = list(repo_path.glob("*/*.md")) + list(
repo_path.glob("*/*.mdx")
)
for markdown_file in markdown_files:
with open(markdown_file, "r") as f:
relative_path = markdown_file.relative_to(repo_path)
github_url = f"https://github.com/{repo_owner}/{repo_name}/blob/{git_sha}/{relative_path}"
yield Document(page_content=f.read(), metadata={"source": github_url})
sources = get_github_docs("yirenlu92", "deno-manual-forked")
source_chunks = []
splitter = CharacterTextSplitter(separator=" ", chunk_size=1024, chunk_overlap=0)
for source in sources:
for chunk in splitter.split_text(source.page_content):
source_chunks.append(Document(page_content=chunk, metadata=source.metadata))
Cloning into '.'...
Set Up Vector DB#
Now that we have the documentation content in chunks, let’s put all this information in a vector index for easy retrieval.
search_index = Chroma.from_documents(source_chunks, OpenAIEmbeddings())
Set Up LLM Chain with Custom Prompt#
Next, let’s set up a simple LLM chain but give it a custom prompt for blog post generation. Note that the custom prompt is parameterized and takes two inputs: context, which will be the documents fetched from the vector search, and topic, which is given by the user.
from langchain.chains import LLMChain
prompt_template = """Use the context below to write a 400 word blog post about the topic below:
Context: {context}
Topic: {topic}
Blog post:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "topic"]
)
llm = OpenAI(temperature=0)
chain = LLMChain(llm=llm, prompt=PROMPT)
Generate Text#
Finally, we write a function to apply our inputs to the chain. The function takes an input parameter topic. We find the documents in the vector index that correspond to that topic, and use them as additional context in our simple LLM chain.
def generate_blog_post(topic):
docs = search_index.similarity_search(topic, k=4)
inputs = [{"context": doc.page_content, "topic": topic} for doc in docs]
print(chain.apply(inputs))
generate_blog_post("environment variables") | https://langchain.readthedocs.io/en/latest/modules/chains/index_examples/vector_db_text_generation.html |
[{'text': '\n\nEnvironment variables are a great way to store and access sensitive information in your Deno applications. Deno offers built-in support for environment variables with `Deno.env`, and you can also use a `.env` file to store and access environment variables.\n\nUsing `Deno.env` is simple. It has getter and setter methods, so you can easily set and retrieve environment variables. For example, you can set the `FIREBASE_API_KEY` and `FIREBASE_AUTH_DOMAIN` environment variables like this:\n\n```ts\nDeno.env.set("FIREBASE_API_KEY", "examplekey123");\nDeno.env.set("FIREBASE_AUTH_DOMAIN", "firebasedomain.com");\n\nconsole.log(Deno.env.get("FIREBASE_API_KEY")); // examplekey123\nconsole.log(Deno.env.get("FIREBASE_AUTH_DOMAIN")); // firebasedomain.com\n```\n\nYou can also store environment variables in a `.env` file. This is a great'}, {'text': '\n\nEnvironment variables are a powerful tool for managing configuration settings in a program. They allow us to set values that can be used by the program, without having to hard-code them into the code. This makes it easier to change settings without having to modify the code.\n\nIn Deno, environment variables can be set in a few different ways. The most common way is to use the `VAR=value` syntax. This will set the environment variable `VAR` to the value `value`. This can be used to set any number of environment variables before running a command. For example, if we wanted to set the environment variable `VAR` to `hello` before running a Deno command, we could do so like this:\n\n```\nVAR=hello deno run main.ts\n```\n\nThis will set the environment variable `VAR` to `hello` before running the command. We can then access this variable in our code using the `Deno.env.get()` function. For example, if we ran the following command:\n\n```\nVAR=hello && deno eval "console.log(\'Deno: \' + Deno.env.get(\'VAR'}, {'text': '\n\nEnvironment variables are a powerful tool for developers, allowing them to store and access data without having to hard-code it into their applications. In Deno, you can access environment variables using the `Deno.env.get()` function.\n\nFor example, if you wanted to access the `HOME` environment variable, you could do so like this:\n\n```js\n// env.js\nDeno.env.get("HOME");\n```\n\nWhen running this code, you\'ll need to grant the Deno process access to environment variables. This can be done by passing the `--allow-env` flag to the `deno run` command. You can also specify which environment variables you want to grant access to, like this:\n\n```shell\n# Allow access to only the HOME env var\ndeno run --allow-env=HOME env.js\n```\n\nIt\'s important to note that environment variables are case insensitive on Windows, so Deno also matches them case insensitively (on Windows only).\n\nAnother thing to be aware of when using environment variables is subprocess permissions. Subprocesses are powerful and can access system resources regardless of the permissions you granted to the Den'}, {'text': '\n\nEnvironment variables are an important part of any programming language, and Deno is no exception. Deno is a secure JavaScript and TypeScript runtime built on the V8 JavaScript engine, and it recently added support for environment variables. This feature was added in Deno version 1.6.0, and it is now available for use in Deno applications.\n\nEnvironment variables are used to store information that can be used by programs.
They are typically used to store configuration information, such as the location of a database or the name of a user. In Deno, environment variables are stored in the `Deno.env` object. This object is similar to the `process.env` object in Node.js, and it allows you to access and set environment variables.\n\nThe `Deno.env` object is a read-only object, meaning that you cannot directly modify the environment variables. Instead, you must use the `Deno.env.set()` function to set environment variables. This function takes two arguments: the name of the environment variable and the value to set it to. For example, if you wanted to set the `FOO` environment variable to `bar`, you would use the following code:\n\n```'}]
Analyze Document
Contents
Summarize
Question Answering
Analyze Document#
The AnalyzeDocumentChain can be used as an end-to-end chain: it takes in a single document, splits it up, and then runs it through a CombineDocumentsChain.
with open("../../state_of_the_union.txt") as f:
state_of_the_union = f.read()
Summarize#
Let’s take a look at it in action below, using it to summarize a long document.
from langchain import OpenAI
from langchain.chains.summarize import load_summarize_chain
llm = OpenAI(temperature=0)
summary_chain = load_summarize_chain(llm, chain_type="map_reduce")
from langchain.chains import AnalyzeDocumentChain
summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=summary_chain)
summarize_document_chain.run(state_of_the_union)
" In this speech, President Biden addresses the American people and the world, discussing the recent aggression of Russia's Vladimir Putin in Ukraine and the US response. He outlines economic sanctions and other measures taken to hold Putin accountable, and announces the US Department of Justice's task force to go after the crimes of Russian oligarchs. He also announces plans to fight inflation and lower costs for families, invest in American manufacturing, and provide military, economic, and humanitarian assistance to Ukraine. He calls for immigration reform, protecting the rights of women, and advancing the rights of LGBTQ+ Americans, and pays tribute to military families. He concludes with optimism for the future of America."
Question Answering#
Let’s take a look at this using a question answering chain.
from langchain.chains.question_answering import load_qa_chain
qa_chain = load_qa_chain(llm, chain_type="map_reduce")
qa_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_chain)
qa_document_chain.run(input_document=state_of_the_union, question="what did the president say about justice breyer?")
' The president thanked Justice Breyer for his service.'
Retrieval Question Answering with Sources
Contents
Chain Type
Retrieval Question Answering with Sources#
This notebook goes over how to do question-answering with sources over an Index. It does this by using the RetrievalQAWithSourcesChain, which does the lookup of the documents from an Index.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings.cohere import CohereEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from langchain.vectorstores import Chroma
with open("../../state_of_the_union.txt") as f:
state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": f"{i}-pl"} for i in range(len(texts))])
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
from langchain.chains import RetrievalQAWithSourcesChain
from langchain import OpenAI
chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever())
chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)
{'answer': ' The president honored Justice Breyer for his service and mentioned his legacy of excellence.\n',
'sources': '31-pl'}
Chain Type#
You can easily specify different chain types to load and use in the RetrievalQAWithSourcesChain chain. For a more detailed walkthrough of these types, please see this notebook.
There are two ways to load different chain types. First, you can specify the chain type argument in the from_chain_type method. This allows you to pass in the name of the chain type you want to use. For example, in the below we change the chain type to map_reduce.
chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="map_reduce", retriever=docsearch.as_retriever())
chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)
{'answer': ' The president said "Justice Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."\n',
'sources': '31-pl'}
The above way allows you to really simply change the chain_type, but it does not provide much flexibility over the parameters of that chain type. If you want to control those parameters, you can load the chain directly (as you did in this notebook) and then pass that directly to the RetrievalQAWithSourcesChain with the combine_documents_chain parameter. For example:
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
qa_chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
qa = RetrievalQAWithSourcesChain(combine_documents_chain=qa_chain, retriever=docsearch.as_retriever())
qa({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)
{'answer': ' The president honored Justice Breyer for his service and mentioned his legacy of excellence.\n',
'sources': '31-pl'}
Callbacks
Contents
Callbacks
How to use callbacks
When do you want to use each of these?
Using an existing handler
Creating a custom handler
Async Callbacks
Using multiple handlers, passing in handlers
Tracing and Token Counting
Tracing
Token Counting
Callbacks#
LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.
You can subscribe to these events by using the callbacks argument available throughout the API. This argument is a list of handler objects, which are expected to implement one or more of the methods described in more detail below. There are two main callback mechanisms:
Constructor callbacks will be used for all calls made on that object, and will be scoped to that object only, i.e. if you pass a handler to the LLMChain constructor, it will not be used by the model attached to that chain.
Request callbacks will be used for that specific request only, and all sub-requests that it contains (eg. a call to an LLMChain triggers a call to a Model, which uses the same handler passed through). These are explicitly passed through.
Advanced: When you create a custom chain you can easily set it up to use the same callback system as all the built-in chains.
_call, _generate, _run, and equivalent async methods on Chains / LLMs / Chat Models / Agents / Tools now receive a 2nd argument called run_manager which is bound to that run, and contains the logging methods that can be used by that object (i.e. on_llm_new_token). This is useful when constructing a custom chain. See this guide for more information on how to create custom chains and use callbacks inside them.
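As a minimal sketch of this (assuming the Chain base class and CallbackManagerForChainRun import path of the version documented here; the chain itself is a made-up toy), a custom chain can emit callback events through the bound run_manager:
from typing import Any, Dict, List, Optional
from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.base import Chain

class EchoChain(Chain):
    """A toy custom chain that logs through the bound run manager."""

    @property
    def input_keys(self) -> List[str]:
        return ["text"]

    @property
    def output_keys(self) -> List[str]:
        return ["text"]

    def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        if run_manager:
            # Emits an on_text event to every handler bound to this run
            run_manager.on_text("EchoChain received: " + inputs["text"])
        return {"text": inputs["text"]}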
CallbackHandlers are objects that implement the CallbackHandler interface, which has a method for each event that can be subscribed to. The CallbackManager will call the appropriate method on each handler when the event is triggered.
class BaseCallbackHandler:
"""Base callback handler that can be used to handle callbacks from langchain."""
def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> Any:
"""Run when LLM starts running."""
def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:
"""Run on new LLM token. Only available when streaming is enabled."""
def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:
"""Run when LLM ends running."""
def on_llm_error(
self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
) -> Any:
"""Run when LLM errors."""
def on_chain_start(
self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
) -> Any:
"""Run when chain starts running."""
def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> Any:
"""Run when chain ends running."""
def on_chain_error(
self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
) -> Any:
"""Run when chain errors."""
def on_tool_start(
self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
) -> Any:
"""Run when tool starts running."""
def on_tool_end(self, output: str, **kwargs: Any) -> Any:
"""Run when tool ends running."""
def on_tool_error(
self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
) -> Any:
"""Run when tool errors."""
def on_text(self, text: str, **kwargs: Any) -> Any:
"""Run on arbitrary text."""
def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
"""Run on agent action."""
def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:
"""Run on agent end."""
How to use callbacks#
The callbacks argument is available on most objects throughout the API (Chains, Models, Tools, Agents, etc.) in two different places: | https://langchain.readthedocs.io/en/latest/modules/callbacks/getting_started.html |
Constructor callbacks: defined in the constructor, eg. LLMChain(callbacks=[handler]), which will be used for all calls made on that object, and will be scoped to that object only, eg. if you pass a handler to the LLMChain constructor, it will not be used by the Model attached to that chain.
Request callbacks: defined in the call()/run()/apply() methods used for issuing a request, eg. chain.call(inputs, callbacks=[handler]), which will be used for that specific request only, and all sub-requests that it contains (eg. a call to an LLMChain triggers a call to a Model, which uses the same handler passed in the call() method).
The verbose argument is available on most objects throughout the API (Chains, Models, Tools, Agents, etc.) as a constructor argument, eg. LLMChain(verbose=True), and it is equivalent to passing a ConsoleCallbackHandler to the callbacks argument of that object and all child objects. This is useful for debugging, as it will log all events to the console.
When do you want to use each of these?#
Constructor callbacks are most useful for use cases such as logging, monitoring, etc., which are not specific to a single request, but rather to the entire chain. For example, if you want to log all the requests made to an LLMChain, you would pass a handler to the constructor.
Request callbacks are most useful for use cases such as streaming, where you want to stream the output of a single request to a specific websocket connection, or other similar use cases. For example, if you want to stream the output of a single request to a websocket, you would pass a handler to the call() method.
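As an illustrative sketch of that streaming use case (the websocket object and its send_text method are assumptions standing in for whatever your web framework provides, not part of LangChain):
from langchain.callbacks.base import BaseCallbackHandler

class WebsocketStreamHandler(BaseCallbackHandler):
    """Sketch: forwards each streamed token to one open websocket."""

    def __init__(self, websocket):
        self.websocket = websocket

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Called once per token when streaming is enabled on the model
        self.websocket.send_text(token)

# Passed per request, so each connection receives only its own tokens:
# chain.run(inputs, callbacks=[WebsocketStreamHandler(websocket)])
Because the handler is passed at request time rather than in the constructor, two concurrent requests with two different websockets will not see each other's tokens.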
Using an existing handler#
LangChain provides a few built-in handlers that you can use to get started. These are available in the langchain/callbacks module. The most basic handler is the StdOutCallbackHandler, which simply logs all events to stdout. In the future we will add more default handlers to the library.
Note that when the verbose flag on the object is set to true, the StdOutCallbackHandler will be invoked even without being explicitly passed in.
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
handler = StdOutCallbackHandler()
llm = OpenAI()
prompt = PromptTemplate.from_template("1 + {number} = ")
# First, let's explicitly set the StdOutCallbackHandler in `callbacks`
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
chain.run(number=2)
# Then, let's use the `verbose` flag to achieve the same result
chain = LLMChain(llm=llm, prompt=prompt, verbose=True)
chain.run(number=2)
# Finally, let's use the request `callbacks` to achieve the same result
chain = LLMChain(llm=llm, prompt=prompt)
chain.run(number=2, callbacks=[handler])
> Entering new LLMChain chain...
Prompt after formatting:
1 + 2 =
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
1 + 2 =
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
1 + 2 =
> Finished chain.
'\n\n3'
Creating a custom handler#
You can create a custom handler to set on the object as well. In the example below, we’ll implement streaming with a custom handler.
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage
class MyCustomHandler(BaseCallbackHandler):
def on_llm_new_token(self, token: str, **kwargs) -> None:
print(f"My custom handler, token: {token}")
# To enable streaming, we pass in `streaming=True` to the ChatModel constructor
# Additionally, we pass in a list with our custom handler
chat = ChatOpenAI(max_tokens=25, streaming=True, callbacks=[MyCustomHandler()])
chat([HumanMessage(content="Tell me a joke")])
My custom handler, token:
My custom handler, token: Why
My custom handler, token: did
My custom handler, token: the
My custom handler, token: tomato
My custom handler, token: turn
My custom handler, token: red
My custom handler, token: ?
My custom handler, token: Because
My custom handler, token: it | https://langchain.readthedocs.io/en/latest/modules/callbacks/getting_started.html |
My custom handler, token: saw
My custom handler, token: the
My custom handler, token: salad
My custom handler, token: dressing
My custom handler, token: !
My custom handler, token:
AIMessage(content='Why did the tomato turn red? Because it saw the salad dressing!', additional_kwargs={})
Async Callbacks#
If you are planning to use the async API, it is recommended to use AsyncCallbackHandler to avoid blocking the runloop.
Advanced: if you use a sync CallbackHandler while using an async method to run your llm/chain/tool/agent, it will still work. However, under the hood, it will be called with run_in_executor, which can cause issues if your CallbackHandler is not thread-safe.
import asyncio
from typing import Any, Dict, List
from langchain.schema import LLMResult
from langchain.callbacks.base import AsyncCallbackHandler
class MyCustomSyncHandler(BaseCallbackHandler):
def on_llm_new_token(self, token: str, **kwargs) -> None:
print(f"Sync handler being called in a `thread_pool_executor`: token: {token}")
class MyCustomAsyncHandler(AsyncCallbackHandler):
"""Async callback handler that can be used to handle callbacks from langchain."""
async def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> None:
"""Run when chain starts running."""
print("zzzz....")
await asyncio.sleep(0.3)
class_name = serialized["name"]
print("Hi! I just woke up. Your llm is starting")
async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
"""Run when chain ends running."""
print("zzzz....")
await asyncio.sleep(0.3)
print("Hi! I just woke up. Your llm is ending")
# To enable streaming, we pass in `streaming=True` to the ChatModel constructor
# Additionally, we pass in a list with our custom handler
chat = ChatOpenAI(max_tokens=25, streaming=True, callbacks=[MyCustomSyncHandler(), MyCustomAsyncHandler()])
await chat.agenerate([[HumanMessage(content="Tell me a joke")]])
zzzz....
Hi! I just woke up. Your llm is starting
Sync handler being called in a `thread_pool_executor`: token:
Sync handler being called in a `thread_pool_executor`: token: Why
Sync handler being called in a `thread_pool_executor`: token: don
Sync handler being called in a `thread_pool_executor`: token: 't
Sync handler being called in a `thread_pool_executor`: token: scientists
Sync handler being called in a `thread_pool_executor`: token: trust
Sync handler being called in a `thread_pool_executor`: token: atoms
Sync handler being called in a `thread_pool_executor`: token: ?
Sync handler being called in a `thread_pool_executor`: token: Because
Sync handler being called in a `thread_pool_executor`: token: they
Sync handler being called in a `thread_pool_executor`: token: make
Sync handler being called in a `thread_pool_executor`: token: up
Sync handler being called in a `thread_pool_executor`: token: everything
Sync handler being called in a `thread_pool_executor`: token: !
Sync handler being called in a `thread_pool_executor`: token:
zzzz....
Hi! I just woke up. Your llm is ending
LLMResult(generations=[[ChatGeneration(text="Why don't scientists trust atoms?\n\nBecause they make up everything!", generation_info=None, message=AIMessage(content="Why don't scientists trust atoms?\n\nBecause they make up everything!", additional_kwargs={}))]], llm_output={'token_usage': {}, 'model_name': 'gpt-3.5-turbo'})
Using multiple handlers, passing in handlers#
In the previous examples, we passed in callback handlers upon creation of an object by using callbacks=. In this case, the callbacks will be scoped to that particular object. | https://langchain.readthedocs.io/en/latest/modules/callbacks/getting_started.html |
However, in many cases, it is advantageous to pass in handlers instead when running the object. When we pass through CallbackHandlers using the callbacks keyword arg when executing a run, those callbacks will be issued by all nested objects involved in the execution. For example, when a handler is passed through to an Agent, it will be used for all callbacks related to the agent and all the objects involved in the agent’s execution, in this case, the Tools, LLMChain, and LLM.
This prevents us from having to manually attach the handlers to each individual nested object.
from typing import Dict, Union, Any, List
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import AgentAction
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import tracing_enabled
from langchain.llms import OpenAI
# First, define custom callback handler implementations
class MyCustomHandlerOne(BaseCallbackHandler):
def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> Any:
print(f"on_llm_start {serialized['name']}")
def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:
print(f"on_new_token {token}")
def on_llm_error(
self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
) -> Any:
"""Run when LLM errors."""
def on_chain_start(
self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
) -> Any:
print(f"on_chain_start {serialized['name']}")
def on_tool_start(
self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
) -> Any:
print(f"on_tool_start {serialized['name']}")
def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
print(f"on_agent_action {action}")
class MyCustomHandlerTwo(BaseCallbackHandler):
def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> Any:
print(f"on_llm_start (I'm the second handler!!) {serialized['name']}")
# Instantiate the handlers
handler1 = MyCustomHandlerOne()
handler2 = MyCustomHandlerTwo()
# Setup the agent. Only the `llm` will issue callbacks for handler2
llm = OpenAI(temperature=0, streaming=True, callbacks=[handler2])
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(
tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
# Callbacks for handler1 will be issued by every object involved in the
# Agent execution (llm, llmchain, tool, agent executor)
agent.run("What is 2 raised to the 0.235 power?", callbacks=[handler1])
on_chain_start AgentExecutor
on_chain_start LLMChain
on_llm_start OpenAI
on_llm_start (I'm the second handler!!) OpenAI
on_new_token I
on_new_token need
on_new_token to
on_new_token use
on_new_token a
on_new_token calculator
on_new_token to
on_new_token solve
on_new_token this
on_new_token .
on_new_token
Action
on_new_token :
on_new_token Calculator
on_new_token
Action
on_new_token Input
on_new_token :
on_new_token 2
on_new_token ^
on_new_token 0
on_new_token .
on_new_token 235
on_new_token
on_agent_action AgentAction(tool='Calculator', tool_input='2^0.235', log=' I need to use a calculator to solve this.\nAction: Calculator\nAction Input: 2^0.235')
on_tool_start Calculator
on_chain_start LLMMathChain
on_chain_start LLMChain
on_llm_start OpenAI
on_llm_start (I'm the second handler!!) OpenAI
on_new_token
on_new_token ```text
on_new_token
on_new_token 2
on_new_token **
on_new_token 0
on_new_token .
on_new_token 235
on_new_token
on_new_token ```
on_new_token ...
on_new_token num
on_new_token expr
on_new_token . | https://langchain.readthedocs.io/en/latest/modules/callbacks/getting_started.html |
on_new_token evaluate
on_new_token ("
on_new_token 2
on_new_token **
on_new_token 0
on_new_token .
on_new_token 235
on_new_token ")
on_new_token ...
on_new_token
on_new_token
on_chain_start LLMChain
on_llm_start OpenAI
on_llm_start (I'm the second handler!!) OpenAI
on_new_token I
on_new_token now
on_new_token know
on_new_token the
on_new_token final
on_new_token answer
on_new_token .
on_new_token
Final
on_new_token Answer
on_new_token :
on_new_token 1
on_new_token .
on_new_token 17
on_new_token 690
on_new_token 67
on_new_token 372
on_new_token 187
on_new_token 674
on_new_token
'1.1769067372187674'
Tracing and Token Counting#
Tracing and token counting are two capabilities we provide that are built on our callbacks mechanism.
Tracing#
There are two recommended ways to trace your LangChains:
Setting the LANGCHAIN_TRACING environment variable to "true".
Using a context manager with tracing_enabled() to trace a particular block of code.
Note that if the environment variable is set, all code will be traced, regardless of whether or not it is within the context manager.
import os
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import tracing_enabled
from langchain.llms import OpenAI
# To run the code, make sure to set OPENAI_API_KEY and SERPAPI_API_KEY
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math", "serpapi"], llm=llm)
agent = initialize_agent(
tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
questions = [
"Who won the US Open men's final in 2019? What is his age raised to the 0.334 power?",
"Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?",
"Who won the most recent formula 1 grand prix? What is their age raised to the 0.23 power?",
"Who won the US Open women's final in 2019? What is her age raised to the 0.34 power?",
"Who is Beyonce's husband? What is his age raised to the 0.19 power?",
]
os.environ["LANGCHAIN_TRACING"] = "true"
# Both of the agent runs will be traced because the environment variable is set
agent.run(questions[0])
with tracing_enabled() as session:
assert session
agent.run(questions[1])
> Entering new AgentExecutor chain...
I need to find out who won the US Open men's final in 2019 and then calculate his age raised to the 0.334 power.
Action: Search
Action Input: "US Open men's final 2019 winner"
Observation: Rafael Nadal defeated Daniil Medvedev in the final, 7–5, 6–3, 5–7, 4–6, 6–4 to win the men's singles tennis title at the 2019 US Open. It was his fourth US ...
Thought: I need to find out the age of the winner
Action: Search
Action Input: "Rafael Nadal age"
Observation: 36 years
Thought: I need to calculate the age raised to the 0.334 power
Action: Calculator
Action Input: 36^0.334
Observation: Answer: 3.3098250249682484
Thought: I now know the final answer
Final Answer: Rafael Nadal, aged 36, won the US Open men's final in 2019 and his age raised to the 0.334 power is 3.3098250249682484.
> Finished chain.
> Entering new AgentExecutor chain...
I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.
Action: Search
Action Input: "Olivia Wilde boyfriend" | https://langchain.readthedocs.io/en/latest/modules/callbacks/getting_started.html |
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought: I need to find out Harry Styles' age.
Action: Search
Action Input: "Harry Styles age"
Observation: 29 years
Thought: I need to calculate 29 raised to the 0.23 power.
Action: Calculator
Action Input: 29^0.23
Observation: Answer: 2.169459462491557
Thought: I now know the final answer.
Final Answer: Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557.
> Finished chain.
# Now, we unset the environment variable and use a context manager.
if "LANGCHAIN_TRACING" in os.environ:
del os.environ["LANGCHAIN_TRACING"]
# here, we are writing traces to "my_test_session"
with tracing_enabled("my_test_session") as session:
assert session
agent.run(questions[0]) # this should be traced
agent.run(questions[1]) # this should not be traced
> Entering new AgentExecutor chain...
I need to find out who won the US Open men's final in 2019 and then calculate his age raised to the 0.334 power.
Action: Search
Action Input: "US Open men's final 2019 winner"
Observation: Rafael Nadal defeated Daniil Medvedev in the final, 7–5, 6–3, 5–7, 4–6, 6–4 to win the men's singles tennis title at the 2019 US Open. It was his fourth US ...
Thought: I need to find out the age of the winner
Action: Search
Action Input: "Rafael Nadal age"
Observation: 36 years
Thought: I need to calculate the age raised to the 0.334 power
Action: Calculator
Action Input: 36^0.334
Observation: Answer: 3.3098250249682484
Thought: I now know the final answer
Final Answer: Rafael Nadal, aged 36, won the US Open men's final in 2019 and his age raised to the 0.334 power is 3.3098250249682484.
> Finished chain.
> Entering new AgentExecutor chain...
I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.
Action: Search
Action Input: "Olivia Wilde boyfriend"
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought: I need to find out Harry Styles' age.
Action: Search
Action Input: "Harry Styles age"
Observation: 29 years
Thought: I need to calculate 29 raised to the 0.23 power.
Action: Calculator
Action Input: 29^0.23
Observation: Answer: 2.169459462491557
Thought: I now know the final answer.
Final Answer: Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557.
> Finished chain.
"Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557."
# The context manager is concurrency safe:
if "LANGCHAIN_TRACING" in os.environ:
del os.environ["LANGCHAIN_TRACING"]
# start a background task
task = asyncio.create_task(agent.arun(questions[0])) # this should not be traced
with tracing_enabled() as session:
assert session
tasks = [agent.arun(q) for q in questions[1:3]] # these should be traced
await asyncio.gather(*tasks)
await task
> Entering new AgentExecutor chain...
> Entering new AgentExecutor chain...
> Entering new AgentExecutor chain... | https://langchain.readthedocs.io/en/latest/modules/callbacks/getting_started.html |
3e7ded4185d8-6 | > Entering new AgentExecutor chain...
> Entering new AgentExecutor chain...
I need to find out who won the grand prix and then calculate their age raised to the 0.23 power.
Action: Search
Action Input: "Formula 1 Grand Prix Winner" I need to find out who won the US Open men's final in 2019 and then calculate his age raised to the 0.334 power.
Action: Search
Action Input: "US Open men's final 2019 winner"Rafael Nadal defeated Daniil Medvedev in the final, 7–5, 6–3, 5–7, 4–6, 6–4 to win the men's singles tennis title at the 2019 US Open. It was his fourth US ... I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.
Action: Search
Action Input: "Olivia Wilde boyfriend"Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.Lewis Hamilton has won 103 Grands Prix during his career. He won 21 races with McLaren and has won 82 with Mercedes. Lewis Hamilton holds the record for the ... I need to find out the age of the winner
Action: Search
Action Input: "Rafael Nadal age"36 years I need to find out Harry Styles' age.
Action: Search
Action Input: "Harry Styles age" I need to find out Lewis Hamilton's age
Action: Search
Action Input: "Lewis Hamilton Age"29 years I need to calculate the age raised to the 0.334 power
Action: Calculator
Action Input: 36^0.334 I need to calculate 29 raised to the 0.23 power.
Action: Calculator
Action Input: 29^0.23Answer: 3.3098250249682484Answer: 2.16945946249155738 years
> Finished chain.
> Finished chain.
I now need to calculate 38 raised to the 0.23 power
Action: Calculator
Action Input: 38^0.23Answer: 2.3086081644669734
> Finished chain.
"Rafael Nadal, aged 36, won the US Open men's final in 2019 and his age raised to the 0.334 power is 3.3098250249682484."
Token Counting#
LangChain offers a context manager that allows you to count tokens.
from langchain.callbacks import get_openai_callback
llm = OpenAI(temperature=0)
with get_openai_callback() as cb:
llm("What is the square root of 4?")
total_tokens = cb.total_tokens
assert total_tokens > 0
with get_openai_callback() as cb:
llm("What is the square root of 4?")
llm("What is the square root of 4?")
assert cb.total_tokens == total_tokens * 2
# You can kick off concurrent runs from within the context manager
with get_openai_callback() as cb:
await asyncio.gather(
*[llm.agenerate(["What is the square root of 4?"]) for _ in range(3)]
)
assert cb.total_tokens == total_tokens * 3
# The context manager is concurrency safe
task = asyncio.create_task(llm.agenerate(["What is the square root of 4?"]))
with get_openai_callback() as cb:
await llm.agenerate(["What is the square root of 4?"])
await task
assert cb.total_tokens == total_tokens
Toolkits
Toolkits#
Note
Conceptual Guide
This section of the documentation covers agents with toolkits, i.e. an agent applied to a particular use case. See below for a full list of agent toolkits; a short example follows the list.
Azure Cognitive Services Toolkit
CSV Agent
Gmail Toolkit
Jira
JSON Agent
OpenAPI agents
Natural Language APIs
Pandas Dataframe Agent
PlayWright Browser Toolkit
PowerBI Dataset Agent
Python Agent
Spark Dataframe Agent
Spark SQL Agent
SQL Database Agent
Vectorstore Agent
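As one concrete illustration (a sketch only: the dataframe contents are made up, and the constructor shown assumes the Pandas Dataframe Agent API at the time of writing), a toolkit-style agent can be created directly from a dataframe:
import pandas as pd
from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI

# A toy dataframe standing in for real data
df = pd.DataFrame({"city": ["Paris", "Tokyo"], "population_m": [2.1, 14.0]})

agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("Which city has the larger population?")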
Agent Executors
Agent Executors#
Note
Conceptual Guide
Agent executors take an agent and tools and use the agent to decide which tools to call and in what order.
In this part of the documentation we cover other functionality related to agent executors; a short example follows the list below.
How to combine agents and vectorstores
How to use the async API for Agents
How to create ChatGPT Clone
Handle Parsing Errors
How to access intermediate steps
How to cap the max number of iterations
How to use a timeout for the agent
How to add SharedMemory to an Agent and its Tools
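For instance, iteration caps and timeouts from the guides above can be set when constructing the executor. This is a sketch, assuming the initialize_agent keyword arguments of the version documented here (extra keyword arguments are forwarded to the underlying AgentExecutor):
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)

# max_iterations caps the number of action/observation loops;
# max_execution_time is a wall-clock limit in seconds for the whole run
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    max_iterations=3,
    max_execution_time=10,
)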
Getting Started
Getting Started#
Agents use an LLM to determine which actions to take and in what order.
An action can either be using a tool and observing its output, or returning to the user.
When used correctly, agents can be extremely powerful. The purpose of this notebook is to show you how to easily use agents through the simplest, highest-level API.
In order to load agents, you should understand the following concepts:
Tool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. The interface for a tool is currently a function that is expected to have a string as an input, with a string as an output (see the sketch after this list).
LLM: The language model powering the agent.
Agent: The agent to use. This should be a string that references a supported agent class. Because this notebook focuses on the simplest, highest-level API, this only covers using the standard supported agents. If you want to implement a custom agent, see the documentation for custom agents.
Agents: For a list of supported agents and their specifications, see here.
Tools: For a list of predefined tools and their specifications, see here.
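As a minimal sketch of that string-in, string-out interface (the tool name, function, and description below are illustrative, not one of the predefined tools):
from langchain.agents import Tool

def word_count(text: str) -> str:
    # String in, string out, matching the tool interface described above
    return str(len(text.split()))

word_count_tool = Tool(
    name="WordCount",
    func=word_count,
    description="useful for when you need to count the words in a piece of text",
)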
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
First, let’s load the language model we’re going to use to control the agent.
llm = OpenAI(temperature=0)
Next, let’s load some tools to use. Note that the llm-math tool uses an LLM, so we need to pass that in.
tools = load_tools(["serpapi", "llm-math"], llm=llm)
Finally, let’s initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
Now let’s test it out!
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
> Entering new AgentExecutor chain...
I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: Camila Morrone
Thought: I need to find out Camila Morrone's age
Action: Search
Action Input: "Camila Morrone age"
Observation: 25 years
Thought: I need to calculate 25 raised to the 0.43 power
Action: Calculator
Action Input: 25^0.43
Observation: Answer: 3.991298452658078
Thought: I now know the final answer
Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.
> Finished chain.
"Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078."
Plan and Execute
Contents
Plan and Execute
Imports
Tools
Planner, Executor, and Agent
Run Example
Plan and Execute#
Plan and execute agents accomplish an objective by first planning what to do, then executing the sub-tasks. This idea is largely inspired by BabyAGI and by the “Plan-and-Solve” paper.
The planning is almost always done by an LLM.
The execution is usually done by a separate agent (equipped with tools).
Imports#
from langchain.chat_models import ChatOpenAI
from langchain.experimental.plan_and_execute import PlanAndExecute, load_agent_executor, load_chat_planner
from langchain.llms import OpenAI
from langchain import SerpAPIWrapper
from langchain.agents.tools import Tool
from langchain import LLMMathChain
Tools#
search = SerpAPIWrapper()
llm = OpenAI(temperature=0)
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)
tools = [
Tool(
name = "Search",
func=search.run,
description="useful for when you need to answer questions about current events"
),
Tool(
name="Calculator",
func=llm_math_chain.run,
description="useful for when you need to answer questions about math"
),
]
Planner, Executor, and Agent#
model = ChatOpenAI(temperature=0)
planner = load_chat_planner(model)
executor = load_agent_executor(model, tools, verbose=True)
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
Run Example#
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
> Entering new PlanAndExecute chain...
steps=[Step(value="Search for Leo DiCaprio's girlfriend on the internet."), Step(value='Find her current age.'), Step(value='Raise her current age to the 0.43 power using a calculator or programming language.'), Step(value='Output the result.'), Step(value="Given the above steps taken, respond to the user's original question.\n\n")]
> Entering new AgentExecutor chain...
Action:
```
{
"action": "Search",
"action_input": "Who is Leo DiCaprio's girlfriend?"
}
```
Observation: DiCaprio broke up with girlfriend Camila Morrone, 25, in the summer of 2022, after dating for four years. He's since been linked to another famous supermodel – Gigi Hadid. The power couple were first supposedly an item in September after being spotted getting cozy during a party at New York Fashion Week.
Thought:Based on the previous observation, I can provide the answer to the current objective.
Action:
```
{
"action": "Final Answer",
"action_input": "Leo DiCaprio is currently linked to Gigi Hadid."
}
```
> Finished chain.
*****
Step: Search for Leo DiCaprio's girlfriend on the internet.
Response: Leo DiCaprio is currently linked to Gigi Hadid.
> Entering new AgentExecutor chain...
Action:
```
{
"action": "Search",
"action_input": "What is Gigi Hadid's current age?"
}
```
Observation: 28 years
Thought:Previous steps: steps=[(Step(value="Search for Leo DiCaprio's girlfriend on the internet."), StepResponse(response='Leo DiCaprio is currently linked to Gigi Hadid.'))]
Current objective: value='Find her current age.'
Action:
```
{
"action": "Search",
"action_input": "What is Gigi Hadid's current age?"
}
```
Observation: 28 years
Thought:Previous steps: steps=[(Step(value="Search for Leo DiCaprio's girlfriend on the internet."), StepResponse(response='Leo DiCaprio is currently linked to Gigi Hadid.')), (Step(value='Find her current age.'), StepResponse(response='28 years'))]
Current objective: None
Action:
```
{
"action": "Final Answer",
"action_input": "Gigi Hadid's current age is 28 years."
}
```
> Finished chain.
*****
Step: Find her current age.
Response: Gigi Hadid's current age is 28 years.
> Entering new AgentExecutor chain...
Action:
```
{ | https://langchain.readthedocs.io/en/latest/modules/agents/plan_and_execute.html |
1e1ebdc5bbc2-1 | > Entering new AgentExecutor chain...
Action:
```
{
"action": "Calculator",
"action_input": "28 ** 0.43"
}
```
> Entering new LLMMathChain chain...
28 ** 0.43
```text
28 ** 0.43
```
...numexpr.evaluate("28 ** 0.43")...
Answer: 4.1906168361987195
> Finished chain.
Observation: Answer: 4.1906168361987195
Thought:The next step is to provide the answer to the user's question.
Action:
```
{
"action": "Final Answer",
"action_input": "Gigi Hadid's current age raised to the 0.43 power is approximately 4.19."
}
```
> Finished chain.
*****
Step: Raise her current age to the 0.43 power using a calculator or programming language.
Response: Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.
> Entering new AgentExecutor chain...
Action:
```
{
"action": "Final Answer",
"action_input": "The result is approximately 4.19."
}
```
> Finished chain.
*****
Step: Output the result.
Response: The result is approximately 4.19.
> Entering new AgentExecutor chain...
Action:
```
{
"action": "Final Answer",
"action_input": "Gigi Hadid's current age raised to the 0.43 power is approximately 4.19."
}
```
> Finished chain.
*****
Step: Given the above steps taken, respond to the user's original question.
Response: Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.
> Finished chain.
"Gigi Hadid's current age raised to the 0.43 power is approximately 4.19."
Agents#
Note
Conceptual Guide
In this part of the documentation we cover the different types of agents, disregarding which specific tools they are used with.
For a high-level overview of the different types of agents, see the documentation below.
Agent Types
For documentation on how to create a custom agent, see the pages below.
Custom Agent
Custom LLM Agent
Custom LLM Agent (with a ChatModel)
Custom MRKL Agent
Custom MultiAction Agent
Custom Agent with Tool Retrieval
We also have documentation for an in-depth dive into each agent type.
Conversation Agent (for Chat Models)
Conversation Agent
MRKL
MRKL Chat
ReAct
Self Ask With Search
Structured Tool Chat Agent
Tools#
Note
Conceptual Guide
Tools are ways that an agent can interact with the outside world.
For an overview of what a tool is, how to use them, and a full list of examples, please see the getting started documentation.
Getting Started
Next, we have some examples of customizing and generically working with tools
Defining Custom Tools
Multi-Input Tools
Tool Input Schema
In this documentation we cover generic tooling functionality (e.g. how to create your own) as well as examples of tools and how to use them.
Apify
ArXiv API Tool
AWS Lambda API
Shell Tool
Bing Search
Brave Search
ChatGPT Plugins
DuckDuckGo Search
File System Tools
Google Places
Google Search
Google Serper API
Gradio Tools
GraphQL tool
HuggingFace Tools
Human as a tool
IFTTT WebHooks
Metaphor Search
Call the API
Use Metaphor as a tool
OpenWeatherMap API
PubMed Tool
Python REPL
Requests
SceneXplain
Search Tools
SearxNG Search API
SerpAPI
Twilio
Wikipedia
Wolfram Alpha
YouTubeSearchTool
Zapier Natural Language Actions API
Example with SimpleSequentialChain
Contents
Set up environment
Set up tool
Prompt Template
Output Parser
Set up LLM
Define the stop sequence
Set up the Agent
Use the Agent
Adding Memory
Custom LLM Agent#
This notebook goes through how to create your own custom LLM agent.
An LLM agent consists of four parts:
PromptTemplate: This is the prompt template that can be used to instruct the language model on what to do
LLM: This is the language model that powers the agent
stop sequence: Instructs the LLM to stop generating as soon as this string is found
OutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish object
The LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:
Passes user input and any previous steps to the Agent (in this case, the LLMAgent)
If the Agent returns an AgentFinish, then return that directly to the user
If the Agent returns an AgentAction, then use that to call a tool and get an Observation
Repeat, passing the AgentAction and Observation back to the Agent until an AgentFinish is emitted.
AgentAction is a response that consists of action and action_input. action refers to which tool to use, and action_input refers to the input to that tool. log can also be provided as more context (that can be used for logging, tracing, etc).
AgentFinish is a response that contains the final message to be sent back to the user. This should be used to end an agent run.
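As a quick illustration, here is a minimal sketch of constructing these two schema objects directly (the tool name and values are placeholders of our own, not part of the notebook):
from langchain.schema import AgentAction, AgentFinish
# Ask the executor to call the "Search" tool with this input;
# `log` carries the raw LLM text for logging/tracing.
action = AgentAction(tool="Search", tool_input="Population of Canada", log="")
# End the run and hand this output back to the user.
finish = AgentFinish(return_values={"output": "About 38 million people"}, log="")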
In this notebook we walk through how to create a custom LLM agent.
Set up environment#
Do necessary imports, etc.
from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser
from langchain.prompts import StringPromptTemplate
from langchain import OpenAI, SerpAPIWrapper, LLMChain
from typing import List, Union
from langchain.schema import AgentAction, AgentFinish
import re
Set up tool#
Set up any tools the agent may want to use. These may need to be described in the prompt (so that the agent knows to use them).
# Define which tools the agent can use to answer user queries
search = SerpAPIWrapper()
tools = [
Tool(
name = "Search",
func=search.run,
description="useful for when you need to answer questions about current events"
)
]
Prompt Template#
This instructs the agent on what to do. Generally, the template should incorporate:
tools: which tools the agent has access to, and how and when to call them.
intermediate_steps: These are tuples of previous (AgentAction, Observation) pairs. These are generally not passed directly to the model, but the prompt template formats them in a specific way.
input: generic user input
# Set up the base template
template = """Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Arg"s
Question: {input}
{agent_scratchpad}"""
# Set up a prompt template
class CustomPromptTemplate(StringPromptTemplate):
# The template to use
template: str
# The list of tools available
tools: List[Tool]
def format(self, **kwargs) -> str:
# Get the intermediate steps (AgentAction, Observation tuples)
# Format them in a particular way
intermediate_steps = kwargs.pop("intermediate_steps")
thoughts = ""
for action, observation in intermediate_steps:
thoughts += action.log
thoughts += f"\nObservation: {observation}\nThought: "
# Set the agent_scratchpad variable to that value
kwargs["agent_scratchpad"] = thoughts
# Create a tools variable from the list of tools provided
kwargs["tools"] = "\n".join([f"{tool.name}: {tool.description}" for tool in self.tools]) | https://langchain.readthedocs.io/en/latest/modules/agents/agents/custom_llm_agent.html |
# Create a list of tool names for the tools provided
kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
return self.template.format(**kwargs)
prompt = CustomPromptTemplate(
template=template,
tools=tools,
# This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
# This includes the `intermediate_steps` variable because that is needed
input_variables=["input", "intermediate_steps"]
)
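As a quick sanity check (an illustrative call; the question is a placeholder), the template can be formatted with an empty list of intermediate steps to inspect the full prompt that will be sent to the LLM:
# No steps have been taken yet, so the scratchpad renders empty
print(prompt.format(input="How many people live in Canada?", intermediate_steps=[]))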
Output Parser#
The output parser is responsible for parsing the LLM output into AgentAction and AgentFinish. This usually depends heavily on the prompt used.
This is where you can change the parsing to add retries, handle whitespace, and so on.
class CustomOutputParser(AgentOutputParser):
def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
# Check if agent should finish
if "Final Answer:" in llm_output:
return AgentFinish(
# Return values is generally always a dictionary with a single `output` key
# It is not recommended to try anything else at the moment :)
return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
log=llm_output,
)
# Parse out the action and action input
regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
match = re.search(regex, llm_output, re.DOTALL)
if not match:
raise ValueError(f"Could not parse LLM output: `{llm_output}`")
action = match.group(1).strip()
action_input = match.group(2)
# Return the action and action input
return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output)
output_parser = CustomOutputParser()
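For example (the strings below are illustrative, not real model output), the parser maps the two kinds of completions to the two schema objects:
# An action-style completion becomes an AgentAction
output_parser.parse("Thought: I should search\nAction: Search\nAction Input: Population of Canada")
# A final-answer completion becomes an AgentFinish
output_parser.parse("I now know the final answer\nFinal Answer: About 38 million people")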
Set up LLM#
Choose the LLM you want to use!
llm = OpenAI(temperature=0)
Define the stop sequence#
This is important because it tells the LLM when to stop generation.
This depends heavily on the prompt and model you are using. Generally, you want this to be whatever token you use in the prompt to denote the start of an Observation (otherwise, the LLM may hallucinate an observation for you).
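As an illustrative sketch (a direct call to the llm defined above, with placeholder text), passing the stop sequence makes the model halt right before it would invent an observation:
# Generation stops as soon as "\nObservation:" would be produced, so the
# executor can run the real tool and append the true observation instead.
llm("Question: What is the population of Canada?\nThought:", stop=["\nObservation:"])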
Set up the Agent#
We can now combine everything to set up our agent
# LLM chain consisting of the LLM and a prompt
llm_chain = LLMChain(llm=llm, prompt=prompt)
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
llm_chain=llm_chain,
output_parser=output_parser,
stop=["\nObservation:"],
allowed_tools=tool_names
)
Use the Agent#
Now we can use it!
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
agent_executor.run("How many people live in canada as of 2023?")
> Entering new AgentExecutor chain...
Thought: I need to find out the population of Canada in 2023
Action: Search
Action Input: Population of Canada in 2023
Observation:The current population of Canada is 38,658,314 as of Wednesday, April 12, 2023, based on Worldometer elaboration of the latest United Nations data. I now know the final answer
Final Answer: Arrr, there be 38,658,314 people livin' in Canada as of 2023!
> Finished chain.
"Arrr, there be 38,658,314 people livin' in Canada as of 2023!"
Adding Memory#
If you want to add memory to the agent, you’ll need to:
Add a place in the custom prompt for the chat_history
Add a memory object to the agent executor.
# Set up the base template
template_with_history = """Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action | https://langchain.readthedocs.io/en/latest/modules/agents/agents/custom_llm_agent.html |
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Arg"s
Previous conversation history:
{history}
New question: {input}
{agent_scratchpad}"""
prompt_with_history = CustomPromptTemplate(
template=template_with_history,
tools=tools,
# This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
# This includes the `intermediate_steps` variable because that is needed
input_variables=["input", "intermediate_steps", "history"]
)
llm_chain = LLMChain(llm=llm, prompt=prompt_with_history)
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
llm_chain=llm_chain,
output_parser=output_parser,
stop=["\nObservation:"],
allowed_tools=tool_names
)
from langchain.memory import ConversationBufferWindowMemory
memory=ConversationBufferWindowMemory(k=2)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=memory)
agent_executor.run("How many people live in canada as of 2023?")
> Entering new AgentExecutor chain...
Thought: I need to find out the population of Canada in 2023
Action: Search
Action Input: Population of Canada in 2023
Observation:The current population of Canada is 38,658,314 as of Wednesday, April 12, 2023, based on Worldometer elaboration of the latest United Nations data. I now know the final answer
Final Answer: Arrr, there be 38,658,314 people livin' in Canada as of 2023!
> Finished chain.
"Arrr, there be 38,658,314 people livin' in Canada as of 2023!"
agent_executor.run("how about in mexico?")
> Entering new AgentExecutor chain...
Thought: I need to find out how many people live in Mexico.
Action: Search
Action Input: How many people live in Mexico as of 2023?
Observation:The current population of Mexico is 132,679,922 as of Tuesday, April 11, 2023, based on Worldometer elaboration of the latest United Nations data. Mexico 2020 ... I now know the final answer.
Final Answer: Arrr, there be 132,679,922 people livin' in Mexico as of 2023!
> Finished chain.
"Arrr, there be 132,679,922 people livin' in Mexico as of 2023!"
Contents
Set up environment
Set up tool
Prompt Template
Output Parser
Set up LLM
Define the stop sequence
Set up the Agent
Use the Agent
Custom LLM Agent (with a ChatModel)#
This notebook goes through how to create your own custom agent based on a chat model.
An LLM chat agent consists of four parts:
PromptTemplate: This is the prompt template that can be used to instruct the language model on what to do
ChatModel: This is the language model that powers the agent
stop sequence: Instructs the LLM to stop generating as soon as this string is found
OutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish object
The LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:
Passes user input and any previous steps to the Agent (in this case, the LLMAgent)
If the Agent returns an AgentFinish, then return that directly to the user
If the Agent returns an AgentAction, then use that to call a tool and get an Observation
Repeat, passing the AgentAction and Observation back to the Agent until an AgentFinish is emitted.
AgentAction is a response that consists of action and action_input. action refers to which tool to use, and action_input refers to the input to that tool. log can also be provided as more context (that can be used for logging, tracing, etc).
AgentFinish is a response that contains the final message to be sent back to the user. This should be used to end an agent run.
In this notebook we walk through how to create a custom LLM agent.
Set up environment#
Do necessary imports, etc.
!pip install langchain
!pip install google-search-results
!pip install openai
from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser
from langchain.prompts import BaseChatPromptTemplate
from langchain import SerpAPIWrapper, LLMChain
from langchain.chat_models import ChatOpenAI
from typing import List, Union
from langchain.schema import AgentAction, AgentFinish, HumanMessage
import re
from getpass import getpass
Set up tool#
Set up any tools the agent may want to use. These may need to be described in the prompt (so that the agent knows to use them).
SERPAPI_API_KEY = getpass()
# Define which tools the agent can use to answer user queries
search = SerpAPIWrapper(serpapi_api_key=SERPAPI_API_KEY)
tools = [
Tool(
name = "Search",
func=search.run,
description="useful for when you need to answer questions about current events"
)
]
Prompt Template#
This instructs the agent on what to do. Generally, the template should incorporate:
tools: which tools the agent has access to, and how and when to call them.
intermediate_steps: These are tuples of previous (AgentAction, Observation) pairs. These are generally not passed directly to the model, but the prompt template formats them in a specific way.
input: generic user input
# Set up the base template
template = """Complete the objective as best you can. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
These were previous tasks you completed:
Begin!
Question: {input}
{agent_scratchpad}"""
# Set up a prompt template
class CustomPromptTemplate(BaseChatPromptTemplate):
# The template to use
template: str
# The list of tools available
tools: List[Tool]
def format_messages(self, **kwargs) -> List[HumanMessage]:
# Get the intermediate steps (AgentAction, Observation tuples)
# Format them in a particular way
intermediate_steps = kwargs.pop("intermediate_steps")
thoughts = ""
for action, observation in intermediate_steps:
thoughts += action.log
thoughts += f"\nObservation: {observation}\nThought: "
# Set the agent_scratchpad variable to that value
kwargs["agent_scratchpad"] = thoughts
# Create a tools variable from the list of tools provided
kwargs["tools"] = "\n".join([f"{tool.name}: {tool.description}" for tool in self.tools])
# Create a list of tool names for the tools provided
kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
formatted = self.template.format(**kwargs)
return [HumanMessage(content=formatted)]
prompt = CustomPromptTemplate(
template=template,
tools=tools,
# This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
# This includes the `intermediate_steps` variable because that is needed
input_variables=["input", "intermediate_steps"]
)
Output Parser#
The output parser is responsible for parsing the LLM output into AgentAction and AgentFinish. This usually depends heavily on the prompt used.
This is where you can change the parsing to add retries, handle whitespace, and so on.
class CustomOutputParser(AgentOutputParser):
def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
# Check if agent should finish
if "Final Answer:" in llm_output:
return AgentFinish(
# Return values is generally always a dictionary with a single `output` key
# It is not recommended to try anything else at the moment :)
return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
log=llm_output,
)
# Parse out the action and action input
regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
match = re.search(regex, llm_output, re.DOTALL)
if not match:
raise ValueError(f"Could not parse LLM output: `{llm_output}`")
action = match.group(1).strip()
action_input = match.group(2)
# Return the action and action input
return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output)
output_parser = CustomOutputParser()
Set up LLM#
Choose the LLM you want to use!
OPENAI_API_KEY = getpass()
llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY, temperature=0)
Define the stop sequence#
This is important because it tells the LLM when to stop generation.
This depends heavily on the prompt and model you are using. Generally, you want this to be whatever token you use in the prompt to denote the start of an Observation (otherwise, the LLM may hallucinate an observation for you).
Set up the Agent#
We can now combine everything to set up our agent
# LLM chain consisting of the LLM and a prompt
llm_chain = LLMChain(llm=llm, prompt=prompt)
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
llm_chain=llm_chain,
output_parser=output_parser,
stop=["\nObservation:"],
allowed_tools=tool_names
)
Use the Agent#
Now we can use it!
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
agent_executor.run("Search for Leo DiCaprio's girlfriend on the internet.")
> Entering new AgentExecutor chain...
Thought: I should use a reliable search engine to get accurate information.
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation:He went on to date Gisele Bündchen, Bar Refaeli, Blake Lively, Toni Garrn and Nina Agdal, among others, before finally settling down with current girlfriend Camila Morrone, who is 23 years his junior.
I have found the answer to the question.
Final Answer: Leo DiCaprio's current girlfriend is Camila Morrone.
> Finished chain.
"Leo DiCaprio's current girlfriend is Camila Morrone."
Custom MultiAction Agent#
This notebook goes through how to create your own custom agent.
An agent consists of two parts:
- Tools: The tools the agent has available to use.
- The agent class itself: this decides which action to take.
In this notebook we walk through how to create a custom agent that predicts/takes multiple steps at a time.
from langchain.agents import Tool, AgentExecutor, BaseMultiActionAgent
from langchain import OpenAI, SerpAPIWrapper
def random_word(query: str) -> str:
print("\nNow I'm doing this!")
return "foo"
search = SerpAPIWrapper()
tools = [
Tool(
name = "Search",
func=search.run,
description="useful for when you need to answer questions about current events"
),
Tool(
name = "RandomWord",
func=random_word,
description="call this to get a random word."
)
]
from typing import List, Tuple, Any, Union
from langchain.schema import AgentAction, AgentFinish
class FakeAgent(BaseMultiActionAgent):
"""Fake Custom Agent."""
@property
def input_keys(self):
return ["input"]
def plan(
self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
) -> Union[List[AgentAction], AgentFinish]:
"""Given input, decided what to do.
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations
**kwargs: User inputs.
Returns:
Action specifying what tool to use.
"""
if len(intermediate_steps) == 0:
return [
AgentAction(tool="Search", tool_input=kwargs["input"], log=""),
AgentAction(tool="RandomWord", tool_input=kwargs["input"], log=""),
]
else:
return AgentFinish(return_values={"output": "bar"}, log="")
async def aplan(
self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
) -> Union[List[AgentAction], AgentFinish]:
"""Given input, decided what to do.
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations
**kwargs: User inputs.
Returns:
Action specifying what tool to use.
"""
if len(intermediate_steps) == 0:
return [
AgentAction(tool="Search", tool_input=kwargs["input"], log=""),
AgentAction(tool="RandomWord", tool_input=kwargs["input"], log=""),
]
else:
return AgentFinish(return_values={"output": "bar"}, log="")
agent = FakeAgent()
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
agent_executor.run("How many people live in canada as of 2023?")
> Entering new AgentExecutor chain...
The current population of Canada is 38,669,152 as of Monday, April 24, 2023, based on Worldometer elaboration of the latest United Nations data.
Now I'm doing this!
foo
> Finished chain.
'bar'
Contents
Set up environment
Set up tools
Tool Retriever
Prompt Template
Output Parser
Set up LLM, stop sequence, and the agent
Use the Agent
Custom Agent with Tool Retrieval#
This notebook builds on the Custom LLM Agent notebook and assumes familiarity with how agents work.
The novel idea introduced in this notebook is using retrieval to select the set of tools to use to answer an agent query. This is useful when you have a large number of tools to select from: you cannot put the descriptions of all the tools in the prompt (because of context length issues), so instead you dynamically select the N tools you do want to consider using at run time.
In this notebook we will create a somewhat contrived example: we will have one legitimate tool (search) and 99 fake tools that are just nonsense. We will then add a step in the prompt template that takes the user input and retrieves the tools relevant to the query.
Set up environment#
Do necessary imports, etc.
from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser
from langchain.prompts import StringPromptTemplate
from langchain import OpenAI, SerpAPIWrapper, LLMChain
from typing import List, Union
from langchain.schema import AgentAction, AgentFinish
import re
Set up tools#
We will create one legitimate tool (search) and then 99 fake tools
# Define which tools the agent can use to answer user queries
search = SerpAPIWrapper()
search_tool = Tool(
name = "Search",
func=search.run,
description="useful for when you need to answer questions about current events"
)
def fake_func(inp: str) -> str:
return "foo"
fake_tools = [
Tool(
name=f"foo-{i}",
func=fake_func,
description=f"a silly function that you can use to get more information about the number {i}"
)
for i in range(99)
]
ALL_TOOLS = [search_tool] + fake_tools
Tool Retriever#
We will use a vectorstore to create embeddings for each tool description. Then, for an incoming query we can create embeddings for that query and do a similarity search for relevant tools.
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document
docs = [Document(page_content=t.description, metadata={"index": i}) for i, t in enumerate(ALL_TOOLS)]
vector_store = FAISS.from_documents(docs, OpenAIEmbeddings())
retriever = vector_store.as_retriever()
def get_tools(query):
docs = retriever.get_relevant_documents(query)
return [ALL_TOOLS[d.metadata["index"]] for d in docs]
We can now test this retriever to see if it seems to work.
get_tools("whats the weather?")
[Tool(name='Search', description='useful for when you need to answer questions about current events', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<bound method SerpAPIWrapper.run of SerpAPIWrapper(search_engine=<class 'serpapi.google_search.GoogleSearch'>, params={'engine': 'google', 'google_domain': 'google.com', 'gl': 'us', 'hl': 'en'}, serpapi_api_key='', aiosession=None)>, coroutine=None),
Tool(name='foo-95', description='a silly function that you can use to get more information about the number 95', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None),
Tool(name='foo-12', description='a silly function that you can use to get more information about the number 12', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None),
Tool(name='foo-15', description='a silly function that you can use to get more information about the number 15', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None)] | https://langchain.readthedocs.io/en/latest/modules/agents/agents/custom_agent_with_tool_retrieval.html |
get_tools("whats the number 13?")
[Tool(name='foo-13', description='a silly function that you can use to get more information about the number 13', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None),
Tool(name='foo-12', description='a silly function that you can use to get more information about the number 12', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None),
Tool(name='foo-14', description='a silly function that you can use to get more information about the number 14', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None),
Tool(name='foo-11', description='a silly function that you can use to get more information about the number 11', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None)]
Prompt Template#
The prompt template is pretty standard: we are not actually changing much logic in the prompt template itself, only how retrieval of the tools is done.
# Set up the base template
template = """Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Arg"s
Question: {input}
{agent_scratchpad}"""
The custom prompt template now has the concept of a tools_getter, which we call on the input to select the tools to use
from typing import Callable
# Set up a prompt template
class CustomPromptTemplate(StringPromptTemplate):
# The template to use
template: str
############## NEW ######################
# The list of tools available
tools_getter: Callable
def format(self, **kwargs) -> str:
# Get the intermediate steps (AgentAction, Observation tuples)
# Format them in a particular way
intermediate_steps = kwargs.pop("intermediate_steps")
thoughts = ""
for action, observation in intermediate_steps:
thoughts += action.log
thoughts += f"\nObservation: {observation}\nThought: "
# Set the agent_scratchpad variable to that value
kwargs["agent_scratchpad"] = thoughts
############## NEW ######################
tools = self.tools_getter(kwargs["input"])
# Create a tools variable from the list of tools provided
kwargs["tools"] = "\n".join([f"{tool.name}: {tool.description}" for tool in tools])
# Create a list of tool names for the tools provided
kwargs["tool_names"] = ", ".join([tool.name for tool in tools])
return self.template.format(**kwargs)
prompt = CustomPromptTemplate(
template=template,
tools_getter=get_tools,
# This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
# This includes the `intermediate_steps` variable because that is needed
input_variables=["input", "intermediate_steps"]
)
Output Parser#
The output parser is unchanged from the previous notebook, since we are not changing anything about the output format.
class CustomOutputParser(AgentOutputParser):
def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
# Check if agent should finish
if "Final Answer:" in llm_output:
return AgentFinish(
# Return values is generally always a dictionary with a single `output` key
# It is not recommended to try anything else at the moment :)
return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
log=llm_output,
)
# Parse out the action and action input
regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
match = re.search(regex, llm_output, re.DOTALL)
if not match:
raise ValueError(f"Could not parse LLM output: `{llm_output}`")
action = match.group(1).strip()
action_input = match.group(2)
# Return the action and action input
return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output)
output_parser = CustomOutputParser()
Set up LLM, stop sequence, and the agent#
Also the same as the previous notebook
llm = OpenAI(temperature=0)
# LLM chain consisting of the LLM and a prompt
llm_chain = LLMChain(llm=llm, prompt=prompt)
tools = get_tools("whats the weather?")
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
llm_chain=llm_chain,
output_parser=output_parser,
stop=["\nObservation:"],
allowed_tools=tool_names
)
Use the Agent#
Now we can use it!
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
agent_executor.run("What's the weather in SF?")
> Entering new AgentExecutor chain...
Thought: I need to find out what the weather is in SF
Action: Search
Action Input: Weather in SF
Observation:Mostly cloudy skies early, then partly cloudy in the afternoon. High near 60F. ENE winds shifting to W at 10 to 15 mph. Humidity71%. UV Index6 of 10. I now know the final answer
Final Answer: 'Arg, 'tis mostly cloudy skies early, then partly cloudy in the afternoon. High near 60F. ENE winds shiftin' to W at 10 to 15 mph. Humidity71%. UV Index6 of 10.
> Finished chain.
"'Arg, 'tis mostly cloudy skies early, then partly cloudy in the afternoon. High near 60F. ENE winds shiftin' to W at 10 to 15 mph. Humidity71%. UV Index6 of 10."
Contents
Custom LLMChain
Multiple inputs
Custom MRKL Agent#
This notebook goes through how to create your own custom MRKL agent.
A MRKL agent consists of three parts:
- Tools: The tools the agent has available to use.
- LLMChain: The LLMChain that produces the text that is parsed in a certain way to determine which action to take.
- The agent class itself: this parses the output of the LLMChain to determine which action to take.
In this notebook we walk through how to create a custom MRKL agent by creating a custom LLMChain.
Custom LLMChain#
The first way to create a custom agent is to use an existing Agent class, but use a custom LLMChain. This is the simplest way to create a custom Agent. It is highly recommended that you work with the ZeroShotAgent, as at the moment that is by far the most generalizable one.
Most of the work in creating the custom LLMChain comes down to the prompt. Because we are using an existing agent class to parse the output, it is very important that the prompt say to produce text in that format. Additionally, we currently require an agent_scratchpad input variable to put notes on previous actions and observations. This should almost always be the final part of the prompt. However, besides those instructions, you can customize the prompt as you wish.
To ensure that the prompt contains the appropriate instructions, we will utilize a helper method on that class. The helper method for the ZeroShotAgent takes the following arguments:
tools: List of tools the agent will have access to, used to format the prompt.
prefix: String to put before the list of tools.
suffix: String to put after the list of tools.
input_variables: List of input variables the final prompt will expect.
For this exercise, we will give our agent access to Google Search, and we will customize it in that we will have it answer as a pirate.
from langchain.agents import ZeroShotAgent, Tool, AgentExecutor
from langchain import OpenAI, SerpAPIWrapper, LLMChain
search = SerpAPIWrapper()
tools = [
Tool(
name = "Search",
func=search.run,
description="useful for when you need to answer questions about current events"
)
]
prefix = """Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:"""
suffix = """Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Args"
Question: {input}
{agent_scratchpad}"""
prompt = ZeroShotAgent.create_prompt(
tools,
prefix=prefix,
suffix=suffix,
input_variables=["input", "agent_scratchpad"]
)
In case we are curious, we can now take a look at the final prompt template to see what it looks like when it's all put together.
print(prompt.template)
Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:
Search: useful for when you need to answer questions about current events
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Search]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Args"
Question: {input}
{agent_scratchpad}
Note that we are able to feed agents a self-defined prompt template, i.e. not restricted to the prompt generated by the create_prompt function, assuming it meets the agent’s requirements.
For example, for ZeroShotAgent, we will need to ensure that it meets the following requirements: there should be a string starting with "Action:" and a following string starting with "Action Input:", and the two should be separated by a newline.
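For example, a completion shaped like the following (an illustrative sketch, not real model output) parses correctly:
Thought: I need to look up current data
Action: Search
Action Input: Population of Canada 2023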
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
tool_names = [tool.name for tool in tools]
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True) | https://langchain.readthedocs.io/en/latest/modules/agents/agents/custom_mrkl_agent.html |
agent_executor.run("How many people live in canada as of 2023?")
> Entering new AgentExecutor chain...
Thought: I need to find out the population of Canada
Action: Search
Action Input: Population of Canada 2023
Observation: The current population of Canada is 38,661,927 as of Sunday, April 16, 2023, based on Worldometer elaboration of the latest United Nations data.
Thought: I now know the final answer
Final Answer: Arrr, Canada be havin' 38,661,927 people livin' there as of 2023!
> Finished chain.
"Arrr, Canada be havin' 38,661,927 people livin' there as of 2023!"
Multiple inputs#
Agents can also work with prompts that require multiple inputs.
prefix = """Answer the following questions as best you can. You have access to the following tools:"""
suffix = """When answering, you MUST speak in the following language: {language}.
Question: {input}
{agent_scratchpad}"""
prompt = ZeroShotAgent.create_prompt(
tools,
prefix=prefix,
suffix=suffix,
input_variables=["input", "language", "agent_scratchpad"]
)
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
agent_executor.run(input="How many people live in canada as of 2023?", language="italian")
> Entering new AgentExecutor chain...
Thought: I should look for recent population estimates.
Action: Search
Action Input: Canada population 2023
Observation: 39,566,248
Thought: I should double check this number.
Action: Search
Action Input: Canada population estimates 2023
Observation: Canada's population was estimated at 39,566,248 on January 1, 2023, after a record population growth of 1,050,110 people from January 1, 2022, to January 1, 2023.
Thought: I now know the final answer.
Final Answer: La popolazione del Canada è stata stimata a 39.566.248 il 1° gennaio 2023, dopo un record di crescita demografica di 1.050.110 persone dal 1° gennaio 2022 al 1° gennaio 2023.
> Finished chain.
'La popolazione del Canada è stata stimata a 39.566.248 il 1° gennaio 2023, dopo un record di crescita demografica di 1.050.110 persone dal 1° gennaio 2022 al 1° gennaio 2023.'
Custom Agent#
This notebook goes through how to create your own custom agent.
An agent consists of two parts:
- Tools: The tools the agent has available to use.
- The agent class itself: this decides which action to take.
In this notebook we walk through how to create a custom agent.
from langchain.agents import Tool, AgentExecutor, BaseSingleActionAgent
from langchain import OpenAI, SerpAPIWrapper
search = SerpAPIWrapper()
tools = [
Tool(
name = "Search",
func=search.run,
description="useful for when you need to answer questions about current events",
return_direct=True
)
]
from typing import List, Tuple, Any, Union
from langchain.schema import AgentAction, AgentFinish
class FakeAgent(BaseSingleActionAgent):
"""Fake Custom Agent."""
@property
def input_keys(self):
return ["input"]
def plan(
self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
) -> Union[AgentAction, AgentFinish]:
"""Given input, decided what to do.
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations
**kwargs: User inputs.
Returns:
Action specifying what tool to use.
"""
return AgentAction(tool="Search", tool_input=kwargs["input"], log="")
async def aplan(
self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
) -> Union[AgentAction, AgentFinish]:
"""Given input, decided what to do.
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations
**kwargs: User inputs.
Returns:
Action specifying what tool to use.
"""
return AgentAction(tool="Search", tool_input=kwargs["input"], log="")
agent = FakeAgent()
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
agent_executor.run("How many people live in canada as of 2023?")
> Entering new AgentExecutor chain...
The current population of Canada is 38,669,152 as of Monday, April 24, 2023, based on Worldometer elaboration of the latest United Nations data.
> Finished chain.
'The current population of Canada is 38,669,152 as of Monday, April 24, 2023, based on Worldometer elaboration of the latest United Nations data.'
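Note that because the Search tool above was defined with return_direct=True, the tool's observation is returned to the user directly as the final answer, rather than being passed back to the agent for another step.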
Contents
zero-shot-react-description
react-docstore
self-ask-with-search
conversational-react-description
Agent Types#
Agents use an LLM to determine which actions to take and in what order.
An action can either be using a tool and observing its output, or returning a response to the user.
Here are the agents available in LangChain.
zero-shot-react-description#
This agent uses the ReAct framework to determine which tool to use
based solely on the tool’s description. Any number of tools can be provided.
This agent requires that a description is provided for each tool.
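A minimal sketch of loading this agent (assuming an OpenAI key and a SerpAPI key are configured, as in the examples elsewhere in these docs):
from langchain import OpenAI
from langchain.agents import initialize_agent, load_tools, AgentType
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# The agent chooses among the tools purely from their descriptions
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)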
react-docstore#
This agent uses the ReAct framework to interact with a docstore. Two tools must
be provided: a Search tool and a Lookup tool (they must be named exactly as so).
The Search tool should search for a document, while the Lookup tool should lookup
a term in the most recently found document.
This agent is equivalent to the
original ReAct paper, specifically the Wikipedia example.
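A minimal sketch of wiring up those two tools against Wikipedia (assuming the Wikipedia docstore wrapper; the tool names must be exactly Search and Lookup):
from langchain import OpenAI, Wikipedia
from langchain.agents import initialize_agent, Tool, AgentType
from langchain.agents.react.base import DocstoreExplorer
docstore = DocstoreExplorer(Wikipedia())
tools = [
    Tool(name="Search", func=docstore.search, description="search for a document"),
    Tool(name="Lookup", func=docstore.lookup, description="lookup a term in the most recently found document"),
]
react = initialize_agent(tools, OpenAI(temperature=0), agent=AgentType.REACT_DOCSTORE, verbose=True)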
self-ask-with-search#
This agent utilizes a single tool that should be named Intermediate Answer.
This tool should be able to lookup factual answers to questions. This agent
is equivalent to the original self ask with search paper,
where a Google search API was provided as the tool.
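A minimal sketch (assuming the SerpAPI wrapper as the factual lookup tool):
from langchain import OpenAI, SerpAPIWrapper
from langchain.agents import initialize_agent, Tool, AgentType
search = SerpAPIWrapper()
tools = [
    # The single tool must be named exactly "Intermediate Answer"
    Tool(name="Intermediate Answer", func=search.run, description="useful for when you need to ask with search"),
]
self_ask = initialize_agent(tools, OpenAI(temperature=0), agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)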
conversational-react-description#
This agent is designed to be used in conversational settings.
The prompt is designed to make the agent helpful and conversational.
It uses the ReAct framework to decide which tool to use, and uses memory to remember the previous conversation interactions.
Conversation Agent#
This notebook walks through using an agent optimized for conversation. Other agents are often optimized for using tools to figure out the best response, which is not ideal in a conversational setting where you may want the agent to be able to chat with the user as well.
This is accomplished with a specific type of agent (conversational-react-description) which expects to be used with a memory component.
from langchain.agents import Tool
from langchain.agents import AgentType
from langchain.memory import ConversationBufferMemory
from langchain import OpenAI
from langchain.utilities import SerpAPIWrapper
from langchain.agents import initialize_agent
search = SerpAPIWrapper()
tools = [
Tool(
name = "Current Search",
func=search.run,
description="useful for when you need to answer questions about current events or the current state of the world"
),
]
memory = ConversationBufferMemory(memory_key="chat_history")
llm=OpenAI(temperature=0)
agent_chain = initialize_agent(tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
agent_chain.run(input="hi, i am bob")
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? No
AI: Hi Bob, nice to meet you! How can I help you today?
> Finished chain.
'Hi Bob, nice to meet you! How can I help you today?'
agent_chain.run(input="what's my name?")
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? No
AI: Your name is Bob!
> Finished chain.
'Your name is Bob!'
agent_chain.run("what are some good dinners to make this week, if i like thai food?")
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Current Search
Action Input: Thai food dinner recipes
Observation: 59 easy Thai recipes for any night of the week · Marion Grasby's Thai spicy chilli and basil fried rice · Thai curry noodle soup · Marion Grasby's Thai Spicy ...
Thought: Do I need to use a tool? No
AI: Here are some great Thai dinner recipes you can try this week: Marion Grasby's Thai Spicy Chilli and Basil Fried Rice, Thai Curry Noodle Soup, Thai Green Curry with Coconut Rice, Thai Red Curry with Vegetables, and Thai Coconut Soup. I hope you enjoy them!
> Finished chain.
"Here are some great Thai dinner recipes you can try this week: Marion Grasby's Thai Spicy Chilli and Basil Fried Rice, Thai Curry Noodle Soup, Thai Green Curry with Coconut Rice, Thai Red Curry with Vegetables, and Thai Coconut Soup. I hope you enjoy them!"
agent_chain.run(input="tell me the last letter in my name, and also tell me who won the world cup in 1978?")
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Current Search
Action Input: Who won the World Cup in 1978
Observation: Argentina national football team
Thought: Do I need to use a tool? No
AI: The last letter in your name is "b" and the winner of the 1978 World Cup was the Argentina national football team.
> Finished chain.
'The last letter in your name is "b" and the winner of the 1978 World Cup was the Argentina national football team.'
agent_chain.run(input="whats the current temperature in pomfret?")
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Current Search
Action Input: Current temperature in Pomfret
Observation: Partly cloudy skies. High around 70F. Winds W at 5 to 10 mph. Humidity41%.
Thought: Do I need to use a tool? No
AI: The current temperature in Pomfret is around 70F with partly cloudy skies and winds W at 5 to 10 mph. The humidity is 41%.
> Finished chain.
'The current temperature in Pomfret is around 70F with partly cloudy skies and winds W at 5 to 10 mph. The humidity is 41%.'
Contents
Initialize Tools
Adding in memory
Structured Tool Chat Agent#
This notebook walks through using a chat agent capable of using multi-input tools.
Older agents are configured to specify an action input as a single string, but this agent can use the provided tools’ args_schema to populate the action input.
This functionality is natively available in the structured chat agent (structured-chat-zero-shot-react-description, or AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION).
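For context, here is an illustrative sketch of a multi-input tool such an agent can call (the multiply function is our own placeholder; we assume StructuredTool.from_function infers the args schema from the type-annotated signature):
from langchain.tools import StructuredTool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b
# The agent populates {"a": ..., "b": ...} based on the inferred args_schema
calculator = StructuredTool.from_function(multiply)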
import os
os.environ["LANGCHAIN_TRACING"] = "true" # If you want to trace the execution of the program, set to "true"
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent
Initialize Tools#
We will test the agent using a web browser.
from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit
from langchain.tools.playwright.utils import (
create_async_playwright_browser,
create_sync_playwright_browser, # A synchronous browser is available, though it isn't compatible with jupyter.
)
# This import is required only for jupyter notebooks, since they have their own eventloop
import nest_asyncio
nest_asyncio.apply()
async_browser = create_async_playwright_browser()
browser_toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)
tools = browser_toolkit.get_tools()
llm = ChatOpenAI(temperature=0) # Also works well with Anthropic models
agent_chain = initialize_agent(tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
response = await agent_chain.arun(input="Hi I'm Erica.")
print(response)
> Entering new AgentExecutor chain...
Action:
```
{
"action": "Final Answer",
"action_input": "Hello Erica, how can I assist you today?"
}
```
> Finished chain.
Hello Erica, how can I assist you today?
response = await agent_chain.arun(input="Don't need help really just chatting.")
print(response)
> Entering new AgentExecutor chain...
> Finished chain.
I'm here to chat! How's your day going?
response = await agent_chain.arun(input="Browse to blog.langchain.dev and summarize the text, please.")
print(response)
> Entering new AgentExecutor chain...
Action:
```
{
"action": "navigate_browser",
"action_input": {
"url": "https://blog.langchain.dev/"
}
}
```
Observation: Navigating to https://blog.langchain.dev/ returned status code 200
Thought:I need to extract the text from the webpage to summarize it.
Action:
```
{
"action": "extract_text",
"action_input": {}
}
```
Observation: LangChain LangChain Home About GitHub Docs LangChain The official LangChain blog. Auto-Evaluator Opportunities Editor's Note: this is a guest blog post by Lance Martin.
TL;DR
We recently open-sourced an auto-evaluator tool for grading LLM question-answer chains. We are now releasing an open source, free to use hosted app and API to expand usability. Below we discuss a few opportunities to further improve May 1, 2023 5 min read Callbacks Improvements TL;DR: We're announcing improvements to our callbacks system, which powers logging, tracing, streaming output, and some awesome third-party integrations. This will better support concurrent runs with independent callbacks, tracing of deeply nested trees of LangChain components, and callback handlers scoped to a single request (which is super useful for May 1, 2023 3 min read Unleashing the power of AI Collaboration with Parallelized LLM Agent Actor Trees Editor's note: the following is a guest blog post from Cyrus at Shaman AI. We use guest blog posts to highlight interesting and novel applciations, and this is certainly that. There's been a lot of talk about agents recently, but most have been discussions around a single agent. If multiple Apr 28, 2023 4 min read Gradio & LLM Agents Editor's note: this is a guest blog post from Freddy Boulton, a software engineer at Gradio. We're excited to share this post because it brings a large number of exciting new tools into the ecosystem. Agents are largely defined by the tools they have, so to be able to equip Apr 23, 2023 4 min read RecAlign - The smart content filter for social media feed [Editor's Note] This is a guest post by Tian Jin. We are highlighting this application as we think it is a novel use case. Specifically, we think recommendation systems are incredibly impactful in our everyday lives and there has not been a ton of discourse on how LLMs will impact Apr 22, 2023 3 min read Improving Document Retrieval with Contextual Compression Note: This post assumes some familiarity with LangChain and is moderately technical.
💡 TL;DR: We’ve introduced a new abstraction and a new document Retriever to facilitate the post-processing of retrieved documents. Specifically, the new abstraction makes it easy to take a set of retrieved documents and extract from them Apr 20, 2023 3 min read Autonomous Agents & Agent Simulations Over the past two weeks, there has been a massive increase in using LLMs in an agentic manner. Specifically, projects like AutoGPT, BabyAGI, CAMEL, and Generative Agents have popped up. The LangChain community has now implemented some parts of all of those projects in the LangChain framework. While researching and Apr 18, 2023 7 min read AI-Powered Medical Knowledge: Revolutionizing Care for Rare Conditions [Editor's Note]: This is a guest post by Jack Simon, who recently participated in a hackathon at Williams College. He built a LangChain-powered chatbot focused on appendiceal cancer, aiming to make specialized knowledge more accessible to those in need. If you are interested in building a chatbot for another rare Apr 17, 2023 3 min read Auto-Eval of Question-Answering Tasks By Lance Martin
Context
LLM ops platforms, such as LangChain, make it easy to assemble LLM components (e.g., models, document retrievers, data loaders) into chains. Question-Answering is one of the most popular applications of these chains. But it is often not always obvious to determine what parameters (e.g. Apr 15, 2023 3 min read Announcing LangChainJS Support for Multiple JS Environments TLDR: We're announcing support for running LangChain.js in browsers, Cloudflare Workers, Vercel/Next.js, Deno, Supabase Edge Functions, alongside existing support for Node.js ESM and CJS. See install/upgrade docs and breaking changes list.
Context
Originally we designed LangChain.js to run in Node.js, which is the Apr 11, 2023 3 min read LangChain x Supabase Supabase is holding an AI Hackathon this week. Here at LangChain we are big fans of both Supabase and hackathons, so we thought this would be a perfect time to highlight the multiple ways you can use LangChain and Supabase together. | https://langchain.readthedocs.io/en/latest/modules/agents/agents/examples/structured_chat.html |
The reason we like Supabase so much is that Apr 8, 2023 2 min read Announcing our $10M seed round led by Benchmark It was only six months ago that we released the first version of LangChain, but it seems like several years. When we launched, generative AI was starting to go mainstream: stable diffusion had just been released and was captivating people’s imagination and fueling an explosion in developer activity, Jasper Apr 4, 2023 4 min read Custom Agents One of the most common requests we've heard is better functionality and documentation for creating custom agents. This has always been a bit tricky - because in our mind it's actually still very unclear what an "agent" actually is, and therefore what the "right" abstractions for them may be. Recently, Apr 3, 2023 3 min read Retrieval TL;DR: We are adjusting our abstractions to make it easy for other retrieval methods besides the LangChain VectorDB object to be used in LangChain. This is done with the goals of (1) allowing retrievers constructed elsewhere to be used more easily in LangChain, (2) encouraging more experimentation with alternative Mar 23, 2023 4 min read LangChain + Zapier Natural Language Actions (NLA) We are super excited to team up with Zapier and integrate their new Zapier NLA API into LangChain, which you can now use with your agents and chains. With this integration, you have access to the 5k+ apps and 20k+ actions on Zapier's platform through a natural language API interface. Mar 16, 2023 2 min read Evaluation Evaluation of language models, and by extension applications built on top of language models, is hard. With recent model releases (OpenAI, Anthropic, Google) evaluation is becoming a bigger and bigger issue. People are starting to try to tackle this, with OpenAI releasing OpenAI/evals - focused on evaluating OpenAI models. Mar 14, 2023 3 min read LLMs and SQL Francisco Ingham and Jon Luo are two of the community members leading the change on the SQL integrations. We’re really excited to write this blog post with them going over all the tips and tricks they’ve learned doing so. We’re even more excited to announce that we’ Mar 13, 2023 8 min read Origin Web Browser [Editor's Note]: This is the second of hopefully many guest posts. We intend to highlight novel applications building on top of LangChain. If you are interested in working with us on such a post, please reach out to [email protected].
Authors: Parth Asawa (pgasawa@), Ayushi Batwara (ayushi.batwara@), Jason Mar 8, 2023 4 min read Prompt Selectors One common complaint we've heard is that the default prompt templates do not work equally well for all models. This became especially pronounced this past week when OpenAI released a ChatGPT API. This new API had a completely new interface (which required new abstractions) and as a result many users Mar 8, 2023 2 min read Chat Models Last week OpenAI released a ChatGPT endpoint. It came marketed with several big improvements, most notably being 10x cheaper and a lot faster. But it also came with a completely new API endpoint. We were able to quickly write a wrapper for this endpoint to let users use it like Mar 6, 2023 6 min read Using the ChatGPT API to evaluate the ChatGPT API OpenAI released a new ChatGPT API yesterday. Lots of people were excited to try it. But how does it actually compare to the existing API? It will take some time before there is a definitive answer, but here are some initial thoughts. Because I'm lazy, I also enrolled the help Mar 2, 2023 5 min read Agent Toolkits Today, we're announcing agent toolkits, a new abstraction that allows developers to create agents designed for a particular use-case (for example, interacting with a relational database or interacting with an OpenAPI spec). We hope to continue developing different toolkits that can enable agents to do amazing feats. Toolkits are supported Mar 1, 2023 3 min read TypeScript Support It's finally here... TypeScript support for LangChain.
What does this mean? It means that all your favorite prompts, chains, and agents are all recreatable in TypeScript natively. Both the Python version and TypeScript version utilize the same serializable format, meaning that artifacts can seamlessly be shared between languages. As an Feb 17, 2023 2 min read Streaming Support in LangChain We’re excited to announce streaming support in LangChain. There's been a lot of talk about the best UX for LLM applications, and we believe streaming is at its core. We’ve also updated the chat-langchain repo to include streaming and async execution. We hope that this repo can serve Feb 14, 2023 2 min read LangChain + Chroma Today we’re announcing LangChain's integration with Chroma, the first step on the path to the Modern A.I Stack.
LangChain - The A.I-native developer toolkit
We started LangChain with the intent to build a modular and flexible framework for developing A.I-native applications. Some of the use cases Feb 13, 2023 2 min read
Thought:
> Finished chain.
The LangChain blog has recently released an open-source auto-evaluator tool for grading LLM question-answer chains and is now releasing an open-source, free-to-use hosted app and API to expand usability. The blog also discusses various opportunities to further improve the LangChain platform.
response = await agent_chain.arun(input="What's the latest xkcd comic about?")
print(response)
> Entering new AgentExecutor chain...
Thought: I can navigate to the xkcd website and extract the latest comic title and alt text to answer the question.
Action:
```
{
  "action": "navigate_browser",
  "action_input": {
    "url": "https://xkcd.com/"
  }
}
```
Observation: Navigating to https://xkcd.com/ returned status code 200
Thought:I can extract the latest comic title and alt text using CSS selectors.
Action:
```
{
  "action": "get_elements",
  "action_input": {
    "selector": "#ctitle, #comic img",
    "attributes": ["alt", "src"]
  }
}
```
Observation: [{"alt": "Tapetum Lucidum", "src": "//imgs.xkcd.com/comics/tapetum_lucidum.png"}]
Thought:
> Finished chain.
The latest xkcd comic is titled "Tapetum Lucidum" and the image can be found at https://xkcd.com/2565/.
Adding in memory
Here is how you add in memory to this agent
from langchain.prompts import MessagesPlaceholder
from langchain.memory import ConversationBufferMemory
chat_history = MessagesPlaceholder(variable_name="chat_history")
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
    agent_kwargs={
        "memory_prompts": [chat_history],
        "input_variables": ["input", "agent_scratchpad", "chat_history"],
    },
)
response = await agent_chain.arun(input="Hi I'm Erica.")
print(response)
> Entering new AgentExecutor chain...
Action:
```
{
  "action": "Final Answer",
  "action_input": "Hi Erica! How can I assist you today?"
}
```
> Finished chain.
Hi Erica! How can I assist you today?
response = await agent_chain.arun(input="whats my name?")
print(response)
> Entering new AgentExecutor chain...
Your name is Erica.
> Finished chain.
Your name is Erica.
Self Ask With Search
This notebook showcases the Self Ask With Search chain.
from langchain import OpenAI, SerpAPIWrapper
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run,
        description="useful for when you need to ask with search",
    )
]
self_ask_with_search = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)
self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?")
> Entering new AgentExecutor chain...
Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Carlos Alcaraz Garfia
Follow up: Where is Carlos Alcaraz Garfia from?
Intermediate answer: El Palmar, Spain
So the final answer is: El Palmar, Spain
> Finished chain.
'El Palmar, Spain'
MRKL Chat
This notebook showcases replicating the MRKL chain with an agent optimized for chat models.
This uses the example Chinook database.
To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file in a notebooks folder at the root of this repository.
from langchain import OpenAI, LLMMathChain, SerpAPIWrapper, SQLDatabase, SQLDatabaseChain
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(temperature=0)
llm1 = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm1, verbose=True)
db = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")
db_chain = SQLDatabaseChain.from_llm(llm1, db, verbose=True)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events. You should ask targeted questions",
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math",
    ),
    Tool(
        name="FooBar DB",
        func=db_chain.run,
        description="useful for when you need to answer questions about FooBar. Input should be in the form of a question containing full context",
    ),
]
mrkl = initialize_agent(tools, llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
mrkl.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
> Entering new AgentExecutor chain...
Thought: The first question requires a search, while the second question requires a calculator.
Action:
```
{
  "action": "Search",
  "action_input": "Leo DiCaprio girlfriend"
}
```
Observation: Gigi Hadid: 2022 Leo and Gigi were first linked back in September 2022, when a source told Us Weekly that Leo had his “sights set" on her (alarming way to put it, but okay).
Thought:For the second question, I need to calculate the age raised to the 0.43 power. I will use the calculator tool.
Action:
```
{
  "action": "Calculator",
  "action_input": "((2022-1995)^0.43)"
}
```
> Entering new LLMMathChain chain...
((2022-1995)^0.43)
```text
(2022-1995)**0.43
```
...numexpr.evaluate("(2022-1995)**0.43")...
Answer: 4.125593352125936
> Finished chain.
Observation: Answer: 4.125593352125936
Thought:I now know the final answer.
Final Answer: Gigi Hadid is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is approximately 4.13.
> Finished chain.
"Gigi Hadid is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is approximately 4.13."
mrkl.run("What is the full name of the artist who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?")
> Entering new AgentExecutor chain...
Question: What is the full name of the artist who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?
Thought: I should use the Search tool to find the answer to the first part of the question and then use the FooBar DB tool to find the answer to the second part.
Action:
```
{
  "action": "Search",
  "action_input": "Who recently released an album called 'The Storm Before the Calm'"
}
```
Observation: Alanis Morissette
Thought:Now that I know the artist's name, I can use the FooBar DB tool to find out if they are in the database and what albums of theirs are in it.
Action:
```
{
  "action": "FooBar DB",
  "action_input": "What albums does Alanis Morissette have in the database?"
}
```
> Entering new SQLDatabaseChain chain...
What albums does Alanis Morissette have in the database?
SQLQuery:
/Users/harrisonchase/workplace/langchain/langchain/sql_database.py:191: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage.
sample_rows = connection.execute(command)
SELECT "Title" FROM "Album" WHERE "ArtistId" IN (SELECT "ArtistId" FROM "Artist" WHERE "Name" = 'Alanis Morissette') LIMIT 5;
SQLResult: [('Jagged Little Pill',)]
Answer: Alanis Morissette has the album Jagged Little Pill in the database.
> Finished chain.
Observation: Alanis Morissette has the album Jagged Little Pill in the database.
Thought:The artist Alanis Morissette is in the FooBar database and has the album Jagged Little Pill in it.
Final Answer: Alanis Morissette is in the FooBar database and has the album Jagged Little Pill in it.
> Finished chain.
'Alanis Morissette is in the FooBar database and has the album Jagged Little Pill in it.'
ReAct
This notebook showcases using an agent to implement the ReAct logic.
from langchain import OpenAI, Wikipedia
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.agents.react.base import DocstoreExplorer
docstore = DocstoreExplorer(Wikipedia())
tools = [
    Tool(
        name="Search",
        func=docstore.search,
        description="useful for when you need to ask with search",
    ),
    Tool(
        name="Lookup",
        func=docstore.lookup,
        description="useful for when you need to ask with lookup",
    ),
]
llm = OpenAI(temperature=0, model_name="text-davinci-002")
react = initialize_agent(tools, llm, agent=AgentType.REACT_DOCSTORE, verbose=True)
question = "Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?"
react.run(question)
> Entering new AgentExecutor chain...
Thought: I need to search David Chanoff and find the U.S. Navy admiral he collaborated with. Then I need to find which President the admiral served under.
Action: Search[David Chanoff]
Observation: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought: The U.S. Navy admiral David Chanoff collaborated with is William J. Crowe. I need to find which President he served under.
Action: Search[William J. Crowe]
Observation: William James Crowe Jr. (January 2, 1925 – October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.
Thought: William J. Crowe served as the ambassador to the United Kingdom under President Bill Clinton, so the answer is Bill Clinton.
Action: Finish[Bill Clinton]
> Finished chain.
'Bill Clinton'
Conversation Agent (for Chat Models)
This notebook walks through using an agent optimized for conversation, using ChatModels. Other agents are often optimized for using tools to figure out the best response, which is not ideal in a conversational setting where you may want the agent to be able to chat with the user as well.
This is accomplished with a specific type of agent (chat-conversational-react-description) which expects to be used with a memory component.
!pip install langchain
!pip install google-search-results
!pip install openai
from langchain.agents import Tool
from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain.utilities import SerpAPIWrapper
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from getpass import getpass
SERPAPI_API_KEY = getpass()
search = SerpAPIWrapper(serpapi_api_key=SERPAPI_API_KEY)
tools = [
    Tool(
        name="Current Search",
        func=search.run,
        description="useful for when you need to answer questions about current events or the current state of the world. the input to this should be a single search term.",
    ),
]
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
OPENAI_API_KEY = getpass()
llm=ChatOpenAI(openai_api_key=OPENAI_API_KEY, temperature=0)
agent_chain = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
agent_chain.run(input="hi, i am bob")
> Entering new AgentExecutor chain...
{
  "action": "Final Answer",
  "action_input": "Hello Bob! How can I assist you today?"
}
> Finished chain.
'Hello Bob! How can I assist you today?'
agent_chain.run(input="what's my name?")
> Entering new AgentExecutor chain...
{
  "action": "Final Answer",
  "action_input": "Your name is Bob."
}
> Finished chain.
'Your name is Bob.'
agent_chain.run("what are some good dinners to make this week, if i like thai food?")
> Entering new AgentExecutor chain...
{
  "action": "Current Search",
  "action_input": "Thai food dinner recipes"
}
Observation: 64 easy Thai recipes for any night of the week · Thai curry noodle soup · Thai yellow cauliflower, snake bean and tofu curry · Thai-spiced chicken hand pies · Thai ...
Thought:{
  "action": "Final Answer",
  "action_input": "Here are some Thai food dinner recipes you can try this week: Thai curry noodle soup, Thai yellow cauliflower, snake bean and tofu curry, Thai-spiced chicken hand pies, and many more. You can find the full list of recipes at the source I found earlier."
}
> Finished chain.
'Here are some Thai food dinner recipes you can try this week: Thai curry noodle soup, Thai yellow cauliflower, snake bean and tofu curry, Thai-spiced chicken hand pies, and many more. You can find the full list of recipes at the source I found earlier.'
agent_chain.run(input="tell me the last letter in my name, and also tell me who won the world cup in 1978?")
> Entering new AgentExecutor chain...
{
  "action": "Final Answer",
  "action_input": "The last letter in your name is 'b'. Argentina won the World Cup in 1978."
}
> Finished chain.
"The last letter in your name is 'b'. Argentina won the World Cup in 1978."
agent_chain.run(input="whats the weather like in pomfret?")
> Entering new AgentExecutor chain...
{
  "action": "Current Search",
  "action_input": "weather in pomfret"
}
Observation: Cloudy with showers. Low around 55F. Winds S at 5 to 10 mph. Chance of rain 60%. Humidity76%.
Thought:{
  "action": "Final Answer",
  "action_input": "Cloudy with showers. Low around 55F. Winds S at 5 to 10 mph. Chance of rain 60%. Humidity76%."
}
> Finished chain.
'Cloudy with showers. Low around 55F. Winds S at 5 to 10 mph. Chance of rain 60%. Humidity76%.'
MRKL
This notebook showcases using an agent to replicate the MRKL chain.
This uses the example Chinook database.
To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file in a notebooks folder at the root of this repository.
from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, SQLDatabase, SQLDatabaseChain
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm, verbose=True)
db = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events. You should ask targeted questions",
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math",
    ),
    Tool(
        name="FooBar DB",
        func=db_chain.run,
        description="useful for when you need to answer questions about FooBar. Input should be in the form of a question containing full context",
    ),
]
mrkl = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
mrkl.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
> Entering new AgentExecutor chain...
I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Who is Leo DiCaprio's girlfriend?"
Observation: DiCaprio met actor Camila Morrone in December 2017, when she was 20 and he was 43. They were spotted at Coachella and went on multiple vacations together. Some reports suggested that DiCaprio was ready to ask Morrone to marry him. The couple made their red carpet debut at the 2020 Academy Awards.
Thought: I need to calculate Camila Morrone's age raised to the 0.43 power.
Action: Calculator
Action Input: 21^0.43
> Entering new LLMMathChain chain...
21^0.43
```text
21**0.43
```
...numexpr.evaluate("21**0.43")...
Answer: 3.7030049853137306
> Finished chain.
Observation: Answer: 3.7030049853137306
Thought: I now know the final answer.
Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.7030049853137306.
> Finished chain.
"Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.7030049853137306."
mrkl.run("What is the full name of the artist who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?")
> Entering new AgentExecutor chain...
I need to find out the artist's full name and then search the FooBar database for their albums.
Action: Search
Action Input: "The Storm Before the Calm" artist
Observation: The Storm Before the Calm (stylized in all lowercase) is the tenth (and eighth international) studio album by Canadian-American singer-songwriter Alanis Morissette, released June 17, 2022, via Epiphany Music and Thirty Tigers, as well as by RCA Records in Europe.
Thought: I now need to search the FooBar database for Alanis Morissette's albums.
Action: FooBar DB
Action Input: What albums by Alanis Morissette are in the FooBar database?
> Entering new SQLDatabaseChain chain...
What albums by Alanis Morissette are in the FooBar database?
SQLQuery:
/Users/harrisonchase/workplace/langchain/langchain/sql_database.py:191: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage.
sample_rows = connection.execute(command)
SELECT "Title" FROM "Album" INNER JOIN "Artist" ON "Album"."ArtistId" = "Artist"."ArtistId" WHERE "Name" = 'Alanis Morissette' LIMIT 5;
SQLResult: [('Jagged Little Pill',)]
Answer: The albums by Alanis Morissette in the FooBar database are Jagged Little Pill.
> Finished chain.
Observation: The albums by Alanis Morissette in the FooBar database are Jagged Little Pill.
Thought: I now know the final answer.
Final Answer: The artist who released the album 'The Storm Before the Calm' is Alanis Morissette and the albums of hers in the FooBar database are Jagged Little Pill.
> Finished chain.
"The artist who released the album 'The Storm Before the Calm' is Alanis Morissette and the albums of hers in the FooBar database are Jagged Little Pill."
Multi-Input Tools
This notebook shows how to use a tool that requires multiple inputs with an agent. The recommended way to do so is with the StructuredTool class.
import os
os.environ["LANGCHAIN_TRACING"] = "true"
from langchain import OpenAI
from langchain.agents import initialize_agent, AgentType
llm = OpenAI(temperature=0)
from langchain.tools import StructuredTool
def multiplier(a: float, b: float) -> float:
    """Multiply the provided floats."""
    return a * b
tool = StructuredTool.from_function(multiplier)
# Structured tools are compatible with the STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION agent type.
agent_executor = initialize_agent([tool], llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent_executor.run("What is 3 times 4")
> Entering new AgentExecutor chain...
Thought: I need to multiply 3 and 4
Action:
```
{
  "action": "multiplier",
  "action_input": {"a": 3, "b": 4}
}
```
Observation: 12
Thought: I know what to respond
Action:
```
{
  "action": "Final Answer",
  "action_input": "3 times 4 is 12"
}
```
> Finished chain.
'3 times 4 is 12'
Multi-Input Tools with a string format
An alternative to the structured tool would be to use the regular Tool class and accept a single string. The tool would then have to handle the parsing logic to extract the relevant values from the text, which tightly couples the tool representation to the agent prompt. This is still useful if the underlying language model can’t reliably generate a structured schema.
Let’s take the multiplication function as an example. In order to use this, we will tell the agent to generate the “Action Input” as a comma-separated list of length two. We will then write a thin wrapper that takes a string, splits it into two around a comma, and passes both parsed sides as integers to the multiplication function.
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
Here is the multiplication function, as well as a wrapper to parse a string as input.
def multiplier(a, b):
    return a * b

def parsing_multiplier(string):
    a, b = string.split(",")
    return multiplier(int(a), int(b))
llm = OpenAI(temperature=0)
tools = [
    Tool(
        name="Multiplier",
        func=parsing_multiplier,
        description="useful for when you need to multiply two numbers together. The input to this tool should be a comma separated list of numbers of length two, representing the two numbers you want to multiply together. For example, `1,2` would be the input if you wanted to multiply 1 by 2.",
    )
]
mrkl = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
mrkl.run("What is 3 times 4")
> Entering new AgentExecutor chain...
I need to multiply two numbers
Action: Multiplier
Action Input: 3,4
Observation: 12
Thought: I now know the final answer
Final Answer: 3 times 4 is 12
> Finished chain.
'3 times 4 is 12'
Defining Custom Tools
When constructing your own agent, you will need to provide it with a list of Tools that it can use. Besides the actual function that is called, the Tool consists of several components:
name (str): required, and must be unique within a set of tools provided to an agent
description (str): optional but recommended, as it is used by an agent to determine tool use
return_direct (bool): defaults to False
args_schema (Pydantic BaseModel): optional but recommended; can be used to provide more information (e.g., few-shot examples) or validation for expected parameters
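Before diving in, here is a minimal sketch of a single tool that sets all four components in one place (the echo function and its schema are hypothetical, for illustration only):
from langchain.tools import Tool
from pydantic import BaseModel, Field

class EchoInput(BaseModel):
    text: str = Field(description="the text to echo back")

def echo(text: str) -> str:
    """Return the input text unchanged."""
    return text

echo_tool = Tool.from_function(
    func=echo,  # the actual function that is called
    name="Echo",  # required; must be unique within the tool set
    description="Echoes the input back to the user.",  # used by the agent to decide when to use the tool
    return_direct=False,  # defaults to False
    args_schema=EchoInput,  # optional; validates the expected parameters
)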
There are two main ways to define a tool; we will cover both in the examples below.
# Import things that are needed generically
from langchain import LLMMathChain, SerpAPIWrapper
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import BaseTool, StructuredTool, Tool, tool
Initialize the LLM to use for the agent.
llm = ChatOpenAI(temperature=0)
Completely New Tools - String Input and Output
The simplest tools accept a single query string and return a string output. If your tool function requires multiple arguments, you might want to skip down to the StructuredTool section below.
There are two ways to do this: either by using the Tool dataclass, or by subclassing the BaseTool class.
Tool dataclass
The Tool dataclass wraps functions that accept a single string input and return a string output.
# Load the tool configs that are needed.
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm, verbose=True)
tools = [
    Tool.from_function(
        func=search.run,
        name="Search",
        description="useful for when you need to answer questions about current events",
        # coroutine= ... <- you can specify an async method if desired as well
    ),
]
/Users/wfh/code/lc/lckg/langchain/chains/llm_math/base.py:50: UserWarning: Directly instantiating an LLMMathChain with an llm is deprecated. Please instantiate with llm_chain argument or using the from_llm class method.
warnings.warn(
You can also define a custom args_schema to provide more information about inputs.
from pydantic import BaseModel, Field
class CalculatorInput(BaseModel):
    question: str = Field()
tools.append(
    Tool.from_function(
        func=llm_math_chain.run,
        name="Calculator",
        description="useful for when you need to answer questions about math",
        args_schema=CalculatorInput,
        # coroutine= ... <- you can specify an async method if desired as well
    )
)
# Construct the agent. We will use the default agent type here.
# See documentation for a full list of options.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
> Entering new AgentExecutor chain...
I need to find out Leo DiCaprio's girlfriend's name and her age
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: After rumours of a romance with Gigi Hadid, the Oscar winner has seemingly moved on. First being linked to the television personality in September 2022, it appears as if his "age bracket" has moved up. This follows his rumoured relationship with mere 19-year-old Eden Polani.
Thought:I still need to find out his current girlfriend's name and age
Action: Search
Action Input: "Leo DiCaprio current girlfriend"
Observation: Just Jared on Instagram: “Leonardo DiCaprio & girlfriend Camila Morrone couple up for a lunch date!
Thought:Now that I know his girlfriend's name is Camila Morrone, I need to find her current age
Action: Search
Action Input: "Camila Morrone age"
Observation: 25 years
Thought:Now that I have her age, I need to calculate her age raised to the 0.43 power
Action: Calculator
Action Input: 25^(0.43)
> Entering new LLMMathChain chain...
25^(0.43)
```text
25**(0.43)
```
...numexpr.evaluate("25**(0.43)")...
Answer: 3.991298452658078
> Finished chain.
Observation: Answer: 3.991298452658078
Thought:I now know the final answer
Final Answer: Camila Morrone's current age raised to the 0.43 power is approximately 3.99.
> Finished chain.
"Camila Morrone's current age raised to the 0.43 power is approximately 3.99."
Subclassing the BaseTool class
You can also directly subclass BaseTool. This is useful if you want more control over the instance variables or if you want to propagate callbacks to nested chains or other tools.
from typing import Optional, Type
from langchain.callbacks.manager import AsyncCallbackManagerForToolRun, CallbackManagerForToolRun
class CustomSearchTool(BaseTool):
    name = "custom_search"
    description = "useful for when you need to answer questions about current events"

    def _run(self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:
        """Use the tool."""
        return search.run(query)

    async def _arun(self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("custom_search does not support async")

class CustomCalculatorTool(BaseTool):
    name = "Calculator"
    description = "useful for when you need to answer questions about math"
    args_schema: Type[BaseModel] = CalculatorInput

    def _run(self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:
        """Use the tool."""
        return llm_math_chain.run(query)

    async def _arun(self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("Calculator does not support async")
tools = [CustomSearchTool(), CustomCalculatorTool()]
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
> Entering new AgentExecutor chain...
I need to use custom_search to find out who Leo DiCaprio's girlfriend is, and then use the Calculator to raise her age to the 0.43 power.
Action: custom_search
Action Input: "Leo DiCaprio girlfriend"
Observation: After rumours of a romance with Gigi Hadid, the Oscar winner has seemingly moved on. First being linked to the television personality in September 2022, it appears as if his "age bracket" has moved up. This follows his rumoured relationship with mere 19-year-old Eden Polani.
Thought:I need to find out the current age of Eden Polani.
Action: custom_search
Action Input: "Eden Polani age"
Observation: 19 years old
Thought:Now I can use the Calculator to raise her age to the 0.43 power.
Action: Calculator
Action Input: 19 ^ 0.43
> Entering new LLMMathChain chain...
19 ^ 0.43
```text
19 ** 0.43
```
...numexpr.evaluate("19 ** 0.43")...
Answer: 3.547023357958959
> Finished chain.
Observation: Answer: 3.547023357958959
Thought:I now know the final answer.
Final Answer: 3.547023357958959
> Finished chain.
'3.547023357958959'
Using the tool decorator
To make it easier to define custom tools, a @tool decorator is provided. This decorator can be used to quickly create a Tool from a simple function. The decorator uses the function name as the tool name by default, but this can be overridden by passing a string as the first argument. Additionally, the decorator will use the function’s docstring as the tool’s description.
from langchain.tools import tool
@tool
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return f"Results for query {query}"

search_api
You can also provide arguments like the tool name and whether to return directly.
@tool("search", return_direct=True)
def search_api(query: str) -> str:
"""Searches the API for the query."""
return "Results"
search_api
Tool(name='search', description='search(query: str) -> str - Searches the API for the query.', args_schema=<class 'pydantic.main.SearchApi'>, return_direct=True, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x12748c4c0>, func=<function search_api at 0x16bd66310>, coroutine=None)
You can also supply args_schema to provide more information about the arguments.
class SearchInput(BaseModel):
    query: str = Field(description="should be a search query")

@tool("search", return_direct=True, args_schema=SearchInput)
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return "Results"

search_api
Tool(name='search', description='search(query: str) -> str - Searches the API for the query.', args_schema=<class '__main__.SearchInput'>, return_direct=True, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x12748c4c0>, func=<function search_api at 0x16bcf0ee0>, coroutine=None)
Custom Structured Tools
If your functions require more structured arguments, you can use the StructuredTool class directly, or still subclass the BaseTool class.
StructuredTool dataclass
To dynamically generate a structured tool from a given function, the fastest way to get started is with StructuredTool.from_function().
import requests
from langchain.tools import StructuredTool
def post_message(url: str, body: dict, parameters: Optional[dict] = None) -> str:
    """Sends a POST request to the given url with the given body and parameters."""
    result = requests.post(url, json=body, params=parameters)
    return f"Status: {result.status_code} - {result.text}"

tool = StructuredTool.from_function(post_message)
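A structured tool like this can then be handed to an agent type that supports multi-argument tools, such as the structured chat agent used elsewhere in these docs. A minimal sketch (the request URL in the comment is a placeholder):
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI

agent = initialize_agent(
    [tool],
    ChatOpenAI(temperature=0),
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
# The agent can now fill in url, body, and parameters itself, e.g.:
# agent.run("Post the message 'hello' to https://example.com/api/messages")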
Subclassing the BaseTool
The BaseTool automatically infers the schema from the _run method’s signature.
from typing import Optional, Type
from langchain.callbacks.manager import AsyncCallbackManagerForToolRun, CallbackManagerForToolRun
class CustomSearchTool(BaseTool):
    name = "custom_search"
    description = "useful for when you need to answer questions about current events"

    def _run(self, query: str, engine: str = "google", gl: str = "us", hl: str = "en", run_manager: Optional[CallbackManagerForToolRun] = None) -> str:
        """Use the tool."""
        search_wrapper = SerpAPIWrapper(params={"engine": engine, "gl": gl, "hl": hl})
        return search_wrapper.run(query)

    async def _arun(self, query: str, engine: str = "google", gl: str = "us", hl: str = "en", run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("custom_search does not support async")
# You can provide a custom args schema to add descriptions or custom validation
class SearchSchema(BaseModel):
    query: str = Field(description="should be a search query")
    engine: str = Field(description="should be a search engine")
    gl: str = Field(description="should be a country code")
    hl: str = Field(description="should be a language code")

class CustomSearchTool(BaseTool):
    name = "custom_search"
    description = "useful for when you need to answer questions about current events"
    args_schema: Type[SearchSchema] = SearchSchema

    def _run(self, query: str, engine: str = "google", gl: str = "us", hl: str = "en", run_manager: Optional[CallbackManagerForToolRun] = None) -> str:
        """Use the tool."""
        search_wrapper = SerpAPIWrapper(params={"engine": engine, "gl": gl, "hl": hl})
        return search_wrapper.run(query)

    async def _arun(self, query: str, engine: str = "google", gl: str = "us", hl: str = "en", run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("custom_search does not support async")
Using the decorator
The tool decorator creates a structured tool automatically if the signature has multiple arguments.
import requests
from langchain.tools import tool
@tool
def post_message(url: str, body: dict, parameters: Optional[dict] = None) -> str:
    """Sends a POST request to the given url with the given body and parameters."""
    result = requests.post(url, json=body, params=parameters)
    return f"Status: {result.status_code} - {result.text}"
Modify existing tools
Now, we show how to load existing tools and modify them directly. In the example below, we do something really simple and change the Search tool to have the name Google Search.
from langchain.agents import load_tools
tools = load_tools(["serpapi", "llm-math"], llm=llm)
tools[0].name = "Google Search"
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
> Entering new AgentExecutor chain...
I need to find out Leo DiCaprio's girlfriend's name and her age.
Action: Google Search
Action Input: "Leo DiCaprio girlfriend"
Observation: After rumours of a romance with Gigi Hadid, the Oscar winner has seemingly moved on. First being linked to the television personality in September 2022, it appears as if his "age bracket" has moved up. This follows his rumoured relationship with mere 19-year-old Eden Polani.
Thought:I still need to find out his current girlfriend's name and her age.
Action: Google Search
Action Input: "Leo DiCaprio current girlfriend age"
Observation: Leonardo DiCaprio has been linked with 19-year-old model Eden Polani, continuing the rumour that he doesn't date any women over the age of ...
Thought:I need to find out the age of Eden Polani.
Action: Calculator
Action Input: 19^(0.43)
Observation: Answer: 3.547023357958959
Thought:I now know the final answer.
Final Answer: The age of Leo DiCaprio's girlfriend raised to the 0.43 power is approximately 3.55.
> Finished chain.
"The age of Leo DiCaprio's girlfriend raised to the 0.43 power is approximately 3.55."
Defining the priorities among Tools
When you create a custom tool, you may want the agent to use it more readily than the standard tools.
For example, suppose you made a custom tool that gets information on music from your database. When a user wants information on songs, you want the agent to use your custom tool rather than the normal Search tool. But the agent might still prioritize the normal Search tool.
This can be accomplished by adding a statement such as Use this more than the normal search if the question is about Music, like 'who is the singer of yesterday?' or 'what is the most popular song in 2022?' to the description.
An example is below.
# Import things that are needed generically
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.llms import OpenAI
from langchain import LLMMathChain, SerpAPIWrapper
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    ),
    Tool(
        name="Music Search",
        func=lambda x: "'All I Want For Christmas Is You' by Mariah Carey.",  # Mock function
        description="A Music search engine. Use this more than the normal search if the question is about Music, like 'who is the singer of yesterday?' or 'what is the most popular song in 2022?'",
    ),
]
agent = initialize_agent(tools, OpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("what is the most famous song of christmas") | https://langchain.readthedocs.io/en/latest/modules/agents/tools/custom_tools.html |
878ef5333bfe-4 | agent.run("what is the most famous song of christmas")
> Entering new AgentExecutor chain...
I should use a music search engine to find the answer
Action: Music Search
Action Input: most famous song of christmas
Observation: 'All I Want For Christmas Is You' by Mariah Carey.
Thought: I now know the final answer
Final Answer: 'All I Want For Christmas Is You' by Mariah Carey.
> Finished chain.
"'All I Want For Christmas Is You' by Mariah Carey."
Using tools to return directly
Often, it can be desirable to have a tool output returned directly to the user, if it’s called. You can do this easily with LangChain by setting the return_direct flag for a tool to be True.
llm_math_chain = LLMMathChain(llm=llm)
tools = [
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math",
        return_direct=True,
    )
]
llm = OpenAI(temperature=0)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("whats 2**.12")
> Entering new AgentExecutor chain...
I need to calculate this
Action: Calculator
Action Input: 2**.12
Answer: 1.086734862526058
> Finished chain.
'Answer: 1.086734862526058'
Handling Tool Errors
When a tool encounters an error and the exception is not caught, the agent will stop executing. If you want the agent to continue execution, you can raise a ToolException and set handle_tool_error accordingly.
When a ToolException is thrown, the agent will not stop working; instead, it handles the exception according to the tool's handle_tool_error setting, and the processing result is returned to the agent as an observation and printed in red.
You can set handle_tool_error to True, to a fixed string value, or to a function. If it is set to a function, the function should take a ToolException as a parameter and return a str value.
Note that raising a ToolException alone is not effective: you also need to set the tool's handle_tool_error, because its default value is False.
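For instance, the fixed-string variant might look like the following sketch (the failing tool here is hypothetical; the full example below demonstrates the True and function variants):
from langchain.schema import ToolException
from langchain.tools import Tool

def flaky_search(s: str) -> str:
    raise ToolException("The search service is temporarily unavailable.")

tool = Tool.from_function(
    func=flaky_search,
    name="Flaky_search",
    description="useful for when you need to answer questions about current events",
    # Every ToolException raised by the tool is replaced by this fixed observation string:
    handle_tool_error="Tool failed; please try a different tool.",
)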
from langchain.schema import ToolException
from langchain import SerpAPIWrapper
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import Tool
def _handle_error(error: ToolException) -> str:
    return "The following errors occurred during tool execution:" + error.args[0] + "Please try another tool."

def search_tool1(s: str):
    raise ToolException("The search tool1 is not available.")

def search_tool2(s: str):
    raise ToolException("The search tool2 is not available.")
search_tool3 = SerpAPIWrapper()
description="useful for when you need to answer questions about current events.You should give priority to using it."
tools = [
    Tool.from_function(
        func=search_tool1,
        name="Search_tool1",
        description=description,
        handle_tool_error=True,
    ),
    Tool.from_function(
        func=search_tool2,
        name="Search_tool2",
        description=description,
        handle_tool_error=_handle_error,
    ),
    Tool.from_function(
        func=search_tool3.run,
        name="Search_tool3",
        description="useful for when you need to answer questions about current events",
    ),
]
agent = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent.run("Who is Leo DiCaprio's girlfriend?")
> Entering new AgentExecutor chain...
I should use Search_tool1 to find recent news articles about Leo DiCaprio's personal life.
Action: Search_tool1
Action Input: "Leo DiCaprio girlfriend"
Observation: The search tool1 is not available.
Thought:I should try using Search_tool2 instead.
Action: Search_tool2
Action Input: "Leo DiCaprio girlfriend"
Observation: The following errors occurred during tool execution:The search tool2 is not available.Please try another tool.
Thought:I should try using Search_tool3 as a last resort.
Action: Search_tool3
Action Input: "Leo DiCaprio girlfriend" | https://langchain.readthedocs.io/en/latest/modules/agents/tools/custom_tools.html |
878ef5333bfe-5 | Action: Search_tool3
Action Input: "Leo DiCaprio girlfriend"
Observation: Leonardo DiCaprio and Gigi Hadid were recently spotted at a pre-Oscars party, sparking interest once again in their rumored romance. The Revenant actor and the model first made headlines when they were spotted together at a New York Fashion Week afterparty in September 2022.
Thought:Based on the information from Search_tool3, it seems that Gigi Hadid is currently rumored to be Leo DiCaprio's girlfriend.
Final Answer: Gigi Hadid is currently rumored to be Leo DiCaprio's girlfriend.
> Finished chain.
"Gigi Hadid is currently rumored to be Leo DiCaprio's girlfriend."
Getting Started
Tools are functions that agents can use to interact with the world.
These tools can be generic utilities (e.g. search), other chains, or even other agents.
Currently, tools can be loaded with the following snippet:
from langchain.agents import load_tools
tool_names = [...]
tools = load_tools(tool_names)
Some tools (e.g. chains, agents) may require a base LLM to use to initialize them.
In that case, you can pass in an LLM as well:
from langchain.agents import load_tools
tool_names = [...]
llm = ...
tools = load_tools(tool_names, llm=llm)
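As a concrete sketch using two tools from the list below (serpapi expects a SerpAPI key in the environment, and llm-math needs an LLM to be passed in):
from langchain.agents import load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
# serpapi does not require an LLM; llm-math does, so we pass one in.
tools = load_tools(["serpapi", "llm-math"], llm=llm)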
Below is a list of all supported tools and relevant information:
Tool Name: The name the LLM refers to the tool by.
Tool Description: The description of the tool that is passed to the LLM.
Notes: Notes about the tool that are NOT passed to the LLM.
Requires LLM: Whether this tool requires an LLM to be initialized.
(Optional) Extra Parameters: What extra parameters are required to initialize this tool.
List of Tools
python_repl
Tool Name: Python REPL
Tool Description: A Python shell. Use this to execute python commands. Input should be a valid python command. If you expect output it should be printed out.
Notes: Maintains state.
Requires LLM: No
serpapi
Tool Name: Search
Tool Description: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.
Notes: Calls the Serp API and then parses results.
Requires LLM: No
wolfram-alpha
Tool Name: Wolfram Alpha
Tool Description: A wolfram alpha search engine. Useful for when you need to answer questions about Math, Science, Technology, Culture, Society and Everyday Life. Input should be a search query.
Notes: Calls the Wolfram Alpha API and then parses results.
Requires LLM: No
Extra Parameters: wolfram_alpha_appid: The Wolfram Alpha app id.
requests
Tool Name: Requests
Tool Description: A portal to the internet. Use this when you need to get specific content from a site. Input should be a specific url, and the output will be all the text on that page.
Notes: Uses the Python requests module.
Requires LLM: No
terminal
Tool Name: Terminal
Tool Description: Executes commands in a terminal. Input should be valid commands, and the output will be any output from running that command.
Notes: Executes commands with subprocess.
Requires LLM: No
pal-math
Tool Name: PAL-MATH
Tool Description: A language model that is excellent at solving complex word math problems. Input should be a fully worded hard word math problem.
Notes: Based on this paper.
Requires LLM: Yes
pal-colored-objects
Tool Name: PAL-COLOR-OBJ
Tool Description: A language model that is wonderful at reasoning about position and the color attributes of objects. Input should be a fully worded hard reasoning problem. Make sure to include all information about the objects AND the final question you want to answer.
Notes: Based on this paper.
Requires LLM: Yes
llm-math
Tool Name: Calculator
Tool Description: Useful for when you need to answer questions about math.
Notes: An instance of the LLMMath chain.
Requires LLM: Yes
open-meteo-api
Tool Name: Open Meteo API
Tool Description: Useful for when you want to get weather information from the OpenMeteo API. The input should be a question in natural language that this API can answer.
Notes: A natural language connection to the Open Meteo API (https://api.open-meteo.com/), specifically the /v1/forecast endpoint.
Requires LLM: Yes
news-api
Tool Name: News API
Tool Description: Use this when you want to get information about the top headlines of current news stories. The input should be a question in natural language that this API can answer.
Notes: A natural language connection to the News API (https://newsapi.org), specifically the /v2/top-headlines endpoint.
Requires LLM: Yes
Extra Parameters: news_api_key (your API key to access this endpoint)
tmdb-api
Tool Name: TMDB API
Tool Description: Useful for when you want to get information from The Movie Database. The input should be a question in natural language that this API can answer.
Notes: A natural language connection to the TMDB API (https://api.themoviedb.org/3), specifically the /search/movie endpoint.
Requires LLM: Yes
Extra Parameters: tmdb_bearer_token (your Bearer Token to access this endpoint - note that this is different from the API key)
google-search
Tool Name: Search
Tool Description: A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query.
Notes: Uses the Google Custom Search API
Requires LLM: No
Extra Parameters: google_api_key, google_cse_id
For more information on this, see this page
searx-search
Tool Name: Search
Tool Description: A wrapper around SearxNG meta search engine. Input should be a search query.
Notes: SearxNG is easy to deploy self-hosted. It is a good privacy friendly alternative to Google Search. Uses the SearxNG API.
Requires LLM: No
Extra Parameters: searx_host
google-serper
Tool Name: Search
Tool Description: A low-cost Google Search API. Useful for when you need to answer questions about current events. Input should be a search query.
Notes: Calls the serper.dev Google Search API and then parses results.
Requires LLM: No
Extra Parameters: serper_api_key
For more information on this, see this page
wikipedia
Tool Name: Wikipedia
Tool Description: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query.
Notes: Uses the wikipedia Python package to call the MediaWiki API and then parses results.
Requires LLM: No
Extra Parameters: top_k_results
podcast-api
Tool Name: Podcast API
Tool Description: Use the Listen Notes Podcast API to search all podcasts or episodes. The input should be a question in natural language that this API can answer.
Notes: A natural language connection to the Listen Notes Podcast API (https://www.PodcastAPI.com), specifically the /search/ endpoint.
Requires LLM: Yes
Extra Parameters: listen_api_key (your api key to access this endpoint)
openweathermap-api
Tool Name: OpenWeatherMap
Tool Description: A wrapper around OpenWeatherMap API. Useful for fetching current weather information for a specified location. Input should be a location string (e.g. London,GB).
Notes: A connection to the OpenWeatherMap API (https://api.openweathermap.org), specifically the /data/2.5/weather endpoint.
Requires LLM: No
Extra Parameters: openweathermap_api_key (your API key to access this endpoint)
sleep
Tool Name: Sleep
Tool Description: Make agent sleep for some time.
Requires LLM: No
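Where a tool lists Extra Parameters above, they are supplied as keyword arguments to load_tools, which forwards them to the tool's initializer. A sketch, assuming this forwarding behavior (the app id value is a placeholder):
from langchain.agents import load_tools

# wolfram-alpha lists wolfram_alpha_appid as an extra parameter.
tools = load_tools(["wolfram-alpha"], wolfram_alpha_appid="<your-app-id>")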
Tool Input Schema
By default, tools infer the argument schema by inspecting the function signature. For stricter requirements, a custom input schema can be specified, along with custom validation logic.
from typing import Any, Dict
from langchain.agents import AgentType, initialize_agent
from langchain.llms import OpenAI
from langchain.tools.requests.tool import RequestsGetTool, TextRequestsWrapper
from pydantic import BaseModel, Field, root_validator
llm = OpenAI(temperature=0)
!pip install tldextract > /dev/null
import tldextract
_APPROVED_DOMAINS = {
    "langchain",
    "wikipedia",
}
class ToolInputSchema(BaseModel):
    url: str = Field(...)

    @root_validator
    def validate_query(cls, values: Dict[str, Any]) -> Dict:
        # Reject any URL whose registered domain is not on the approved list
        url = values["url"]
        domain = tldextract.extract(url).domain
        if domain not in _APPROVED_DOMAINS:
            raise ValueError(
                f"Domain {domain} is not on the approved list:"
                f" {sorted(_APPROVED_DOMAINS)}"
            )
        return values
tool = RequestsGetTool(args_schema=ToolInputSchema, requests_wrapper=TextRequestsWrapper())
agent = initialize_agent([tool], llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False)
# This will succeed, since langchain.com is on the approved domain list
answer = agent.run("What's the main title on langchain.com?")
print(answer)
The main title of langchain.com is "LANG CHAIN 🦜️🔗 Official Home Page"
agent.run("What's the main title on google.com?")
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[7], line 1
----> 1 agent.run("What's the main title on google.com?")
File ~/code/lc/lckg/langchain/chains/base.py:213, in Chain.run(self, *args, **kwargs)
211 if len(args) != 1:
212 raise ValueError("`run` supports only one positional argument.")
--> 213 return self(args[0])[self.output_keys[0]]
215 if kwargs and not args:
216 return self(kwargs)[self.output_keys[0]]
File ~/code/lc/lckg/langchain/chains/base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116 raise e
117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
118 return self.prep_outputs(inputs, outputs, return_only_outputs)
File ~/code/lc/lckg/langchain/chains/base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
107 self.callback_manager.on_chain_start(
108 {"name": self.__class__.__name__},
109 inputs,
110 verbose=self.verbose,
111 )
112 try:
--> 113 outputs = self._call(inputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
File ~/code/lc/lckg/langchain/agents/agent.py:792, in AgentExecutor._call(self, inputs)
790 # We now enter the agent loop (until it returns something).
791 while self._should_continue(iterations, time_elapsed):
--> 792 next_step_output = self._take_next_step(
793 name_to_tool_map, color_mapping, inputs, intermediate_steps
794 )
795 if isinstance(next_step_output, AgentFinish):
796 return self._return(next_step_output, intermediate_steps)
File ~/code/lc/lckg/langchain/agents/agent.py:695, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps)
693 tool_run_kwargs["llm_prefix"] = ""
694 # We then call the tool on the tool input to get an observation | https://langchain.readthedocs.io/en/latest/modules/agents/tools/tool_input_validation.html |
--> 695 observation = tool.run(
696 agent_action.tool_input,
697 verbose=self.verbose,
698 color=color,
699 **tool_run_kwargs,
700 )
701 else:
702 tool_run_kwargs = self.agent.tool_run_logging_kwargs()
File ~/code/lc/lckg/langchain/tools/base.py:110, in BaseTool.run(self, tool_input, verbose, start_color, color, **kwargs)
101 def run(
102 self,
103 tool_input: Union[str, Dict],
(...)
107 **kwargs: Any,
108 ) -> str:
109 """Run the tool."""
--> 110 run_input = self._parse_input(tool_input)
111 if not self.verbose and verbose is not None:
112 verbose_ = verbose
File ~/code/lc/lckg/langchain/tools/base.py:71, in BaseTool._parse_input(self, tool_input)
69 if issubclass(input_args, BaseModel):
70 key_ = next(iter(input_args.__fields__.keys()))
---> 71 input_args.parse_obj({key_: tool_input})
72 # Passing as a positional argument is more straightforward for
73 # backwards compatability
74 return tool_input
File ~/code/lc/lckg/.venv/lib/python3.11/site-packages/pydantic/main.py:526, in pydantic.main.BaseModel.parse_obj()
File ~/code/lc/lckg/.venv/lib/python3.11/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for ToolInputSchema
__root__
Domain google is not on the approved list: ['langchain', 'wikipedia'] (type=value_error)
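If you would rather have the agent fail gracefully than propagate the exception, one option (a minimal sketch, not part of the original notebook) is to catch pydantic's ValidationError:
from pydantic import ValidationError
try:
    agent.run("What's the main title on google.com?")
except ValidationError as err:
    # The tool's input schema rejected the URL before any request was made
    print(f"Rejected by the tool's input schema: {err}")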
Gradio Tools
There are many thousands of Gradio apps on Hugging Face Spaces. This library puts them at the tips of your LLM’s fingers 🦾
Specifically, gradio-tools is a Python library for converting Gradio apps into tools that can be leveraged by a large language model (LLM)-based agent to complete its task. For example, an LLM could use a Gradio tool to transcribe a voice recording it finds online and then summarize it for you. Or it could use a different Gradio tool to apply OCR to a document on your Google Drive and then answer questions about it.
It’s very easy to create your own tool if you want to use a Space that’s not one of the pre-built tools. Please see this section of the gradio-tools documentation for information on how to do that; a rough sketch follows below. All contributions are welcome!
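As a rough sketch of what a custom tool can look like — the exact import path and base-class interface may vary between gradio-tools versions, and the Space id and api_name below are placeholders — a tool subclasses GradioTool and implements create_job and postprocess:
from gradio_tools import GradioTool
class MyCaptionTool(GradioTool):
    def __init__(self):
        super().__init__(
            name="MyCaption",
            description="Caption an image given its local path.",
            src="some-user/some-captioning-space",  # placeholder Space id
        )

    def create_job(self, query: str):
        # Submit the query to the Space via the underlying Gradio client
        return self.client.submit(query, api_name="/predict")  # assumed endpoint

    def postprocess(self, output) -> str:
        # Convert the raw job output into the string the LLM will see
        return str(output)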
# !pip install gradio_tools
Using a tool
from gradio_tools.tools import StableDiffusionTool
local_file_path = StableDiffusionTool().langchain.run("Please create a photo of a dog riding a skateboard")
local_file_path
Loaded as API: https://gradio-client-demos-stable-diffusion.hf.space ✔
Job Status: Status.STARTING eta: None
'/Users/harrisonchase/workplace/langchain/docs/modules/agents/tools/examples/b61c1dd9-47e2-46f1-a47c-20d27640993d/tmp4ap48vnm.jpg'
from PIL import Image
im = Image.open(local_file_path)
display(im)
Using within an agent
from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from gradio_tools.tools import (StableDiffusionTool, ImageCaptioningTool, StableDiffusionPromptGeneratorTool,
TextToVideoTool)
from langchain.memory import ConversationBufferMemory
llm = OpenAI(temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history")
tools = [StableDiffusionTool().langchain, ImageCaptioningTool().langchain,
StableDiffusionPromptGeneratorTool().langchain, TextToVideoTool().langchain]
agent = initialize_agent(tools, llm, memory=memory, agent="conversational-react-description", verbose=True)
output = agent.run(input=("Please create a photo of a dog riding a skateboard "
"but improve my prompt prior to using an image generator."
"Please caption the generated image and create a video for it using the improved prompt."))
Loaded as API: https://gradio-client-demos-stable-diffusion.hf.space ✔
Loaded as API: https://taesiri-blip-2.hf.space ✔
Loaded as API: https://microsoft-promptist.hf.space ✔
Loaded as API: https://damo-vilab-modelscope-text-to-video-synthesis.hf.space ✔
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: StableDiffusionPromptGenerator
Action Input: A dog riding a skateboard
Job Status: Status.STARTING eta: None
Observation: A dog riding a skateboard, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha
Thought: Do I need to use a tool? Yes
Action: StableDiffusion
Action Input: A dog riding a skateboard, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha
Job Status: Status.STARTING eta: None
Job Status: Status.PROCESSING eta: None
Observation: /Users/harrisonchase/workplace/langchain/docs/modules/agents/tools/examples/2e280ce4-4974-4420-8680-450825c31601/tmpfmiz2g1c.jpg
Thought: Do I need to use a tool? Yes
Action: ImageCaptioner
Action Input: /Users/harrisonchase/workplace/langchain/docs/modules/agents/tools/examples/2e280ce4-4974-4420-8680-450825c31601/tmpfmiz2g1c.jpg
Job Status: Status.STARTING eta: None
Observation: a painting of a dog sitting on a skateboard
Thought: Do I need to use a tool? Yes
Action: TextToVideo
Action Input: a painting of a dog sitting on a skateboard
Job Status: Status.STARTING eta: None | https://langchain.readthedocs.io/en/latest/modules/agents/tools/examples/gradio_tools.html |
Due to heavy traffic on this app, the prediction will take approximately 73 seconds.For faster predictions without waiting in queue, you may duplicate the space using: Client.duplicate(damo-vilab/modelscope-text-to-video-synthesis)
Job Status: Status.IN_QUEUE eta: 73.89824726581574
Due to heavy traffic on this app, the prediction will take approximately 42 seconds.For faster predictions without waiting in queue, you may duplicate the space using: Client.duplicate(damo-vilab/modelscope-text-to-video-synthesis)
Job Status: Status.IN_QUEUE eta: 42.49370198879602
Job Status: Status.IN_QUEUE eta: 21.314297944849187
Observation: /var/folders/bm/ylzhm36n075cslb9fvvbgq640000gn/T/tmp5snj_nmzf20_cb3m.mp4
Thought: Do I need to use a tool? No
AI: Here is a video of a painting of a dog sitting on a skateboard.
> Finished chain.
ChatGPT Plugins
This example shows how to use ChatGPT Plugins within LangChain abstractions.
Note 1: This currently only works for plugins with no auth.
Note 2: There are almost certainly other ways to do this; this is just a first pass. If you have better ideas, please open a PR!
from langchain.chat_models import ChatOpenAI
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.tools import AIPluginTool
tool = AIPluginTool.from_plugin_url("https://www.klarna.com/.well-known/ai-plugin.json")
llm = ChatOpenAI(temperature=0)
tools = load_tools(["requests_all"])
tools += [tool]
agent_chain = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent_chain.run("what t shirts are available in klarna?")
> Entering new AgentExecutor chain...
I need to check the Klarna Shopping API to see if it has information on available t shirts.
Action: KlarnaProducts
Action Input: None
Observation: Usage Guide: Use the Klarna plugin to get relevant product suggestions for any shopping or researching purpose. The query to be sent should not include stopwords like articles, prepositions and determinants. The api works best when searching for words that are related to products, like their name, brand, model or category. Links will always be returned and should be shown to the user.
OpenAPI Spec: {'openapi': '3.0.1', 'info': {'version': 'v0', 'title': 'Open AI Klarna product Api'}, 'servers': [{'url': 'https://www.klarna.com/us/shopping'}], 'tags': [{'name': 'open-ai-product-endpoint', 'description': 'Open AI Product Endpoint. Query for products.'}], 'paths': {'/public/openai/v0/products': {'get': {'tags': ['open-ai-product-endpoint'], 'summary': 'API for fetching Klarna product information', 'operationId': 'productsUsingGET', 'parameters': [{'name': 'q', 'in': 'query', 'description': 'query, must be between 2 and 100 characters', 'required': True, 'schema': {'type': 'string'}}, {'name': 'size', 'in': 'query', 'description': 'number of products returned', 'required': False, 'schema': {'type': 'integer'}}, {'name': 'budget', 'in': 'query', 'description': 'maximum price of the matching product in local currency, filters results', 'required': False, 'schema': {'type': 'integer'}}], 'responses': {'200': {'description': 'Products found', 'content': {'application/json': {'schema': {'$ref': '#/components/schemas/ProductResponse'}}}}, '503': {'description': 'one or more services are unavailable'}}, 'deprecated': False}}}, 'components': {'schemas': {'Product': {'type': 'object', 'properties': {'attributes': {'type': 'array', 'items': {'type': 'string'}}, 'name': {'type': 'string'}, 'price': {'type': 'string'}, 'url': {'type': 'string'}}, 'title': 'Product'}, 'ProductResponse': {'type': 'object', 'properties': {'products': {'type': 'array', 'items': {'$ref': '#/components/schemas/Product'}}}, 'title': 'ProductResponse'}}}}
Thought:I need to use the Klarna Shopping API to search for t shirts.
Action: requests_get
Action Input: https://www.klarna.com/us/shopping/public/openai/v0/products?q=t%20shirts | https://langchain.readthedocs.io/en/latest/modules/agents/tools/examples/chatgpt_plugins.html |
Observation: {"products":[{"name":"Lacoste Men's Pack of Plain T-Shirts","url":"https://www.klarna.com/us/shopping/pl/cl10001/3202043025/Clothing/Lacoste-Men-s-Pack-of-Plain-T-Shirts/?utm_source=openai","price":"$26.60","attributes":["Material:Cotton","Target Group:Man","Color:White,Black"]},{"name":"Hanes Men's Ultimate 6pk. Crewneck T-Shirts","url":"https://www.klarna.com/us/shopping/pl/cl10001/3201808270/Clothing/Hanes-Men-s-Ultimate-6pk.-Crewneck-T-Shirts/?utm_source=openai","price":"$13.82","attributes":["Material:Cotton","Target Group:Man","Color:White"]},{"name":"Nike Boy's Jordan Stretch T-shirts","url":"https://www.klarna.com/us/shopping/pl/cl359/3201863202/Children-s-Clothing/Nike-Boy-s-Jordan-Stretch-T-shirts/?utm_source=openai","price":"$14.99","attributes":["Material:Cotton","Color:White,Green","Model:Boy","Size (Small-Large):S,XL,L,M"]},{"name":"Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack","url":"https://www.klarna.com/us/shopping/pl/cl10001/3203028500/Clothing/Polo-Classic-Fit-Cotton-V-Neck-T-Shirts-3-Pack/?utm_source=openai","price":"$29.95","attributes":["Material:Cotton","Target Group:Man","Color:White,Blue,Black"]},{"name":"adidas Comfort T-shirts Men's 3-pack","url":"https://www.klarna.com/us/shopping/pl/cl10001/3202640533/Clothing/adidas-Comfort-T-shirts-Men-s-3-pack/?utm_source=openai","price":"$14.99","attributes":["Material:Cotton","Target Group:Man","Color:White,Black","Neckline:Round"]}]}
Thought:The available t shirts in Klarna are Lacoste Men's Pack of Plain T-Shirts, Hanes Men's Ultimate 6pk. Crewneck T-Shirts, Nike Boy's Jordan Stretch T-shirts, Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack, and adidas Comfort T-shirts Men's 3-pack.
Final Answer: The available t shirts in Klarna are Lacoste Men's Pack of Plain T-Shirts, Hanes Men's Ultimate 6pk. Crewneck T-Shirts, Nike Boy's Jordan Stretch T-shirts, Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack, and adidas Comfort T-shirts Men's 3-pack.
> Finished chain.
"The available t shirts in Klarna are Lacoste Men's Pack of Plain T-Shirts, Hanes Men's Ultimate 6pk. Crewneck T-Shirts, Nike Boy's Jordan Stretch T-shirts, Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack, and adidas Comfort T-shirts Men's 3-pack."
Search Tools
This notebook shows off usage of various search tools.
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
Google Serper API Wrapper
First, let’s try to use the Google Serper API tool.
tools = load_tools(["google-serper"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is the weather in Pomfret?")
> Entering new AgentExecutor chain...
I should look up the current weather conditions.
Action: Search
Action Input: "weather in Pomfret"
Observation: 37°F
Thought: I now know the current temperature in Pomfret.
Final Answer: The current temperature in Pomfret is 37°F.
> Finished chain.
'The current temperature in Pomfret is 37°F.'
SerpAPI
Now, let’s use the SerpAPI tool.
tools = load_tools(["serpapi"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is the weather in Pomfret?")
> Entering new AgentExecutor chain...
I need to find out what the current weather is in Pomfret.
Action: Search
Action Input: "weather in Pomfret"
Observation: Partly cloudy skies during the morning hours will give way to cloudy skies with light rain and snow developing in the afternoon. High 42F. Winds WNW at 10 to 15 ...
Thought: I now know the current weather in Pomfret.
Final Answer: Partly cloudy skies during the morning hours will give way to cloudy skies with light rain and snow developing in the afternoon. High 42F. Winds WNW at 10 to 15 mph.
> Finished chain.
'Partly cloudy skies during the morning hours will give way to cloudy skies with light rain and snow developing in the afternoon. High 42F. Winds WNW at 10 to 15 mph.'
GoogleSearchAPIWrapper
Now, let’s use the official Google Search API Wrapper.
tools = load_tools(["google-search"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is the weather in Pomfret?")
> Entering new AgentExecutor chain...
I should look up the current weather conditions.
Action: Google Search
Action Input: "weather in Pomfret"
Observation: Showers early becoming a steady light rain later in the day. Near record high temperatures. High around 60F. Winds SW at 10 to 15 mph. Chance of rain 60%. Pomfret, CT Weather Forecast, with current conditions, wind, air quality, and what to expect for the next 3 days. Hourly Weather-Pomfret, CT. As of 12:52 am EST. Special Weather Statement +2 ... Hazardous Weather Conditions. Special Weather Statement ... Pomfret CT. Tonight ... National Digital Forecast Database Maximum Temperature Forecast. Pomfret Center Weather Forecasts. Weather Underground provides local & long-range weather forecasts, weatherreports, maps & tropical weather conditions for ... Pomfret, CT 12 hour by hour weather forecast includes precipitation, temperatures, sky conditions, rain chance, dew-point, relative humidity, wind direction ... North Pomfret Weather Forecasts. Weather Underground provides local & long-range weather forecasts, weatherreports, maps & tropical weather conditions for ... Today's Weather - Pomfret, CT. Dec 31, 2022 4:00 PM. Putnam MS. --. Weather forecast icon. Feels like --. Hi --. Lo --. Pomfret, CT temperature trend for the next 14 Days. Find daytime highs and nighttime lows from TheWeatherNetwork.com. Pomfret, MD Weather Forecast Date: 332 PM EST Wed Dec 28 2022. The area/counties/county of: Charles, including the cites of: St. Charles and Waldorf.
Thought: I now know the current weather conditions in Pomfret. | https://langchain.readthedocs.io/en/latest/modules/agents/tools/examples/search_tools.html |
Final Answer: Showers early becoming a steady light rain later in the day. Near record high temperatures. High around 60F. Winds SW at 10 to 15 mph. Chance of rain 60%.
> Finished AgentExecutor chain.
'Showers early becoming a steady light rain later in the day. Near record high temperatures. High around 60F. Winds SW at 10 to 15 mph. Chance of rain 60%.'
SearxNG Meta Search Engine
Here we will be using a self hosted SearxNG meta search engine.
tools = load_tools(["searx-search"], searx_host="http://localhost:8888", llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is the weather in Pomfret")
> Entering new AgentExecutor chain...
I should look up the current weather
Action: SearX Search
Action Input: "weather in Pomfret"
Observation: Mainly cloudy with snow showers around in the morning. High around 40F. Winds NNW at 5 to 10 mph. Chance of snow 40%. Snow accumulations less than one inch.
10 Day Weather - Pomfret, MD As of 1:37 pm EST Today 49°/ 41° 52% Mon 27 | Day 49° 52% SE 14 mph Cloudy with occasional rain showers. High 49F. Winds SE at 10 to 20 mph. Chance of rain 50%....
10 Day Weather - Pomfret, VT As of 3:51 am EST Special Weather Statement Today 39°/ 32° 37% Wed 01 | Day 39° 37% NE 4 mph Cloudy with snow showers developing for the afternoon. High 39F....
Pomfret, CT ; Current Weather. 1:06 AM. 35°F · RealFeel® 32° ; TODAY'S WEATHER FORECAST. 3/3. 44°Hi. RealFeel® 50° ; TONIGHT'S WEATHER FORECAST. 3/3. 32°Lo.
Pomfret, MD Forecast Today Hourly Daily Morning 41° 1% Afternoon 43° 0% Evening 35° 3% Overnight 34° 2% Don't Miss Finally, Here’s Why We Get More Colds and Flu When It’s Cold Coast-To-Coast...
Pomfret, MD Weather Forecast | AccuWeather Current Weather 5:35 PM 35° F RealFeel® 36° RealFeel Shade™ 36° Air Quality Excellent Wind E 3 mph Wind Gusts 5 mph Cloudy More Details WinterCast...
Pomfret, VT Weather Forecast | AccuWeather Current Weather 11:21 AM 23° F RealFeel® 27° RealFeel Shade™ 25° Air Quality Fair Wind ESE 3 mph Wind Gusts 7 mph Cloudy More Details WinterCast...
Pomfret Center, CT Weather Forecast | AccuWeather Daily Current Weather 6:50 PM 39° F RealFeel® 36° Air Quality Fair Wind NW 6 mph Wind Gusts 16 mph Mostly clear More Details WinterCast...
12:00 pm · Feels Like36° · WindN 5 mph · Humidity43% · UV Index3 of 10 · Cloud Cover65% · Rain Amount0 in ...
Pomfret Center, CT Weather Conditions | Weather Underground star Popular Cities San Francisco, CA 49 °F Clear Manhattan, NY 37 °F Fair Schiller Park, IL (60176) warning39 °F Mostly Cloudy...
Thought: I now know the final answer
Final Answer: The current weather in Pomfret is mainly cloudy with snow showers around in the morning. The temperature is around 40F with winds NNW at 5 to 10 mph. Chance of snow is 40%.
> Finished chain.
'The current weather in Pomfret is mainly cloudy with snow showers around in the morning. The temperature is around 40F with winds NNW at 5 to 10 mph. Chance of snow is 40%.'
Requests
The web contains a lot of information that LLMs do not have access to. To make it easy for LLMs to interact with that information, we provide a wrapper around the Python Requests module that takes in a URL and fetches data from that URL.
from langchain.agents import load_tools
requests_tools = load_tools(["requests_all"])
requests_tools
[RequestsGetTool(name='requests_get', description='A portal to the internet. Use this when you need to get specific content from a website. Input should be a url (i.e. https://www.google.com). The output will be the text response of the GET request.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None)),
RequestsPostTool(name='requests_post', description='Use this when you want to POST to a website.\n Input should be a json string with two keys: "url" and "data".\n The value of "url" should be a string, and the value of "data" should be a dictionary of \n key-value pairs you want to POST to the url.\n Be careful to always use double quotes for strings in the json string\n The output will be the text response of the POST request.\n ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None)),
RequestsPatchTool(name='requests_patch', description='Use this when you want to PATCH to a website.\n Input should be a json string with two keys: "url" and "data".\n The value of "url" should be a string, and the value of "data" should be a dictionary of \n key-value pairs you want to PATCH to the url.\n Be careful to always use double quotes for strings in the json string\n The output will be the text response of the PATCH request.\n ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None)),
RequestsPutTool(name='requests_put', description='Use this when you want to PUT to a website.\n Input should be a json string with two keys: "url" and "data".\n The value of "url" should be a string, and the value of "data" should be a dictionary of \n key-value pairs you want to PUT to the url.\n Be careful to always use double quotes for strings in the json string.\n The output will be the text response of the PUT request.\n ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None)),
RequestsDeleteTool(name='requests_delete', description='A portal to the internet. Use this when you need to make a DELETE request to a URL. Input should be a specific url, and the output will be the text response of the DELETE request.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None))]
Inside the tool
Each requests tool contains a requests wrapper. You can work with these wrappers directly, as shown below.
# Each tool wraps a requests wrapper
requests_tools[0].requests_wrapper
TextRequestsWrapper(headers=None, aiosession=None)
from langchain.utilities import TextRequestsWrapper
requests = TextRequestsWrapper()
requests.get("https://www.google.com") | https://langchain.readthedocs.io/en/latest/modules/agents/tools/examples/requests.html |
'<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head><meta content="Search the world\'s information, including webpages, images, videos and more. Google has many special features to help you find exactly what you\'re looking for." name="description"><meta content="noodp" name="robots"><meta content="text/html; charset=UTF-8" http-equiv="Content-Type"><meta content="/images/branding/googleg/1x/googleg_standard_color_128dp.png" itemprop="image"><title>Google</title>…'
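The wrapper exposes the other HTTP verbs as well. A minimal sketch of a POST request (httpbin.org is used purely as an illustrative echo endpoint):
# post takes a URL and a dictionary of data to send in the request body
requests.post("https://httpbin.org/post", {"message": "hello"})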