# RetryOutputParser

While in some cases it is possible to fix a parsing mistake by looking only at the output, in other cases it isn't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the example below.

```python
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser, OutputFixingParser, RetryOutputParser
from pydantic import BaseModel, Field, validator
from typing import List

template = """Based on the user question, provide an Action and Action Input for what step should be taken.
{format_instructions}
Question: {query}
Response:"""

class Action(BaseModel):
    action: str = Field(description="action to take")
    action_input: str = Field(description="input to the action")

parser = PydanticOutputParser(pydantic_object=Action)

prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)

prompt_value = prompt.format_prompt(query="who is leo di caprios gf?")
bad_response = '{"action": "search"}'
```

If we try to parse this response as is, we get an error:

```python
parser.parse(bad_response)
```

```
ValidationError                           Traceback (most recent call last)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:24, in PydanticOutputParser.parse(self, text)
     23 json_object = json.loads(json_str)
---> 24 return self.pydantic_object.parse_obj(json_object)
     26 except (json.JSONDecodeError, ValidationError) as e:

File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:527, in pydantic.main.BaseModel.parse_obj()

File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:342, in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for Action
action_input
  field required (type=value_error.missing)

During handling of the above exception, another exception occurred:

OutputParserException                     Traceback (most recent call last)
Cell In[6], line 1
----> 1 parser.parse(bad_response)

File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text)
     27 name = self.pydantic_object.__name__
     28 msg = f"Failed to parse {name} from completion {text}. Got: {e}"
---> 29 raise OutputParserException(msg)

OutputParserException: Failed to parse Action from completion {"action": "search"}. Got: 1 validation error for Action
action_input
  field required (type=value_error.missing)
```

If we try to use the OutputFixingParser to fix this error, it will be confused: it doesn't know what to actually put for the action input.

```python
fix_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())
fix_parser.parse(bad_response)
```

```
Action(action='search', action_input='')
```

Instead, we can use a retrying parser, which passes the prompt (as well as the original output) back to the model to try again to get a better response. The RetryWithErrorOutputParser used below additionally sends the parsing error along.

```python
from langchain.output_parsers import RetryWithErrorOutputParser

retry_parser = RetryWithErrorOutputParser.from_llm(parser=parser, llm=OpenAI(temperature=0))
retry_parser.parse_with_prompt(bad_response, prompt_value)
```

```
Action(action='search', action_input='who is leo di caprios gf?')
```
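For comparison, the plain RetryOutputParser imported at the top exposes the same interface; a minimal sketch, assuming the same parser, prompt_value, and bad_response as above (unlike the variant above, it resends only the prompt and the bad completion, without the error message):

```python
# Sketch: retry by showing the model the original prompt and bad output.
retry_parser = RetryOutputParser.from_llm(parser=parser, llm=OpenAI(temperature=0))
retry_parser.parse_with_prompt(bad_response, prompt_value)
```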
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/retry.html
# Structured Output Parser

While the Pydantic/JSON parser is more powerful, we initially experimented with data structures having only text fields.

```python
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
```

Here we define the response schema we want to receive.

```python
response_schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(name="source", description="source used to answer the user's question, should be a website.")
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
```

We now get a string that contains instructions for how the response should be formatted, and we then insert that into our prompt.

```python
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(
    template="answer the users question as best as possible.\n{format_instructions}\n{question}",
    input_variables=["question"],
    partial_variables={"format_instructions": format_instructions}
)
```

We can now use this to format a prompt to send to the language model, and then parse the returned result.

```python
model = OpenAI(temperature=0)
_input = prompt.format_prompt(question="what's the capital of france?")
output = model(_input.to_string())
output_parser.parse(output)
```

```
{'answer': 'Paris', 'source': 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html'}
```

And here's an example of using this with a chat model:

```python
chat_model = ChatOpenAI(temperature=0)
prompt = ChatPromptTemplate(
    messages=[
        HumanMessagePromptTemplate.from_template("answer the users question as best as possible.\n{format_instructions}\n{question}")
    ],
    input_variables=["question"],
    partial_variables={"format_instructions": format_instructions}
)
_input = prompt.format_prompt(question="what's the capital of france?")
output = chat_model(_input.to_messages())
output_parser.parse(output.content)
```

```
{'answer': 'Paris', 'source': 'https://en.wikipedia.org/wiki/Paris'}
```
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/structured.html
# CommaSeparatedListOutputParser

Here's another parser that is strictly less powerful than the Pydantic/JSON parser.

```python
from langchain.output_parsers import CommaSeparatedListOutputParser
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI

output_parser = CommaSeparatedListOutputParser()
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(
    template="List five {subject}.\n{format_instructions}",
    input_variables=["subject"],
    partial_variables={"format_instructions": format_instructions}
)

model = OpenAI(temperature=0)
_input = prompt.format(subject="ice cream flavors")
output = model(_input)
output_parser.parse(output)
```

```
['Vanilla', 'Chocolate', 'Strawberry', 'Mint Chocolate Chip', 'Cookies and Cream']
```
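There is little magic behind this parser; a rough sketch of what the parse step amounts to (an illustration of the idea, not the library's exact code):

```python
def parse_comma_separated(text: str) -> list:
    # Split the completion on commas and strip surrounding whitespace.
    return [item.strip() for item in text.strip().split(",")]

parse_comma_separated("Vanilla, Chocolate, Strawberry")
# -> ['Vanilla', 'Chocolate', 'Strawberry']
```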
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/comma_separated.html
# PydanticOutputParser

This output parser allows users to specify an arbitrary JSON schema and query LLMs for JSON outputs that conform to that schema.

Keep in mind that large language models are leaky abstractions! You'll have to use an LLM with sufficient capacity to generate well-formed JSON. In the OpenAI family, DaVinci can do this reliably, but Curie's ability already drops off dramatically.

Use Pydantic to declare your data model. Pydantic's BaseModel is like a Python dataclass, but with actual type checking and coercion.

```python
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List

model_name = 'text-davinci-003'
temperature = 0.0
model = OpenAI(model_name=model_name, temperature=temperature)

# Define your desired data structure.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

    # You can add custom validation logic easily with Pydantic.
    @validator('setup')
    def question_ends_with_question_mark(cls, field):
        if field[-1] != '?':
            raise ValueError("Badly formed question!")
        return field

# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."

# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Joke)

prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)

_input = prompt.format_prompt(query=joke_query)
output = model(_input.to_string())
parser.parse(output)
```

```
Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')
```

```python
# Here's another example, but with a compound typed field.
class Actor(BaseModel):
    name: str = Field(description="name of an actor")
    film_names: List[str] = Field(description="list of names of films they starred in")

actor_query = "Generate the filmography for a random actor."

parser = PydanticOutputParser(pydantic_object=Actor)

prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)

_input = prompt.format_prompt(query=actor_query)
output = model(_input.to_string())
parser.parse(output)
```

```
Actor(name='Tom Hanks', film_names=['Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Cast Away', 'Toy Story'])
```
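The custom validator on Joke above also guards direct construction; a hypothetical illustration (not from the original notebook) of what happens when the setup lacks its question mark:

```python
# Pydantic runs the validator at construction time, so this raises a
# ValidationError carrying the "Badly formed question!" message.
Joke(setup="Why did the chicken cross the road", punchline="To get to the other side!")
```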
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/pydantic.html
# OutputFixingParser

This output parser wraps another output parser and tries to fix any mistakes.

The Pydantic guardrail simply tries to parse the LLM response. If it does not parse correctly, then it errors. But we can do other things besides throw errors. Specifically, we can pass the misformatted output, along with the format instructions, to the model and ask it to fix it.

For this example, we'll use the Actor output parser from above. Here's what happens if we pass it a result that does not comply with the schema:

```python
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List

class Actor(BaseModel):
    name: str = Field(description="name of an actor")
    film_names: List[str] = Field(description="list of names of films they starred in")

actor_query = "Generate the filmography for a random actor."

parser = PydanticOutputParser(pydantic_object=Actor)

misformatted = "{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}"

parser.parse(misformatted)
```

```
JSONDecodeError                           Traceback (most recent call last)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:23, in PydanticOutputParser.parse(self, text)
     22 json_str = match.group()
---> 23 json_object = json.loads(json_str)
     24 return self.pydantic_object.parse_obj(json_object)

File ~/.pyenv/versions/3.9.1/lib/python3.9/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    343 if (cls is None and object_hook is None and
    344         parse_int is None and parse_float is None and
    345         parse_constant is None and object_pairs_hook is None and not kw):
--> 346     return _default_decoder.decode(s)
    347 if cls is None:

File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
    333 """Return the Python representation of ``s`` (a ``str`` instance
    334 containing a JSON document).
    335
    336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    338 end = _w(s, end).end()

File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:353, in JSONDecoder.raw_decode(self, s, idx)
    352 try:
--> 353     obj, end = self.scan_once(s, idx)
    354 except StopIteration as err:

JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)

During handling of the above exception, another exception occurred:

OutputParserException                     Traceback (most recent call last)
Cell In[6], line 1
----> 1 parser.parse(misformatted)

File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text)
     27 name = self.pydantic_object.__name__
     28 msg = f"Failed to parse {name} from completion {text}. Got: {e}"
---> 29 raise OutputParserException(msg)

OutputParserException: Failed to parse Actor from completion {'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}. Got: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
```

Now we can construct and use an OutputFixingParser. This output parser takes as an argument another output parser, but also an LLM with which to try to correct any formatting mistakes.

```python
from langchain.output_parsers import OutputFixingParser

new_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())
new_parser.parse(misformatted)
```

```
Actor(name='Tom Hanks', film_names=['Forrest Gump'])
```
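The root cause of the failure above is that the completion uses Python-style single quotes, while JSON requires double-quoted strings; the standard library alone confirms this:

```python
import json

json.loads('{"name": "Tom Hanks"}')  # valid JSON -> {'name': 'Tom Hanks'}
json.loads("{'name': 'Tom Hanks'}")  # raises JSONDecodeError: Expecting
                                     # property name enclosed in double quotes
```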
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html
# Datetime

This OutputParser shows how to parse LLM output into a datetime format.

```python
from langchain.prompts import PromptTemplate
from langchain.output_parsers import DatetimeOutputParser
from langchain.chains import LLMChain
from langchain.llms import OpenAI

output_parser = DatetimeOutputParser()
template = """Answer the users question:

{question}

{format_instructions}"""
prompt = PromptTemplate.from_template(template, partial_variables={"format_instructions": output_parser.get_format_instructions()})
chain = LLMChain(prompt=prompt, llm=OpenAI())

output = chain.run("around when was bitcoin founded?")
output
```

```
'\n\n2008-01-03T18:15:05.000000Z'
```

```python
output_parser.parse(output)
```

```
datetime.datetime(2008, 1, 3, 18, 15, 5)
```
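The parse step is essentially datetime.strptime applied with the parser's format string; a minimal sketch, assuming a default format matching the ISO-like output shown above:

```python
from datetime import datetime

# Strip the leading whitespace the model emitted, then parse with the
# assumed default format string.
raw = '\n\n2008-01-03T18:15:05.000000Z'
datetime.strptime(raw.strip(), "%Y-%m-%dT%H:%M:%S.%fZ")
# -> datetime.datetime(2008, 1, 3, 18, 15, 5)
```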
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/datetime.html
# Similarity ExampleSelector

The SemanticSimilarityExampleSelector selects examples based on which examples are most similar to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs.

```python
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)

# Examples of a pretend task of creating antonyms.
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "windy", "output": "calm"},
]

example_selector = SemanticSimilarityExampleSelector.from_examples(
    # This is the list of examples available to select from.
    examples,
    # This is the embedding class used to produce embeddings which are used to measure semantic similarity.
    OpenAIEmbeddings(),
    # This is the VectorStore class that is used to store the embeddings and do a similarity search over.
    Chroma,
    # This is the number of examples to produce.
    k=1
)
similar_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
```

```
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
```

```python
# Input is a feeling, so should select the happy/sad example
print(similar_prompt.format(adjective="worried"))
```

```
Give the antonym of every input

Input: happy
Output: sad

Input: worried
Output:
```

```python
# Input is a measurement, so should select the tall/short example
print(similar_prompt.format(adjective="fat"))
```

```
Give the antonym of every input

Input: happy
Output: sad

Input: fat
Output:
```

```python
# You can add new examples to the SemanticSimilarityExampleSelector as well
similar_prompt.example_selector.add_example({"input": "enthusiastic", "output": "apathetic"})
print(similar_prompt.format(adjective="joyful"))
```

```
Give the antonym of every input

Input: happy
Output: sad

Input: joyful
Output:
```
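Cosine similarity, the metric mentioned above, compares the direction of two embedding vectors while ignoring their magnitudes; a self-contained sketch:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Dot product normalized by the vector magnitudes: 1.0 means the
    # vectors point in the same direction, 0.0 means they are orthogonal.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

cosine_similarity(np.array([1.0, 0.0]), np.array([1.0, 1.0]))  # -> ~0.707
```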
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/similarity.html
# LengthBased ExampleSelector

This ExampleSelector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.

```python
from langchain.prompts import PromptTemplate
from langchain.prompts import FewShotPromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector

# Examples of a pretend task of creating antonyms.
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "windy", "output": "calm"},
]

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
example_selector = LengthBasedExampleSelector(
    # These are the examples it has available to choose from.
    examples=examples,
    # This is the PromptTemplate being used to format the examples.
    example_prompt=example_prompt,
    # This is the maximum length that the formatted examples should be.
    # Length is measured by the get_text_length function below.
    max_length=25,
    # This is the function used to get the length of a string, which is used
    # to determine which examples to include. It is commented out because
    # it is provided as a default value if none is specified.
    # get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x))
)
dynamic_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
```

```python
# An example with small input, so it selects all examples.
print(dynamic_prompt.format(adjective="big"))
```

```
Give the antonym of every input

Input: happy
Output: sad

Input: tall
Output: short

Input: energetic
Output: lethargic

Input: sunny
Output: gloomy

Input: windy
Output: calm

Input: big
Output:
```

```python
# An example with long input, so it selects only one example.
long_string = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else"
print(dynamic_prompt.format(adjective=long_string))
```

```
Give the antonym of every input

Input: happy
Output: sad

Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else
Output:
```

```python
# You can add an example to an example selector as well.
new_example = {"input": "big", "output": "small"}
dynamic_prompt.example_selector.add_example(new_example)
print(dynamic_prompt.format(adjective="enthusiastic"))
```

```
Give the antonym of every input

Input: happy
Output: sad

Input: tall
Output: short

Input: energetic
Output: lethargic

Input: sunny
Output: gloomy

Input: windy
Output: calm

Input: big
Output: small

Input: enthusiastic
Output:
```
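The commented-out default get_text_length above simply counts whitespace-separated pieces, so the arithmetic behind max_length=25 is easy to verify:

```python
import re

# The default length function: split on newlines and spaces, count the pieces.
get_text_length = lambda x: len(re.split("\n| ", x))

get_text_length("Input: happy\nOutput: sad")  # -> 4
```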
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/length_based.html
# NGram Overlap ExampleSelector

The NGramOverlapExampleSelector selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive.

The selector allows a threshold score to be set. Examples with an ngram overlap score less than or equal to the threshold are excluded. By default the threshold is set to -1.0, so it will not exclude any examples, only reorder them. Setting the threshold to 0.0 will exclude examples that have no ngram overlap with the input.

```python
from langchain.prompts import PromptTemplate
from langchain.prompts.example_selector.ngram_overlap import NGramOverlapExampleSelector
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)

# These are examples of a fictional translation task.
examples = [
    {"input": "See Spot run.", "output": "Ver correr a Spot."},
    {"input": "My dog barks.", "output": "Mi perro ladra."},
    {"input": "Spot can run.", "output": "Spot puede correr."},
]

example_selector = NGramOverlapExampleSelector(
    # These are the examples it has available to choose from.
    examples=examples,
    # This is the PromptTemplate being used to format the examples.
    example_prompt=example_prompt,
    # This is the threshold at which the selector stops including examples.
    # It is set to -1.0 by default.
    threshold=-1.0,
    # For a negative threshold:
    #   the selector sorts examples by ngram overlap score and excludes none.
    # For a threshold greater than 1.0:
    #   the selector excludes all examples and returns an empty list.
    # For a threshold equal to 0.0:
    #   the selector sorts examples by ngram overlap score
    #   and excludes those with no ngram overlap with the input.
)
dynamic_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the Spanish translation of every input",
    suffix="Input: {sentence}\nOutput:",
    input_variables=["sentence"],
)
```

```python
# An example input with large ngram overlap with "Spot can run."
# and no overlap with "My dog barks."
print(dynamic_prompt.format(sentence="Spot can run fast."))
```

```
Give the Spanish translation of every input

Input: Spot can run.
Output: Spot puede correr.

Input: See Spot run.
Output: Ver correr a Spot.

Input: My dog barks.
Output: Mi perro ladra.

Input: Spot can run fast.
Output:
```

```python
# You can add examples to the NGramOverlapExampleSelector as well.
new_example = {"input": "Spot plays fetch.", "output": "Spot juega a buscar."}
example_selector.add_example(new_example)
print(dynamic_prompt.format(sentence="Spot can run fast."))
```

```
Give the Spanish translation of every input

Input: Spot can run.
Output: Spot puede correr.

Input: See Spot run.
Output: Ver correr a Spot.

Input: Spot plays fetch.
Output: Spot juega a buscar.

Input: My dog barks.
Output: Mi perro ladra.

Input: Spot can run fast.
Output:
```

```python
# You can set a threshold at which examples are excluded.
# For example, setting the threshold to 0.0
# excludes examples with no ngram overlap with the input.
# Since "My dog barks." has no ngram overlap with "Spot can run fast.",
# it is excluded.
example_selector.threshold = 0.0
print(dynamic_prompt.format(sentence="Spot can run fast."))
```

```
Give the Spanish translation of every input

Input: Spot can run.
Output: Spot puede correr.

Input: See Spot run.
Output: Ver correr a Spot.

Input: Spot plays fetch.
Output: Spot juega a buscar.

Input: Spot can run fast.
Output:
```

```python
# Setting a small nonzero threshold
example_selector.threshold = 0.09
print(dynamic_prompt.format(sentence="Spot can play fetch."))
```

```
Give the Spanish translation of every input

Input: Spot can run.
Output: Spot puede correr.

Input: Spot plays fetch.
Output: Spot juega a buscar.

Input: Spot can play fetch.
Output:
```

```python
# Setting a threshold greater than 1.0
example_selector.threshold = 1.0 + 1e-9
print(dynamic_prompt.format(sentence="Spot can play fetch."))
```

```
Give the Spanish translation of every input

Input: Spot can play fetch.
Output:
```
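To build intuition for the score, here is a rough, self-contained illustration of an n-gram overlap metric; the selector's actual scoring is a BLEU-style sentence measure that differs in detail, so treat this as an approximation rather than the library's code:

```python
def ngram_overlap(candidate: str, reference: str, n: int = 1) -> float:
    """Fraction of the candidate's n-grams that also appear in the reference."""
    def ngrams(sentence: str) -> set:
        tokens = sentence.split()
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    cand, ref = ngrams(candidate), ngrams(reference)
    return len(cand & ref) / len(cand) if cand else 0.0

ngram_overlap("Spot can run fast.", "Spot can run.")  # -> 0.5 with this crude tokenizer
ngram_overlap("Spot can run fast.", "My dog barks.")  # -> 0.0
```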
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
# Maximal Marginal Relevance ExampleSelector

The MaxMarginalRelevanceExampleSelector selects examples based on a combination of which examples are most similar to the inputs, while also optimizing for diversity. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs, and then iteratively adding them while penalizing them for closeness to already selected examples.

```python
from langchain.prompts.example_selector import MaxMarginalRelevanceExampleSelector, SemanticSimilarityExampleSelector
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)

# Examples of a pretend task of creating antonyms.
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "windy", "output": "calm"},
]

example_selector = MaxMarginalRelevanceExampleSelector.from_examples(
    # This is the list of examples available to select from.
    examples,
    # This is the embedding class used to produce embeddings which are used to measure semantic similarity.
    OpenAIEmbeddings(),
    # This is the VectorStore class that is used to store the embeddings and do a similarity search over.
    FAISS,
    # This is the number of examples to produce.
    k=2
)
mmr_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
```

```python
# Input is a feeling, so should select the happy/sad example as the first one
print(mmr_prompt.format(adjective="worried"))
```

```
Give the antonym of every input

Input: happy
Output: sad

Input: windy
Output: calm

Input: worried
Output:
```

```python
# Let's compare this to what we would get if we went solely off of similarity,
# by using SemanticSimilarityExampleSelector instead of MaxMarginalRelevanceExampleSelector.
example_selector = SemanticSimilarityExampleSelector.from_examples(
    # This is the list of examples available to select from.
    examples,
    # This is the embedding class used to produce embeddings which are used to measure semantic similarity.
    OpenAIEmbeddings(),
    # This is the VectorStore class that is used to store the embeddings and do a similarity search over.
    FAISS,
    # This is the number of examples to produce.
    k=2
)
similar_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
print(similar_prompt.format(adjective="worried"))
```

```
Give the antonym of every input

Input: happy
Output: sad

Input: sunny
Output: gloomy

Input: worried
Output:
```
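The "similar yet diverse" behavior described above is the classic MMR objective: pick the candidate that maximizes a weighted difference between relevance to the query and redundancy with what is already selected. A self-contained sketch of the scoring step (illustrative only; the weight name lam is an assumption for exposition, not a parameter shown above):

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mmr_score(query, candidate, selected, lam=0.5):
    # Relevance: similarity of the candidate to the query embedding.
    relevance = cosine(query, candidate)
    # Redundancy: similarity to the closest already-selected example.
    redundancy = max((cosine(candidate, s) for s in selected), default=0.0)
    # The candidate with the highest score is picked next, appended to
    # `selected`, and the remaining scores are recomputed.
    return lam * relevance - (1 - lam) * redundancy
```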
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/mmr.html
# How to create a custom example selector

In this tutorial, we'll create a custom example selector that picks examples from a given list of examples.

An ExampleSelector must implement two methods:

1. An add_example method, which takes in an example and adds it into the ExampleSelector.
2. A select_examples method, which takes in input variables (which are meant to be user input) and returns a list of examples to use in the few shot prompt.

Let's implement a custom ExampleSelector that just selects two examples at random.

Note: take a look at the current set of example selector implementations supported in LangChain here.

## Implement custom example selector

```python
from langchain.prompts.example_selector.base import BaseExampleSelector
from typing import Dict, List
import numpy as np

class CustomExampleSelector(BaseExampleSelector):

    def __init__(self, examples: List[Dict[str, str]]):
        self.examples = examples

    def add_example(self, example: Dict[str, str]) -> None:
        """Add new example to store for a key."""
        self.examples.append(example)

    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        """Select which examples to use based on the inputs."""
        return np.random.choice(self.examples, size=2, replace=False)
```

## Use custom example selector

```python
examples = [
    {"foo": "1"},
    {"foo": "2"},
    {"foo": "3"}
]

# Initialize example selector.
example_selector = CustomExampleSelector(examples)

# Select examples
example_selector.select_examples({"foo": "foo"})
# -> array([{'foo': '2'}, {'foo': '3'}], dtype=object)

# Add new example to the set of examples
example_selector.add_example({"foo": "4"})
example_selector.examples
# -> [{'foo': '1'}, {'foo': '2'}, {'foo': '3'}, {'foo': '4'}]

# Select examples
example_selector.select_examples({"foo": "foo"})
# -> array([{'foo': '1'}, {'foo': '4'}], dtype=object)
```
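The same two-method interface supports any deterministic policy as well; for instance, a hypothetical selector (not part of the original tutorial) that returns every other example:

```python
class AlternateExampleSelector(BaseExampleSelector):
    """Hypothetical variant: deterministically select every other example."""

    def __init__(self, examples: List[Dict[str, str]]):
        self.examples = examples

    def add_example(self, example: Dict[str, str]) -> None:
        self.examples.append(example)

    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        # Keep the examples at even indices: 0, 2, 4, ...
        return self.examples[::2]
```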
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html
# How-To Guides

If you're new to the library, you may want to start with the Quickstart. The user guide here shows more advanced workflows and how to use the library in different ways.

- Connecting to a Feature Store
- How to create a custom prompt template
- How to create a prompt template that uses few shot examples
- How to work with partial Prompt Templates
- Prompt Composition
- How to serialize prompts
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/how_to_guides.html
# Getting Started

In this tutorial, we will learn about:

- what a prompt template is, and why it is needed,
- how to create a prompt template,
- how to pass few shot examples to a prompt template,
- how to select examples for a prompt template.

## What is a prompt template?

A prompt template refers to a reproducible way to generate a prompt. It contains a text string ("the template") that can take in a set of parameters from the end user and generate a prompt.

The prompt template may contain:

- instructions to the language model,
- a set of few shot examples to help the language model generate a better response,
- a question to the language model.

The following code snippet contains an example of a prompt template:

```python
from langchain import PromptTemplate

template = """
I want you to act as a naming consultant for new companies.
What is a good name for a company that makes {product}?
"""

prompt = PromptTemplate(
    input_variables=["product"],
    template=template,
)
prompt.format(product="colorful socks")
# -> I want you to act as a naming consultant for new companies.
# -> What is a good name for a company that makes colorful socks?
```

## Create a prompt template

You can create simple hardcoded prompts using the PromptTemplate class. Prompt templates can take any number of input variables, and can be formatted to generate a prompt.

```python
from langchain import PromptTemplate

# An example prompt with no input variables
no_input_prompt = PromptTemplate(input_variables=[], template="Tell me a joke.")
no_input_prompt.format()
# -> "Tell me a joke."

# An example prompt with one input variable
one_input_prompt = PromptTemplate(input_variables=["adjective"], template="Tell me a {adjective} joke.")
one_input_prompt.format(adjective="funny")
# -> "Tell me a funny joke."

# An example prompt with multiple input variables
multiple_input_prompt = PromptTemplate(
    input_variables=["adjective", "content"],
    template="Tell me a {adjective} joke about {content}."
)
multiple_input_prompt.format(adjective="funny", content="chickens")
# -> "Tell me a funny joke about chickens."
```

If you do not wish to specify input_variables manually, you can also create a PromptTemplate using the from_template class method. langchain will automatically infer the input_variables based on the template passed.

```python
template = "Tell me a {adjective} joke about {content}."

prompt_template = PromptTemplate.from_template(template)
prompt_template.input_variables
# -> ['adjective', 'content']
prompt_template.format(adjective="funny", content="chickens")
# -> Tell me a funny joke about chickens.
```

You can create custom prompt templates that format the prompt in any way you want. For more information, see Custom Prompt Templates.

## Template formats

By default, PromptTemplate will treat the provided template as a Python f-string. You can specify another template format through the template_format argument:

```python
# Make sure jinja2 is installed before running this
jinja2_template = "Tell me a {{ adjective }} joke about {{ content }}"
prompt_template = PromptTemplate.from_template(template=jinja2_template, template_format="jinja2")

prompt_template.format(adjective="funny", content="chickens")
# -> Tell me a funny joke about chickens.
```

Currently, PromptTemplate only supports the jinja2 and f-string templating formats. If there is any other templating format that you would like to use, feel free to open an issue on the GitHub page.

## Validate template

By default, PromptTemplate will validate the template string by checking whether the input_variables match the variables defined in the template. You can disable this behavior by setting validate_template to False.

```python
template = "I am learning langchain because {reason}."

prompt_template = PromptTemplate(template=template,
                                 input_variables=["reason", "foo"]) # ValueError due to extra variables
prompt_template = PromptTemplate(template=template,
                                 input_variables=["reason", "foo"],
                                 validate_template=False) # No error
```

## Serialize prompt template

You can save your PromptTemplate into a file in your local filesystem. langchain will automatically infer the file format through the file extension name. Currently, langchain supports saving templates to YAML and JSON files.

```python
prompt_template.save("awesome_prompt.json") # Save to JSON file
```

```python
from langchain.prompts import load_prompt
loaded_prompt = load_prompt("awesome_prompt.json")

assert prompt_template == loaded_prompt
```

langchain also supports loading prompt templates from LangChainHub, which contains a collection of useful prompts you can use in your project. You can read more about LangChainHub and the prompts available with it here.

```python
from langchain.prompts import load_prompt

prompt = load_prompt("lc://prompts/conversation/prompt.json")
prompt.format(history="", input="What is 1 + 1?")
```

You can learn more about serializing prompt templates in How to serialize prompts.

## Pass few shot examples to a prompt template

Few shot examples are a set of examples that can be used to help the language model generate a better response.

To generate a prompt with few shot examples, you can use the FewShotPromptTemplate. This class takes in a PromptTemplate and a list of few shot examples. It then formats the prompt template with the few shot examples.

In this example, we'll create a prompt to generate word antonyms.

```python
from langchain import PromptTemplate, FewShotPromptTemplate

# First, create the list of few shot examples.
examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
]

# Next, we specify the template to format the examples we have provided.
# We use the `PromptTemplate` class for this.
example_formatter_template = """Word: {word}
Antonym: {antonym}
"""

example_prompt = PromptTemplate(
    input_variables=["word", "antonym"],
    template=example_formatter_template,
)

# Finally, we create the `FewShotPromptTemplate` object.
few_shot_prompt = FewShotPromptTemplate(
    # These are the examples we want to insert into the prompt.
    examples=examples,
    # This is how we want to format the examples when we insert them into the prompt.
    example_prompt=example_prompt,
    # The prefix is some text that goes before the examples in the prompt.
    # Usually, this consists of instructions.
    prefix="Give the antonym of every input\n",
    # The suffix is some text that goes after the examples in the prompt.
    # Usually, this is where the user input will go.
    suffix="Word: {input}\nAntonym: ",
    # The input variables are the variables that the overall prompt expects.
    input_variables=["input"],
    # The example_separator is the string we will use to join the prefix, examples, and suffix together with.
    example_separator="\n",
)

# We can now generate a prompt using the `format` method.
print(few_shot_prompt.format(input="big"))
# -> Give the antonym of every input
# ->
# -> Word: happy
# -> Antonym: sad
# ->
# -> Word: tall
# -> Antonym: short
# ->
# -> Word: big
# -> Antonym:
```

## Select examples for a prompt template

If you have a large number of examples, you can use the ExampleSelector to select a subset of examples that will be most informative for the language model. This will help you generate a prompt that is more likely to generate a good response.

Below, we'll use the LengthBasedExampleSelector, which selects examples based on the length of the input. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.

We'll continue with the example from the previous section, but this time we'll use the LengthBasedExampleSelector to select the examples.

```python
from langchain.prompts.example_selector import LengthBasedExampleSelector

# Examples of a pretend task of creating antonyms.
examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
    {"word": "energetic", "antonym": "lethargic"},
    {"word": "sunny", "antonym": "gloomy"},
    {"word": "windy", "antonym": "calm"},
]

# We'll use the `LengthBasedExampleSelector` to select the examples.
example_selector = LengthBasedExampleSelector(
    # These are the examples it has available to choose from.
    examples=examples,
    # This is the PromptTemplate being used to format the examples.
    example_prompt=example_prompt,
    # This is the maximum length that the formatted examples should be.
    # Length is measured by the get_text_length function below.
    max_length=25
    # This is the function used to get the length of a string, which is used
    # to determine which examples to include. It is commented out because
    # it is provided as a default value if none is specified.
    # get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x))
)

# We can now use the `example_selector` to create a `FewShotPromptTemplate`.
dynamic_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"],
    example_separator="\n\n",
)

# We can now generate a prompt using the `format` method.
print(dynamic_prompt.format(input="big"))
# -> Give the antonym of every input
# ->
# -> Word: happy
# -> Antonym: sad
# ->
# -> Word: tall
# -> Antonym: short
# ->
# -> Word: energetic
# -> Antonym: lethargic
# ->
# -> Word: sunny
# -> Antonym: gloomy
# ->
# -> Word: windy
# -> Antonym: calm
# ->
# -> Word: big
# -> Antonym:
```

In contrast, if we provide a very long input, the LengthBasedExampleSelector will select fewer examples to include in the prompt.

```python
long_string = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else"
print(dynamic_prompt.format(input=long_string))
# -> Give the antonym of every input
# ->
# -> Word: happy
# -> Antonym: sad
# ->
# -> Word: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else
# -> Antonym:
```

LangChain comes with a few example selectors that you can use. For more details on how to use them, see Example Selectors.

You can create custom example selectors that select examples based on any criteria you want. For more details on how to do this, see Creating a custom example selector.
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/getting_started.html
# How to work with partial Prompt Templates

A prompt template is a class with a .format method which takes in a key-value map and returns a string (a prompt) to pass to the language model. Like other methods, it can make sense to "partial" a prompt template, i.e. pass in a subset of the required values, so as to create a new prompt template which expects only the remaining subset of values.

LangChain supports this in two ways: we allow for partially formatted prompts (1) with string values, (2) with functions that return string values. These two different ways support different use cases. In the documentation below we go over the motivations for both use cases as well as how to do it in LangChain.

## Partial With Strings

One common use case for wanting to partial a prompt template is if you get some of the variables before others. For example, suppose you have a prompt template that requires two variables, foo and baz. If you get the foo value early on in the chain, but the baz value later, it can be annoying to wait until you have both variables in the same place to pass them to the prompt template. Instead, you can partial the prompt template with the foo value, and then pass the partialed prompt template along and just use that. Below is an example of doing this:

```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(template="{foo}{bar}", input_variables=["foo", "bar"])
partial_prompt = prompt.partial(foo="foo")
print(partial_prompt.format(bar="baz"))
```

```
foobaz
```

You can also just initialize the prompt with the partialed variables.

```python
prompt = PromptTemplate(template="{foo}{bar}", input_variables=["bar"], partial_variables={"foo": "foo"})
print(prompt.format(bar="baz"))
```

```
foobaz
```

## Partial With Functions

The other common use is to partial with a function. The use case for this is when you have a variable you know you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can't hard-code it in the prompt, and passing it along with the other input variables is a bit annoying. In this case, it's very handy to be able to partial the prompt with a function that always returns the current date.

```python
from datetime import datetime

def _get_datetime():
    now = datetime.now()
    return now.strftime("%m/%d/%Y, %H:%M:%S")

prompt = PromptTemplate(
    template="Tell me a {adjective} joke about the day {date}",
    input_variables=["adjective", "date"]
)
partial_prompt = prompt.partial(date=_get_datetime)
print(partial_prompt.format(adjective="funny"))
```

```
Tell me a funny joke about the day 02/27/2023, 22:15:16
```

You can also just initialize the prompt with the partialed variables, which often makes more sense in this workflow.

```python
prompt = PromptTemplate(
    template="Tell me a {adjective} joke about the day {date}",
    input_variables=["adjective"],
    partial_variables={"date": _get_datetime}
)
print(prompt.format(adjective="funny"))
```

```
Tell me a funny joke about the day 02/27/2023, 22:15:16
```
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/partial.html
# How to serialize prompts

It is often preferable to store prompts not as Python code but as files. This can make it easy to share, store, and version prompts. This notebook covers how to do that in LangChain, walking through all the different types of prompts and the different serialization options.

At a high level, the following design principles are applied to serialization:

- Both JSON and YAML are supported. We want to support serialization methods that are human readable on disk, and YAML and JSON are two of the most popular methods for that. Note that this rule applies to prompts. For other assets, like examples, different serialization methods may be supported.
- We support specifying everything in one file, or storing different components (templates, examples, etc.) in different files and referencing them. For some cases, storing everything in one file makes the most sense, but for others it is preferable to split up some of the assets (long templates, large examples, reusable components). LangChain supports both.

There is also a single entry point to load prompts from disk, making it easy to load any type of prompt.

```python
# All prompts are loaded through the `load_prompt` function.
from langchain.prompts import load_prompt
```

## PromptTemplate

This section covers examples for loading a PromptTemplate.

### Loading from YAML

This shows an example of loading a PromptTemplate from YAML.

```python
!cat simple_prompt.yaml
```

```yaml
_type: prompt
input_variables: ["adjective", "content"]
template: Tell me a {adjective} joke about {content}.
```

```python
prompt = load_prompt("simple_prompt.yaml")
print(prompt.format(adjective="funny", content="chickens"))
```

```
Tell me a funny joke about chickens.
```

### Loading from JSON

This shows an example of loading a PromptTemplate from JSON.

```python
!cat simple_prompt.json
```

```json
{
    "_type": "prompt",
    "input_variables": ["adjective", "content"],
    "template": "Tell me a {adjective} joke about {content}."
}
```

```python
prompt = load_prompt("simple_prompt.json")
print(prompt.format(adjective="funny", content="chickens"))
```

```
Tell me a funny joke about chickens.
```

### Loading Template from a File

This shows an example of storing the template in a separate file and then referencing it in the config. Notice that the key changes from template to template_path.

```python
!cat simple_template.txt
```

```
Tell me a {adjective} joke about {content}.
```

```python
!cat simple_prompt_with_template_file.json
```

```json
{
    "_type": "prompt",
    "input_variables": ["adjective", "content"],
    "template_path": "simple_template.txt"
}
```

```python
prompt = load_prompt("simple_prompt_with_template_file.json")
print(prompt.format(adjective="funny", content="chickens"))
```

```
Tell me a funny joke about chickens.
```

## FewShotPromptTemplate

This section covers examples for loading few shot prompt templates.

### Examples

This shows an example of what examples stored as JSON might look like.

```python
!cat examples.json
```

```json
[
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"}
]
```

And here is what the same examples stored as YAML might look like.

```python
!cat examples.yaml
```

```yaml
- input: happy
  output: sad
- input: tall
  output: short
```

### Loading from YAML

This shows an example of loading a few shot example from YAML.

```python
!cat few_shot_prompt.yaml
```

```yaml
_type: few_shot
input_variables: ["adjective"]
prefix: Write antonyms for the following words.
example_prompt:
    _type: prompt
    input_variables: ["input", "output"]
    template: "Input: {input}\nOutput: {output}"
examples: examples.json
suffix: "Input: {adjective}\nOutput:"
```

```python
prompt = load_prompt("few_shot_prompt.yaml")
print(prompt.format(adjective="funny"))
```

```
Write antonyms for the following words.

Input: happy
Output: sad

Input: tall
Output: short

Input: funny
Output:
```

The same would work if you loaded examples from the YAML file.

```python
!cat few_shot_prompt_yaml_examples.yaml
```

```yaml
_type: few_shot
input_variables: ["adjective"]
prefix: Write antonyms for the following words.
example_prompt:
    _type: prompt
    input_variables: ["input", "output"]
    template: "Input: {input}\nOutput: {output}"
examples: examples.yaml
suffix: "Input: {adjective}\nOutput:"
```

```python
prompt = load_prompt("few_shot_prompt_yaml_examples.yaml")
print(prompt.format(adjective="funny"))
```

```
Write antonyms for the following words.

Input: happy
Output: sad

Input: tall
Output: short

Input: funny
Output:
```

### Loading from JSON

This shows an example of loading a few shot example from JSON.

```python
!cat few_shot_prompt.json
```

```json
{
    "_type": "few_shot",
    "input_variables": ["adjective"],
    "prefix": "Write antonyms for the following words.",
    "example_prompt": {
        "_type": "prompt",
        "input_variables": ["input", "output"],
        "template": "Input: {input}\nOutput: {output}"
    },
    "examples": "examples.json",
    "suffix": "Input: {adjective}\nOutput:"
}
```

```python
prompt = load_prompt("few_shot_prompt.json")
print(prompt.format(adjective="funny"))
```

```
Write antonyms for the following words.

Input: happy
Output: sad

Input: tall
Output: short

Input: funny
Output:
```

### Examples in the Config

This shows an example of referencing the examples directly in the config.

```python
!cat few_shot_prompt_examples_in.json
```

```json
{
    "_type": "few_shot",
    "input_variables": ["adjective"],
    "prefix": "Write antonyms for the following words.",
    "example_prompt": {
        "_type": "prompt",
        "input_variables": ["input", "output"],
        "template": "Input: {input}\nOutput: {output}"
    },
    "examples": [
        {"input": "happy", "output": "sad"},
        {"input": "tall", "output": "short"}
    ],
    "suffix": "Input: {adjective}\nOutput:"
}
```

```python
prompt = load_prompt("few_shot_prompt_examples_in.json")
print(prompt.format(adjective="funny"))
```

```
Write antonyms for the following words.

Input: happy
Output: sad

Input: tall
Output: short

Input: funny
Output:
```

### Example Prompt from a File

This shows an example of loading the PromptTemplate that is used to format the examples from a separate file. Note that the key changes from example_prompt to example_prompt_path.

```python
!cat example_prompt.json
```

```json
{
    "_type": "prompt",
    "input_variables": ["input", "output"],
    "template": "Input: {input}\nOutput: {output}"
}
```

```python
!cat few_shot_prompt_example_prompt.json
```

```json
{
    "_type": "few_shot",
    "input_variables": ["adjective"],
    "prefix": "Write antonyms for the following words.",
    "example_prompt_path": "example_prompt.json",
    "examples": "examples.json",
    "suffix": "Input: {adjective}\nOutput:"
}
```

```python
prompt = load_prompt("few_shot_prompt_example_prompt.json")
print(prompt.format(adjective="funny"))
```

```
Write antonyms for the following words.

Input: happy
Output: sad

Input: tall
Output: short

Input: funny
Output:
```

## PromptTemplate with OutputParser

This shows an example of loading a prompt along with an OutputParser from a file.

```python
!cat prompt_with_output_parser.json
```

```json
{
    "input_variables": ["question", "student_answer"],
    "output_parser": {
        "regex": "(.*?)\\nScore: (.*)",
        "output_keys": ["answer", "score"],
        "default_output_key": null,
        "_type": "regex_parser"
    },
    "partial_variables": {},
    "template": "Given the following question and student answer, provide a correct answer and score the student answer.\nQuestion: {question}\nStudent Answer: {student_answer}\nCorrect Answer:",
    "template_format": "f-string",
    "validate_template": true,
    "_type": "prompt"
}
```

```python
prompt = load_prompt("prompt_with_output_parser.json")
prompt.output_parser.parse("George Washington was born in 1732 and died in 1799.\nScore: 1/2")
```

```
{'answer': 'George Washington was born in 1732 and died in 1799.',
 'score': '1/2'}
```
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html
1daa5b96d7ea-0
How to create a custom prompt template#
Let's suppose we want the LLM to generate English language explanations of a function given its name. To achieve this task, we will create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function.
Why are custom prompt templates needed?#
LangChain provides a set of default prompt templates that can be used to generate prompts for a variety of tasks. However, there may be cases where the default prompt templates do not meet your needs. For example, you may want to create a prompt template with specific dynamic instructions for your language model. In such cases, you can create a custom prompt template.
Take a look at the current set of default prompt templates here.
Creating a Custom Prompt Template#
There are essentially two distinct prompt templates available - string prompt templates and chat prompt templates. String prompt templates provide a simple prompt in string format, while chat prompt templates produce a more structured prompt to be used with a chat API.
In this guide, we will create a custom prompt using a string prompt template.
To create a custom string prompt template, there are two requirements:
It has an input_variables attribute that exposes what input variables the prompt template expects.
It exposes a format method that takes in keyword arguments corresponding to the expected input_variables and returns the formatted prompt.
We will create a custom prompt template that takes in the function name as input and formats the prompt to provide the source code of the function. To achieve this, let's first create a function that will return the source code of a function given its name.
import inspect
def get_source_code(function_name):
# Get the source code of the function
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/custom_prompt_template.html
1daa5b96d7ea-1
return inspect.getsource(function_name)
Next, we'll create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function.
from langchain.prompts import StringPromptTemplate
from pydantic import BaseModel, validator
class FunctionExplainerPromptTemplate(StringPromptTemplate, BaseModel):
""" A custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function. """
@validator("input_variables")
def validate_input_variables(cls, v):
""" Validate that the input variables are correct. """
if len(v) != 1 or "function_name" not in v:
raise ValueError("function_name must be the only input_variable.")
return v
def format(self, **kwargs) -> str:
# Get the source code of the function
source_code = get_source_code(kwargs["function_name"])
# Generate the prompt to be sent to the language model
prompt = f"""
Given the function name and source code, generate an English language explanation of the function.
Function Name: {kwargs["function_name"].__name__}
Source Code:
{source_code}
Explanation:
"""
return prompt
def _prompt_type(self):
return "function-explainer"
Use the custom prompt template#
Now that we have created a custom prompt template, we can use it to generate prompts for our task.
fn_explainer = FunctionExplainerPromptTemplate(input_variables=["function_name"])
# Generate a prompt for the function "get_source_code"
prompt = fn_explainer.format(function_name=get_source_code)
print(prompt)
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/custom_prompt_template.html
1daa5b96d7ea-2
Given the function name and source code, generate an English language explanation of the function.
Function Name: get_source_code
Source Code:
def get_source_code(function_name):
# Get the source code of the function
return inspect.getsource(function_name)
Explanation:
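The custom template can then be dropped into a chain like any built-in template. A minimal sketch, assuming an OpenAI API key is configured (any LLM would work here):
from langchain.llms import OpenAI
from langchain.chains import LLMChain
chain = LLMChain(llm=OpenAI(temperature=0), prompt=fn_explainer)
# The chain formats the prompt with the function's source and sends it to the LLM
explanation = chain.run(function_name=get_source_code)
print(explanation)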
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/custom_prompt_template.html
b7e94a670164-0
Connecting to a Feature Store#
Feature stores are a concept from traditional machine learning that make sure data fed into models is up-to-date and relevant. For more on this, see here.
This concept is extremely relevant when considering putting LLM applications in production. In order to personalize LLM applications, you may want to combine LLMs with up-to-date information about particular users. Feature stores can be a great way to keep that data fresh, and LangChain provides an easy way to combine that data with LLMs.
In this notebook we will show how to connect prompt templates to feature stores. The basic idea is to call a feature store from inside a prompt template to retrieve values that are then formatted into the prompt.
Feast#
To start, we will use the popular open source feature store framework Feast.
This assumes you have already run the steps in the README around getting started. We will build off of that getting started example and create an LLMChain to write a note to a specific driver regarding their up-to-date statistics.
Load Feast Store#
Again, this should be set up according to the instructions in the Feast README
from feast import FeatureStore
# You may need to update the path depending on where you stored it
feast_repo_path = "../../../../../my_feature_repo/feature_repo/"
store = FeatureStore(repo_path=feast_repo_path)
Prompts#
Here we will set up a custom FeastPromptTemplate. This prompt template will take in a driver id, look up their stats, and format those stats into a prompt.
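Before wiring the store into a template, it can help to confirm that the connection returns what you expect. A quick sanity check (the feature names come from the Feast quickstart; adjust them to your repo):
# Fetch one feature for one driver and inspect the raw result
vector = store.get_online_features(
    features=["driver_hourly_stats:conv_rate"],
    entity_rows=[{"driver_id": 1001}],
).to_dict()
print(vector)  # expect a dict mapping feature names to lists of values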
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html
b7e94a670164-1
Note that the input to this prompt template is just driver_id, since that is the only user defined piece (all other variables are looked up inside the prompt template).
from langchain.prompts import PromptTemplate, StringPromptTemplate
template = """Given the driver's up to date stats, write them a note relaying those stats to them.
If they have a conversation rate above .5, give them a compliment. Otherwise, make a silly joke about chickens at the end to make them feel better.
Here are the driver's stats:
Conversation rate: {conv_rate}
Acceptance rate: {acc_rate}
Average Daily Trips: {avg_daily_trips}
Your response:"""
prompt = PromptTemplate.from_template(template)
class FeastPromptTemplate(StringPromptTemplate):
def format(self, **kwargs) -> str:
driver_id = kwargs.pop("driver_id")
feature_vector = store.get_online_features(
features=[
'driver_hourly_stats:conv_rate',
'driver_hourly_stats:acc_rate',
'driver_hourly_stats:avg_daily_trips'
],
entity_rows=[{"driver_id": driver_id}]
).to_dict()
kwargs["conv_rate"] = feature_vector["conv_rate"][0]
kwargs["acc_rate"] = feature_vector["acc_rate"][0]
kwargs["avg_daily_trips"] = feature_vector["avg_daily_trips"][0]
return prompt.format(**kwargs)
prompt_template = FeastPromptTemplate(input_variables=["driver_id"])
print(prompt_template.format(driver_id=1001))
Given the driver's up to date stats, write them a note relaying those stats to them.
If they have a conversation rate above .5, give them a compliment. Otherwise, make a silly joke about chickens at the end to make them feel better.
Here are the driver's stats:
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html
b7e94a670164-2
Conversation rate: 0.4745151400566101
Acceptance rate: 0.055561766028404236
Average Daily Trips: 936
Your response:
Use in a chain#
We can now use this in a chain, successfully creating a chain that achieves personalization backed by a feature store
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
chain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)
chain.run(1001)
"Hi there! I wanted to update you on your current stats. Your acceptance rate is 0.055561766028404236 and your average daily trips are 936. While your conversation rate is currently 0.4745151400566101, I have no doubt that with a little extra effort, you'll be able to exceed that .5 mark! Keep up the great work! And remember, even chickens can't always cross the road, but they still give it their best shot."
Tecton#
Above, we showed how you could use Feast, a popular open source and self-managed feature store, with LangChain. Our examples below will show a similar integration using Tecton. Tecton is a fully managed feature platform built to orchestrate the complete ML feature lifecycle, from transformation to online serving, with enterprise-grade SLAs.
Prerequisites#
Tecton Deployment (sign up at https://tecton.ai)
TECTON_API_KEY environment variable set to a valid Service Account key
Define and Load Features#
We will use the user_transaction_counts Feature View from the Tecton tutorial as part of a Feature Service. For simplicity, we are only using a single Feature View; however, more sophisticated applications may require more feature views to retrieve the features needed for its prompt.
user_transaction_metrics = FeatureService(
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html
b7e94a670164-3
name = "user_transaction_metrics",
features = [user_transaction_counts]
)
The above Feature Service is expected to be applied to a live workspace. For this example, we will be using the "prod" workspace.
import tecton
workspace = tecton.get_workspace("prod")
feature_service = workspace.get_feature_service("user_transaction_metrics")
Prompts#
Here we will set up a custom TectonPromptTemplate. This prompt template will take in a user_id, look up their stats, and format those stats into a prompt.
Note that the input to this prompt template is just user_id, since that is the only user defined piece (all other variables are looked up inside the prompt template).
from langchain.prompts import PromptTemplate, StringPromptTemplate
template = """Given the vendor's up to date transaction stats, write them a note based on the following rules:
1. If they had a transaction in the last day, write a short congratulations message on their recent sales
2. If no transaction in the last day, but they had a transaction in the last 30 days, playfully encourage them to sell more.
3. Always add a silly joke about chickens at the end
Here are the vendor's stats:
Number of Transactions Last Day: {transaction_count_1d}
Number of Transactions Last 30 Days: {transaction_count_30d}
Your response:"""
prompt = PromptTemplate.from_template(template)
class TectonPromptTemplate(StringPromptTemplate):
def format(self, **kwargs) -> str:
user_id = kwargs.pop("user_id")
feature_vector = feature_service.get_online_features(join_keys={"user_id": user_id}).to_dict()
kwargs["transaction_count_1d"] = feature_vector["user_transaction_counts.transaction_count_1d_1d"]
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html
b7e94a670164-4
kwargs["transaction_count_30d"] = feature_vector["user_transaction_counts.transaction_count_30d_1d"] return prompt.format(**kwargs) prompt_template = TectonPromptTemplate(input_variables=["user_id"]) print(prompt_template.format(user_id="user_469998441571")) Given the vendor's up to date transaction stats, write them a note based on the following rules: 1. If they had a transaction in the last day, write a short congratulations message on their recent sales 2. If no transaction in the last day, but they had a transaction in the last 30 days, playfully encourage them to sell more. 3. Always add a silly joke about chickens at the end Here are the vendor's stats: Number of Transactions Last Day: 657 Number of Transactions Last 30 Days: 20326 Your response: Use in a chain# We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Tecton Feature Platform from langchain.chat_models import ChatOpenAI from langchain.chains import LLMChain chain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template) chain.run("user_469998441571") 'Wow, congratulations on your recent sales! Your business is really soaring like a chicken on a hot air balloon! Keep up the great work!' Featureform# Finally, we will use Featureform an open-source and enterprise-grade feature store to run the same example. Featureform allows you to work with your infrastructure like Spark or locally to define your feature transformations. Initialize Featureform# You can follow in the instructions in the README to initialize your transformations and features in Featureform. import featureform as ff client = ff.Client(host="demo.featureform.com") Prompts#
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html
b7e94a670164-5
Here we will set up a custom FeatureformPromptTemplate. This prompt template will take in the average amount a user pays per transaction.
Note that the input to this prompt template is just user_id, since that is the only user defined piece (all other variables are looked up inside the prompt template).
from langchain.prompts import PromptTemplate, StringPromptTemplate
template = """Given the amount a user spends on average per transaction, let them know if they are a high roller. Otherwise, make a silly joke about chickens at the end to make them feel better
Here are the user's stats:
Average Amount per Transaction: ${avg_transaction}
Your response:"""
prompt = PromptTemplate.from_template(template)
class FeatureformPromptTemplate(StringPromptTemplate):
def format(self, **kwargs) -> str:
user_id = kwargs.pop("user_id")
fpf = client.features([("avg_transactions", "quickstart")], {"user": user_id})
# Use the fetched feature value (the first and only feature requested)
kwargs["avg_transaction"] = fpf[0]
return prompt.format(**kwargs)
prompt_template = FeatureformPromptTemplate(input_variables=["user_id"])
print(prompt_template.format(user_id="C1410926"))
Use in a chain#
We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Featureform Feature Platform
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
chain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)
chain.run("C1410926")
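All three integrations follow the same shape: pop the entity id, fetch a feature vector from the store, and merge it into the prompt variables. A generic sketch of that shared pattern (the class and method names here are hypothetical, not a LangChain API; prompt is an inner PromptTemplate as in the examples above):
from langchain.prompts import StringPromptTemplate
class FeatureStorePromptTemplate(StringPromptTemplate):
    def fetch_features(self, entity_id) -> dict:
        # Store-specific lookup (Feast, Tecton, Featureform, ...)
        raise NotImplementedError
    def format(self, **kwargs) -> str:
        entity_id = kwargs.pop("entity_id")
        # Merge the fetched feature values into the template variables
        kwargs.update(self.fetch_features(entity_id))
        return prompt.format(**kwargs)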
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html
1b7b947ce3ef-0
How to create a prompt template that uses few shot examples#
In this tutorial, we'll learn how to create a prompt template that uses few shot examples.
We'll use the FewShotPromptTemplate class to create a prompt template that uses few shot examples. This class either takes in a set of examples, or an ExampleSelector object. In this tutorial, we'll go over both options.
Use Case#
In this tutorial, we'll configure few shot examples for self-ask with search.
Using an example set#
Create the example set#
To get started, create a list of few shot examples. Each example should be a dictionary with the keys being the input variables and the values being the values for those input variables.
from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.prompt import PromptTemplate
examples = [
{
"question": "Who lived longer, Muhammad Ali or Alan Turing?",
"answer":
"""
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
"""
},
{
"question": "When was the founder of craigslist born?",
"answer": """ Are follow up questions needed here: Yes.
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
1b7b947ce3ef-1
"answer": """ Are follow up questions needed here: Yes. Follow up: Who was the founder of craigslist? Intermediate answer: Craigslist was founded by Craig Newmark. Follow up: When was Craig Newmark born? Intermediate answer: Craig Newmark was born on December 6, 1952. So the final answer is: December 6, 1952 """ }, { "question": "Who was the maternal grandfather of George Washington?", "answer": """ Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? Intermediate answer: The mother of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer is: Joseph Ball """ }, { "question": "Are both the directors of Jaws and Casino Royale from the same country?", "answer": """ Are follow up questions needed here: Yes. Follow up: Who is the director of Jaws? Intermediate Answer: The director of Jaws is Steven Spielberg. Follow up: Where is Steven Spielberg from? Intermediate Answer: The United States. Follow up: Who is the director of Casino Royale? Intermediate Answer: The director of Casino Royale is Martin Campbell. Follow up: Where is Martin Campbell from? Intermediate Answer: New Zealand. So the final answer is: No """ } ] Create a formatter for the few shot examples# Configure a formatter that will format the few shot examples into a string. This formatter should be a PromptTemplate object. example_prompt = PromptTemplate(input_variables=["question", "answer"], template="Question: {question}\n{answer}")
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
1b7b947ce3ef-2
print(example_prompt.format(**examples[0])) Question: Who lived longer, Muhammad Ali or Alan Turing? Are follow up questions needed here: Yes. Follow up: How old was Muhammad Ali when he died? Intermediate answer: Muhammad Ali was 74 years old when he died. Follow up: How old was Alan Turing when he died? Intermediate answer: Alan Turing was 41 years old when he died. So the final answer is: Muhammad Ali Feed examples and formatter to FewShotPromptTemplate# Finally, create a FewShotPromptTemplate object. This object takes in the few shot examples and the formatter for the few shot examples. prompt = FewShotPromptTemplate( examples=examples, example_prompt=example_prompt, suffix="Question: {input}", input_variables=["input"] ) print(prompt.format(input="Who was the father of Mary Ball Washington?")) Question: Who lived longer, Muhammad Ali or Alan Turing? Are follow up questions needed here: Yes. Follow up: How old was Muhammad Ali when he died? Intermediate answer: Muhammad Ali was 74 years old when he died. Follow up: How old was Alan Turing when he died? Intermediate answer: Alan Turing was 41 years old when he died. So the final answer is: Muhammad Ali Question: When was the founder of craigslist born? Are follow up questions needed here: Yes. Follow up: Who was the founder of craigslist? Intermediate answer: Craigslist was founded by Craig Newmark. Follow up: When was Craig Newmark born? Intermediate answer: Craig Newmark was born on December 6, 1952. So the final answer is: December 6, 1952 Question: Who was the maternal grandfather of George Washington? Are follow up questions needed here: Yes.
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
1b7b947ce3ef-3
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
Question: Are both the directors of Jaws and Casino Royale from the same country?
Are follow up questions needed here: Yes.
Follow up: Who is the director of Jaws?
Intermediate Answer: The director of Jaws is Steven Spielberg.
Follow up: Where is Steven Spielberg from?
Intermediate Answer: The United States.
Follow up: Who is the director of Casino Royale?
Intermediate Answer: The director of Casino Royale is Martin Campbell.
Follow up: Where is Martin Campbell from?
Intermediate Answer: New Zealand.
So the final answer is: No
Question: Who was the father of Mary Ball Washington?
Using an example selector#
Feed examples into ExampleSelector#
We will reuse the example set and the formatter from the previous section. However, instead of feeding the examples directly into the FewShotPromptTemplate object, we will feed them into an ExampleSelector object.
In this tutorial, we will use the SemanticSimilarityExampleSelector class. This class selects few shot examples based on their similarity to the input. It uses an embedding model to compute the similarity between the input and the few shot examples, as well as a vector store to perform the nearest neighbor search.
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
example_selector = SemanticSimilarityExampleSelector.from_examples(
# This is the list of examples available to select from.
examples,
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
1b7b947ce3ef-4
# This is the embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(),
# This is the VectorStore class that is used to store the embeddings and do a similarity search over.
Chroma,
# This is the number of examples to produce.
k=1
)
# Select the most similar example to the input.
question = "Who was the father of Mary Ball Washington?"
selected_examples = example_selector.select_examples({"question": question})
print(f"Examples most similar to the input: {question}")
for example in selected_examples:
print("\n")
for k, v in example.items():
print(f"{k}: {v}")
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
Examples most similar to the input: Who was the father of Mary Ball Washington?
question: Who was the maternal grandfather of George Washington?
answer:
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
Feed example selector into FewShotPromptTemplate#
Finally, create a FewShotPromptTemplate object. This object takes in the example selector and the formatter for the few shot examples.
prompt = FewShotPromptTemplate(
example_selector=example_selector,
example_prompt=example_prompt,
suffix="Question: {input}",
input_variables=["input"]
)
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
1b7b947ce3ef-5
suffix="Question: {input}", input_variables=["input"] ) print(prompt.format(input="Who was the father of Mary Ball Washington?")) Question: Who was the maternal grandfather of George Washington? Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? Intermediate answer: The mother of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer is: Joseph Ball Question: Who was the father of Mary Ball Washington? previous How to create a custom prompt template next How to work with partial Prompt Templates Contents Use Case Using an example set Create the example set Create a formatter for the few shot examples Feed examples and formatter to FewShotPromptTemplate Using an example selector Feed examples into ExampleSelector Feed example selector into FewShotPromptTemplate By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
6f36e18c6ec6-0
Prompt Composition#
This notebook goes over how to compose multiple prompts together. This can be useful when you want to reuse parts of prompts. This can be done with a PipelinePrompt. A PipelinePrompt consists of two main parts:
final_prompt: This is the final prompt that is returned
pipeline_prompts: This is a list of tuples, consisting of a string (name) and a Prompt Template. Each PromptTemplate will be formatted and then passed to future prompt templates as a variable with the same name as name.
from langchain.prompts.pipeline import PipelinePromptTemplate
from langchain.prompts.prompt import PromptTemplate
full_template = """{introduction}
{example}
{start}"""
full_prompt = PromptTemplate.from_template(full_template)
introduction_template = """You are impersonating {person}."""
introduction_prompt = PromptTemplate.from_template(introduction_template)
example_template = """Here's an example of an interaction:
Q: {example_q}
A: {example_a}"""
example_prompt = PromptTemplate.from_template(example_template)
start_template = """Now, do this for real!
Q: {input}
A:"""
start_prompt = PromptTemplate.from_template(start_template)
input_prompts = [
("introduction", introduction_prompt),
("example", example_prompt),
("start", start_prompt)
]
pipeline_prompt = PipelinePromptTemplate(final_prompt=full_prompt, pipeline_prompts=input_prompts)
pipeline_prompt.input_variables
['example_a', 'person', 'example_q', 'input']
print(pipeline_prompt.format(
person="Elon Musk",
example_q="What's your favorite car?",
example_a="Tesla",
input="What's your favorite social media site?"
))
You are impersonating Elon Musk.
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_composition.html
6f36e18c6ec6-1
Here's an example of an interaction:
Q: What's your favorite car?
A: Tesla
Now, do this for real!
Q: What's your favorite social media site?
A:
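Because the composition is resolved at format time, the same pipeline can be reused for any persona without rebuilding it. For example (the values here are purely illustrative):
print(pipeline_prompt.format(
    person="Ada Lovelace",
    example_q="What's your favorite machine?",
    example_a="The Analytical Engine",
    input="What's your favorite number?"
))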
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_composition.html
dd86c5c6d2c1-0
Plan and Execute#
Plan and execute agents accomplish an objective by first planning what to do, then executing the subtasks. This idea is largely inspired by BabyAGI and the "Plan-and-Solve" paper.
The planning is almost always done by an LLM.
The execution is usually done by a separate agent (equipped with tools).
Imports#
from langchain.chat_models import ChatOpenAI
from langchain.experimental.plan_and_execute import PlanAndExecute, load_agent_executor, load_chat_planner
from langchain.llms import OpenAI
from langchain import SerpAPIWrapper
from langchain.agents.tools import Tool
from langchain import LLMMathChain
Tools#
search = SerpAPIWrapper()
llm = OpenAI(temperature=0)
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)
tools = [
Tool(
name = "Search",
func=search.run,
description="useful for when you need to answer questions about current events"
),
Tool(
name="Calculator",
func=llm_math_chain.run,
description="useful for when you need to answer questions about math"
),
]
Planner, Executor, and Agent#
model = ChatOpenAI(temperature=0)
planner = load_chat_planner(model)
executor = load_agent_executor(model, tools, verbose=True)
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
Run Example#
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
https://python.langchain.com/en/latest/modules/agents/plan_and_execute.html
dd86c5c6d2c1-1
> Entering new PlanAndExecute chain... steps=[Step(value="Search for Leo DiCaprio's girlfriend on the internet."), Step(value='Find her current age.'), Step(value='Raise her current age to the 0.43 power using a calculator or programming language.'), Step(value='Output the result.'), Step(value="Given the above steps taken, respond to the user's original question.\n\n")] > Entering new AgentExecutor chain... Action: ``` { "action": "Search", "action_input": "Who is Leo DiCaprio's girlfriend?" } ``` Observation: DiCaprio broke up with girlfriend Camila Morrone, 25, in the summer of 2022, after dating for four years. He's since been linked to another famous supermodel – Gigi Hadid. The power couple were first supposedly an item in September after being spotted getting cozy during a party at New York Fashion Week. Thought:Based on the previous observation, I can provide the answer to the current objective. Action: ``` { "action": "Final Answer", "action_input": "Leo DiCaprio is currently linked to Gigi Hadid." } ``` > Finished chain. ***** Step: Search for Leo DiCaprio's girlfriend on the internet. Response: Leo DiCaprio is currently linked to Gigi Hadid. > Entering new AgentExecutor chain... Action: ``` { "action": "Search", "action_input": "What is Gigi Hadid's current age?" } ``` Observation: 28 years Thought:Previous steps: steps=[(Step(value="Search for Leo DiCaprio's girlfriend on the internet."), StepResponse(response='Leo DiCaprio is currently linked to Gigi Hadid.'))]
https://python.langchain.com/en/latest/modules/agents/plan_and_execute.html
dd86c5c6d2c1-2
Current objective: value='Find her current age.' Action: ``` { "action": "Search", "action_input": "What is Gigi Hadid's current age?" } ``` Observation: 28 years Thought:Previous steps: steps=[(Step(value="Search for Leo DiCaprio's girlfriend on the internet."), StepResponse(response='Leo DiCaprio is currently linked to Gigi Hadid.')), (Step(value='Find her current age.'), StepResponse(response='28 years'))] Current objective: None Action: ``` { "action": "Final Answer", "action_input": "Gigi Hadid's current age is 28 years." } ``` > Finished chain. ***** Step: Find her current age. Response: Gigi Hadid's current age is 28 years. > Entering new AgentExecutor chain... Action: ``` { "action": "Calculator", "action_input": "28 ** 0.43" } ``` > Entering new LLMMathChain chain... 28 ** 0.43 ```text 28 ** 0.43 ``` ...numexpr.evaluate("28 ** 0.43")... Answer: 4.1906168361987195 > Finished chain. Observation: Answer: 4.1906168361987195 Thought:The next step is to provide the answer to the user's question. Action: ``` { "action": "Final Answer", "action_input": "Gigi Hadid's current age raised to the 0.43 power is approximately 4.19." } ``` > Finished chain. ***** Step: Raise her current age to the 0.43 power using a calculator or programming language.
https://python.langchain.com/en/latest/modules/agents/plan_and_execute.html
dd86c5c6d2c1-3
Response: Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.
> Entering new AgentExecutor chain...
Action:
```
{
"action": "Final Answer",
"action_input": "The result is approximately 4.19."
}
```
> Finished chain.
*****
Step: Output the result.
Response: The result is approximately 4.19.
> Entering new AgentExecutor chain...
Action:
```
{
"action": "Final Answer",
"action_input": "Gigi Hadid's current age raised to the 0.43 power is approximately 4.19."
}
```
> Finished chain.
*****
Step: Given the above steps taken, respond to the user's original question.
Response: Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.
> Finished chain.
"Gigi Hadid's current age raised to the 0.43 power is approximately 4.19."
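The planner's behavior can also be steered: load_chat_planner accepts a system_prompt argument that replaces the default planning instructions (this signature is from langchain.experimental at the time of writing; verify it against your installed version):
planner = load_chat_planner(
    model,
    system_prompt=(
        "Let's first understand the problem and devise a plan to solve it. "
        "Output the plan as a numbered list of steps, keeping it to at most three steps."
    ),
)
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)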
https://python.langchain.com/en/latest/modules/agents/plan_and_execute.html
b53d0428648f-0
Agent Executors#
Agent executors take an agent and tools and use the agent to decide which tools to call and in what order.
In this part of the documentation we cover other related functionality to agent executors
How to combine agents and vectorstores
How to use the async API for Agents
How to create ChatGPT Clone
Handle Parsing Errors
How to access intermediate steps
How to cap the max number of iterations
How to use a timeout for the agent
How to add SharedMemory to an Agent and its Tools
https://python.langchain.com/en/latest/modules/agents/agent_executors.html
607a616893ef-0
Tools#
Tools are ways that an agent can use to interact with the outside world.
For an overview of what a tool is, how to use them, and a full list of examples, please see the getting started documentation
Getting Started
Next, we have some examples of customizing and generically working with tools
Defining Custom Tools
Multi-Input Tools
Tool Input Schema
In this documentation we cover generic tooling functionality (eg how to create your own) as well as examples of tools and how to use them.
Apify
ArXiv API Tool
AWS Lambda API
Shell Tool
Bing Search
Brave Search
ChatGPT Plugins
DuckDuckGo Search
File System Tools
Google Places
Google Search
Google Serper API
Gradio Tools
GraphQL tool
HuggingFace Tools
Human as a tool
IFTTT WebHooks
Metaphor Search
Call the API
Use Metaphor as a tool
OpenWeatherMap API
PubMed Tool
Python REPL
Requests
SceneXplain
Search Tools
SearxNG Search API
SerpAPI
Twilio
Wikipedia
Wolfram Alpha
YouTubeSearchTool
Zapier Natural Language Actions API
Example with SimpleSequentialChain
https://python.langchain.com/en/latest/modules/agents/tools.html
29aaf7f21702-0
Getting Started#
Agents use an LLM to determine which actions to take and in what order. An action can either be using a tool and observing its output, or returning to the user.
When used correctly agents can be extremely powerful. The purpose of this notebook is to show you how to easily use agents through the simplest, highest level API.
In order to load agents, you should understand the following concepts:
Tool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. The interface for a tool is currently a function that is expected to have a string as an input, with a string as an output.
LLM: The language model powering the agent.
Agent: The agent to use. This should be a string that references a supported agent class. Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see the documentation for custom agents.
Agents: For a list of supported agents and their specifications, see here.
Tools: For a list of predefined tools and their specifications, see here.
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
First, let's load the language model we're going to use to control the agent.
llm = OpenAI(temperature=0)
Next, let's load some tools to use. Note that the llm-math tool uses an LLM, so we need to pass that in.
tools = load_tools(["serpapi", "llm-math"], llm=llm)
Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
https://python.langchain.com/en/latest/modules/agents/getting_started.html
29aaf7f21702-1
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
Now let's test it out!
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
> Entering new AgentExecutor chain...
I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: Camila Morrone
Thought: I need to find out Camila Morrone's age
Action: Search
Action Input: "Camila Morrone age"
Observation: 25 years
Thought: I need to calculate 25 raised to the 0.43 power
Action: Calculator
Action Input: 25^0.43
Observation: Answer: 3.991298452658078
Thought: I now know the final answer
Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.
> Finished chain.
"Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078."
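Note that this example assumes two API keys are available in the environment: one for OpenAI (the LLM) and one for SerpAPI (the search tool). A minimal setup sketch (the key values are placeholders):
import os
os.environ["OPENAI_API_KEY"] = "sk-..."            # used by OpenAI(...)
os.environ["SERPAPI_API_KEY"] = "your-serpapi-key"  # used by the serpapi tool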
https://python.langchain.com/en/latest/modules/agents/getting_started.html
675a03de8342-0
Toolkits#
This section of documentation covers agents with toolkits - eg an agent applied to a particular use case.
See below for a full list of agent toolkits
Azure Cognitive Services Toolkit
CSV Agent
Gmail Toolkit
Jira
JSON Agent
OpenAPI agents
Natural Language APIs
Pandas Dataframe Agent
PlayWright Browser Toolkit
PowerBI Dataset Agent
Python Agent
Spark Dataframe Agent
Spark SQL Agent
SQL Database Agent
Vectorstore Agent
https://python.langchain.com/en/latest/modules/agents/toolkits.html
c7a45baf1149-0
Agents#
In this part of the documentation we cover the different types of agents, disregarding which specific tools they are used with.
For a high level overview of the different types of agents, see the below documentation.
Agent Types
For documentation on how to create a custom agent, see the below.
Custom Agent
Custom LLM Agent
Custom LLM Agent (with a ChatModel)
Custom MRKL Agent
Custom MultiAction Agent
Custom Agent with Tool Retrieval
We also have documentation for an in-depth dive into each agent type.
Conversation Agent (for Chat Models)
Conversation Agent
MRKL
MRKL Chat
ReAct
Self Ask With Search
Structured Tool Chat Agent
https://python.langchain.com/en/latest/modules/agents/agents.html
62c7ae8034a4-0
Defining Custom Tools#
When constructing your own agent, you will need to provide it with a list of Tools that it can use. Besides the actual function that is called, the Tool consists of several components:
name (str) is required and must be unique within a set of tools provided to an agent
description (str) is optional but recommended, as it is used by an agent to determine tool use
return_direct (bool) defaults to False
args_schema (Pydantic BaseModel) is optional but recommended, and can be used to provide more information (e.g., few-shot examples) or validation for expected parameters.
There are two main ways to define a tool; we will cover both in the example below.
# Import things that are needed generically
from langchain import LLMMathChain, SerpAPIWrapper
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import BaseTool, StructuredTool, Tool, tool
Initialize the LLM to use for the agent.
llm = ChatOpenAI(temperature=0)
Completely New Tools - String Input and Output#
The simplest tools accept a single query string and return a string output. If your tool function requires multiple arguments, you might want to skip down to the StructuredTool section below.
There are two ways to do this: either by using the Tool dataclass, or by subclassing the BaseTool class.
Tool dataclass#
https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html
62c7ae8034a4-1
The 'Tool' dataclass wraps functions that accept a single string input and return a string output.
# Load the tool configs that are needed.
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm, verbose=True)
tools = [
Tool.from_function(
func=search.run,
name = "Search",
description="useful for when you need to answer questions about current events"
# coroutine= ... <- you can specify an async method if desired as well
),
]
/Users/wfh/code/lc/lckg/langchain/chains/llm_math/base.py:50: UserWarning: Directly instantiating an LLMMathChain with an llm is deprecated. Please instantiate with llm_chain argument or using the from_llm class method.
warnings.warn(
You can also define a custom args_schema to provide more information about inputs.
from pydantic import BaseModel, Field
class CalculatorInput(BaseModel):
question: str = Field()
tools.append(
Tool.from_function(
func=llm_math_chain.run,
name="Calculator",
description="useful for when you need to answer questions about math",
args_schema=CalculatorInput
# coroutine= ... <- you can specify an async method if desired as well
)
)
# Construct the agent. We will use the default agent type here.
# See documentation for a full list of options.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
> Entering new AgentExecutor chain...
https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html
62c7ae8034a4-2
I need to find out Leo DiCaprio's girlfriend's name and her age
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: After rumours of a romance with Gigi Hadid, the Oscar winner has seemingly moved on. First being linked to the television personality in September 2022, it appears as if his "age bracket" has moved up. This follows his rumoured relationship with mere 19-year-old Eden Polani.
Thought:I still need to find out his current girlfriend's name and age
Action: Search
Action Input: "Leo DiCaprio current girlfriend"
Observation: Just Jared on Instagram: "Leonardo DiCaprio & girlfriend Camila Morrone couple up for a lunch date!
Thought:Now that I know his girlfriend's name is Camila Morrone, I need to find her current age
Action: Search
Action Input: "Camila Morrone age"
Observation: 25 years
Thought:Now that I have her age, I need to calculate her age raised to the 0.43 power
Action: Calculator
Action Input: 25^(0.43)
> Entering new LLMMathChain chain...
25^(0.43)```text
25**(0.43)
```
...numexpr.evaluate("25**(0.43)")...
Answer: 3.991298452658078
> Finished chain.
Observation: Answer: 3.991298452658078
Thought:I now know the final answer
Final Answer: Camila Morrone's current age raised to the 0.43 power is approximately 3.99.
> Finished chain.
"Camila Morrone's current age raised to the 0.43 power is approximately 3.99."
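The commented-out coroutine hook in the Tool above can be filled in to support async execution. A minimal sketch, assuming the async arun method on SerpAPIWrapper:
search = SerpAPIWrapper()
async_search_tool = Tool.from_function(
    func=search.run,
    coroutine=search.arun,  # assumption: SerpAPIWrapper exposes an async arun
    name="Search",
    description="useful for when you need to answer questions about current events",
)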
https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html
62c7ae8034a4-3
Subclassing the BaseTool class# You can also directly subclass BaseTool. This is useful if you want more control over the instance variables or if you want to propagate callbacks to nested chains or other tools. from typing import Optional, Type from langchain.callbacks.manager import AsyncCallbackManagerForToolRun, CallbackManagerForToolRun class CustomSearchTool(BaseTool): name = "custom_search" description = "useful for when you need to answer questions about current events" def _run(self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None) -> str: """Use the tool.""" return search.run(query) async def _arun(self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str: """Use the tool asynchronously.""" raise NotImplementedError("custom_search does not support async") class CustomCalculatorTool(BaseTool): name = "Calculator" description = "useful for when you need to answer questions about math" args_schema: Type[BaseModel] = CalculatorInput def _run(self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None) -> str: """Use the tool.""" return llm_math_chain.run(query) async def _arun(self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str: """Use the tool asynchronously.""" raise NotImplementedError("Calculator does not support async") tools = [CustomSearchTool(), CustomCalculatorTool()] agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html
62c7ae8034a4-4
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?") > Entering new AgentExecutor chain... I need to use custom_search to find out who Leo DiCaprio's girlfriend is, and then use the Calculator to raise her age to the 0.43 power. Action: custom_search Action Input: "Leo DiCaprio girlfriend" Observation: After rumours of a romance with Gigi Hadid, the Oscar winner has seemingly moved on. First being linked to the television personality in September 2022, it appears as if his "age bracket" has moved up. This follows his rumoured relationship with mere 19-year-old Eden Polani. Thought:I need to find out the current age of Eden Polani. Action: custom_search Action Input: "Eden Polani age" Observation: 19 years old Thought:Now I can use the Calculator to raise her age to the 0.43 power. Action: Calculator Action Input: 19 ^ 0.43 > Entering new LLMMathChain chain... 19 ^ 0.43```text 19 ** 0.43 ``` ...numexpr.evaluate("19 ** 0.43")... Answer: 3.547023357958959 > Finished chain. Observation: Answer: 3.547023357958959 Thought:I now know the final answer. Final Answer: 3.547023357958959 > Finished chain. '3.547023357958959' Using the tool decorator#
https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html
62c7ae8034a4-5
To make it easier to define custom tools, a @tool decorator is provided. This decorator can be used to quickly create a Tool from a simple function. The decorator uses the function name as the tool name by default, but this can be overridden by passing a string as the first argument. Additionally, the decorator will use the function's docstring as the tool's description.
from langchain.tools import tool
@tool
def search_api(query: str) -> str:
"""Searches the API for the query."""
return f"Results for query {query}"
search_api
You can also provide arguments like the tool name and whether to return directly.
@tool("search", return_direct=True)
def search_api(query: str) -> str:
"""Searches the API for the query."""
return "Results"
search_api
Tool(name='search', description='search(query: str) -> str - Searches the API for the query.', args_schema=<class 'pydantic.main.SearchApi'>, return_direct=True, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x12748c4c0>, func=<function search_api at 0x16bd66310>, coroutine=None)
You can also provide args_schema to provide more information about the argument
class SearchInput(BaseModel):
query: str = Field(description="should be a search query")
@tool("search", return_direct=True, args_schema=SearchInput)
def search_api(query: str) -> str:
"""Searches the API for the query."""
return "Results"
search_api
https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html
62c7ae8034a4-6
"""Searches the API for the query.""" return "Results" search_api Tool(name='search', description='search(query: str) -> str - Searches the API for the query.', args_schema=<class '__main__.SearchInput'>, return_direct=True, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x12748c4c0>, func=<function search_api at 0x16bcf0ee0>, coroutine=None) Custom Structured Tools# If your functions require more structured arguments, you can use the StructuredTool class directly, or still subclass the BaseTool class. StructuredTool dataclass# To dynamically generate a structured tool from a given function, the fastest way to get started is with StructuredTool.from_function(). import requests from langchain.tools import StructuredTool def post_message(url: str, body: dict, parameters: Optional[dict] = None) -> str: """Sends a POST request to the given url with the given body and parameters.""" result = requests.post(url, json=body, params=parameters) return f"Status: {result.status_code} - {result.text}" tool = StructuredTool.from_function(post_message) Subclassing the BaseTool# The BaseTool automatically infers the schema from the _run method’s signature. from typing import Optional, Type from langchain.callbacks.manager import AsyncCallbackManagerForToolRun, CallbackManagerForToolRun class CustomSearchTool(BaseTool): name = "custom_search" description = "useful for when you need to answer questions about current events"
https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html
62c7ae8034a4-7
description = "useful for when you need to answer questions about current events" def _run(self, query: str, engine: str = "google", gl: str = "us", hl: str = "en", run_manager: Optional[CallbackManagerForToolRun] = None) -> str: """Use the tool.""" search_wrapper = SerpAPIWrapper(params={"engine": engine, "gl": gl, "hl": hl}) return search_wrapper.run(query) async def _arun(self, query: str, engine: str = "google", gl: str = "us", hl: str = "en", run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str: """Use the tool asynchronously.""" raise NotImplementedError("custom_search does not support async") # You can provide a custom args schema to add descriptions or custom validation class SearchSchema(BaseModel): query: str = Field(description="should be a search query") engine: str = Field(description="should be a search engine") gl: str = Field(description="should be a country code") hl: str = Field(description="should be a language code") class CustomSearchTool(BaseTool): name = "custom_search" description = "useful for when you need to answer questions about current events" args_schema: Type[SearchSchema] = SearchSchema def _run(self, query: str, engine: str = "google", gl: str = "us", hl: str = "en", run_manager: Optional[CallbackManagerForToolRun] = None) -> str: """Use the tool.""" search_wrapper = SerpAPIWrapper(params={"engine": engine, "gl": gl, "hl": hl}) return search_wrapper.run(query)
https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html
62c7ae8034a4-8
async def _arun(self, query: str, engine: str = "google", gl: str = "us", hl: str = "en", run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:
"""Use the tool asynchronously."""
raise NotImplementedError("custom_search does not support async")
Using the decorator#
The tool decorator creates a structured tool automatically if the signature has multiple arguments.
import requests
from langchain.tools import tool
@tool
def post_message(url: str, body: dict, parameters: Optional[dict] = None) -> str:
"""Sends a POST request to the given url with the given body and parameters."""
result = requests.post(url, json=body, params=parameters)
return f"Status: {result.status_code} - {result.text}"
Modify existing tools#
Now, we show how to load existing tools and modify them directly. In the example below, we do something really simple and change the Search tool to have the name Google Search.
from langchain.agents import load_tools
tools = load_tools(["serpapi", "llm-math"], llm=llm)
tools[0].name = "Google Search"
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
> Entering new AgentExecutor chain...
I need to find out Leo DiCaprio's girlfriend's name and her age.
Action: Google Search
Action Input: "Leo DiCaprio girlfriend"
https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html
62c7ae8034a4-9
Observation: After rumours of a romance with Gigi Hadid, the Oscar winner has seemingly moved on. First being linked to the television personality in September 2022, it appears as if his "age bracket" has moved up. This follows his rumoured relationship with mere 19-year-old Eden Polani.
Thought:I still need to find out his current girlfriend's name and her age.
Action: Google Search
Action Input: "Leo DiCaprio current girlfriend age"
Observation: Leonardo DiCaprio has been linked with 19-year-old model Eden Polani, continuing the rumour that he doesn't date any women over the age of ...
Thought:I need to find out the age of Eden Polani.
Action: Calculator
Action Input: 19^(0.43)
Observation: Answer: 3.547023357958959
Thought:I now know the final answer.
Final Answer: The age of Leo DiCaprio's girlfriend raised to the 0.43 power is approximately 3.55.
> Finished chain.
"The age of Leo DiCaprio's girlfriend raised to the 0.43 power is approximately 3.55."
Defining the priorities among Tools#
When you make a custom tool, you may want the Agent to use it more than normal tools. For example, suppose you made a custom tool that gets information on music from your database. When a user wants information on songs, you want the Agent to use the custom tool rather than the normal Search tool. But the Agent might prioritize the normal Search tool anyway.
This can be accomplished by adding a statement such as Use this more than the normal search if the question is about Music, like 'who is the singer of yesterday?' or 'what is the most popular song in 2022?' to the description.
https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html
62c7ae8034a4-10
An example is below.
# Import things that are needed generically
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.llms import OpenAI
from langchain import LLMMathChain, SerpAPIWrapper

search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events"
    ),
    Tool(
        name="Music Search",
        func=lambda x: "'All I Want For Christmas Is You' by Mariah Carey.",  # Mock function
        description="A Music search engine. Use this more than the normal search if the question is about Music, like 'who is the singer of yesterday?' or 'what is the most popular song in 2022?'",
    )
]

agent = initialize_agent(tools, OpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("what is the most famous song of christmas")
> Entering new AgentExecutor chain...
 I should use a music search engine to find the answer
Action: Music Search
Action Input: most famous song of christmas'All I Want For Christmas Is You' by Mariah Carey.
 I now know the final answer
Final Answer: 'All I Want For Christmas Is You' by Mariah Carey.
> Finished chain.
"'All I Want For Christmas Is You' by Mariah Carey."
Using tools to return directly#
Often, it can be desirable to have a tool's output returned directly to the user when it is called. You can do this easily with LangChain by setting the return_direct flag for a tool to True.
llm_math_chain = LLMMathChain(llm=llm)
tools = [
https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html
62c7ae8034a4-11
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math",
        return_direct=True
    )
]

llm = OpenAI(temperature=0)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("whats 2**.12")
> Entering new AgentExecutor chain...
 I need to calculate this
Action: Calculator
Action Input: 2**.12Answer: 1.086734862526058
> Finished chain.
'Answer: 1.086734862526058'
Handling Tool Errors#
When a tool encounters an error and the exception is not caught, the agent will stop executing. If you want the agent to continue execution, you can raise a ToolException and set handle_tool_error accordingly.
When a ToolException is thrown, the agent does not stop working; instead it handles the exception according to the tool's handle_tool_error variable, and the processing result is returned to the agent as an observation and printed in red.
You can set handle_tool_error to True, to a unified string value, or to a function. If it's set to a function, the function should take a ToolException as a parameter and return a str value.
Note that raising a ToolException alone is not effective: you also need to set the tool's handle_tool_error, because its default value is False.
from langchain.schema import ToolException
from langchain import SerpAPIWrapper
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import Tool
https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html
62c7ae8034a4-12
def _handle_error(error: ToolException) -> str:
    return "The following errors occurred during tool execution:" + error.args[0] + "Please try another tool."

def search_tool1(s: str):
    raise ToolException("The search tool1 is not available.")

def search_tool2(s: str):
    raise ToolException("The search tool2 is not available.")

search_tool3 = SerpAPIWrapper()
description = "useful for when you need to answer questions about current events. You should give priority to using it."
tools = [
    Tool.from_function(
        func=search_tool1,
        name="Search_tool1",
        description=description,
        handle_tool_error=True,
    ),
    Tool.from_function(
        func=search_tool2,
        name="Search_tool2",
        description=description,
        handle_tool_error=_handle_error,
    ),
    Tool.from_function(
        func=search_tool3.run,
        name="Search_tool3",
        description="useful for when you need to answer questions about current events",
    ),
]

agent = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent.run("Who is Leo DiCaprio's girlfriend?")
> Entering new AgentExecutor chain...
I should use Search_tool1 to find recent news articles about Leo DiCaprio's personal life.
Action: Search_tool1
Action Input: "Leo DiCaprio girlfriend"
Observation: The search tool1 is not available.
Thought:I should try using Search_tool2 instead.
Action: Search_tool2
https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html
62c7ae8034a4-13
Action Input: "Leo DiCaprio girlfriend"
Observation: The following errors occurred during tool execution:The search tool2 is not available.Please try another tool.
Thought:I should try using Search_tool3 as a last resort.
Action: Search_tool3
Action Input: "Leo DiCaprio girlfriend"
Observation: Leonardo DiCaprio and Gigi Hadid were recently spotted at a pre-Oscars party, sparking interest once again in their rumored romance. The Revenant actor and the model first made headlines when they were spotted together at a New York Fashion Week afterparty in September 2022.
Thought:Based on the information from Search_tool3, it seems that Gigi Hadid is currently rumored to be Leo DiCaprio's girlfriend.
Final Answer: Gigi Hadid is currently rumored to be Leo DiCaprio's girlfriend.
> Finished chain.
"Gigi Hadid is currently rumored to be Leo DiCaprio's girlfriend."
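The example above demonstrates handle_tool_error=True and a custom handler function. For the third option mentioned earlier, setting handle_tool_error to a unified string, a minimal sketch is below; the message text is illustrative and not from the original notebook.
Tool.from_function(
    func=search_tool1,
    name="Search_tool1",
    description=description,
    # Any ToolException raised by the function is replaced by this fixed string,
    # which is returned to the agent as the observation.
    handle_tool_error="This tool is currently unavailable. Please try another tool.",
)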
https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html
b773db4d7129-0
Multi-Input Tools#
This notebook shows how to use a tool that requires multiple inputs with an agent. The recommended way to do so is with the StructuredTool class.
import os
os.environ["LANGCHAIN_TRACING"] = "true"
from langchain import OpenAI
from langchain.agents import initialize_agent, AgentType

llm = OpenAI(temperature=0)
from langchain.tools import StructuredTool

def multiplier(a: float, b: float) -> float:
    """Multiply the provided floats."""
    return a * b

tool = StructuredTool.from_function(multiplier)
# Structured tools are compatible with the STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION agent type.
agent_executor = initialize_agent([tool], llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent_executor.run("What is 3 times 4")
> Entering new AgentExecutor chain...
Thought: I need to multiply 3 and 4
Action:
```
{
  "action": "multiplier",
  "action_input": {"a": 3, "b": 4}
}
```
Observation: 12
Thought: I know what to respond
Action:
```
{
  "action": "Final Answer",
  "action_input": "3 times 4 is 12"
}
```
> Finished chain.
'3 times 4 is 12'
Multi-Input Tools with a string format#
https://python.langchain.com/en/latest/modules/agents/tools/multi_input_tool.html
b773db4d7129-1
An alternative to the structured tool is to use the regular Tool class and accept a single string. The tool then has to handle the parsing logic to extract the relevant values from the text, which tightly couples the tool representation to the agent prompt. This is still useful if the underlying language model can't reliably generate a structured schema.
Let's take the multiplication function as an example. In order to use it, we will tell the agent to generate the "Action Input" as a comma-separated list of length two. We will then write a thin wrapper that takes a string, splits it into two around a comma, and passes both parsed sides as integers to the multiplication function.
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
Here is the multiplication function, as well as a wrapper to parse a string as input.
def multiplier(a, b):
    return a * b

def parsing_multiplier(string):
    a, b = string.split(",")
    return multiplier(int(a), int(b))

llm = OpenAI(temperature=0)
tools = [
    Tool(
        name="Multiplier",
        func=parsing_multiplier,
        description="useful for when you need to multiply two numbers together. The input to this tool should be a comma separated list of numbers of length two, representing the two numbers you want to multiply together. For example, `1,2` would be the input if you wanted to multiply 1 by 2."
    )
]
mrkl = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
mrkl.run("What is 3 times 4")
> Entering new AgentExecutor chain...
https://python.langchain.com/en/latest/modules/agents/tools/multi_input_tool.html
b773db4d7129-2
 I need to multiply two numbers
Action: Multiplier
Action Input: 3,4
Observation: 12
Thought: I now know the final answer
Final Answer: 3 times 4 is 12
> Finished chain.
'3 times 4 is 12'
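As a side note, parsing_multiplier above raises an unhandled ValueError if the model emits anything other than two comma-separated integers. A slightly more defensive sketch is below; the extra checks and error message are our own additions, not part of the original notebook.
def parsing_multiplier(string: str) -> int:
    # Tolerate surrounding whitespace, e.g. "3, 4".
    parts = [p.strip() for p in string.split(",")]
    if len(parts) != 2:
        raise ValueError("Expected exactly two comma-separated numbers, e.g. `3,4`")
    a, b = parts
    return multiplier(int(a), int(b))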
https://python.langchain.com/en/latest/modules/agents/tools/multi_input_tool.html
e38ef613c09c-0
Getting Started#
Tools are functions that agents can use to interact with the world. These tools can be generic utilities (e.g. search), other chains, or even other agents.
Currently, tools can be loaded with the following snippet:
from langchain.agents import load_tools
tool_names = [...]
tools = load_tools(tool_names)
Some tools (e.g. chains, agents) may require a base LLM to initialize them. In that case, you can pass in an LLM as well:
from langchain.agents import load_tools
tool_names = [...]
llm = ...
tools = load_tools(tool_names, llm=llm)
Below is a list of all supported tools and relevant information:
Tool Name: The name the LLM refers to the tool by.
Tool Description: The description of the tool that is passed to the LLM.
Notes: Notes about the tool that are NOT passed to the LLM.
Requires LLM: Whether this tool requires an LLM to be initialized.
(Optional) Extra Parameters: What extra parameters are required to initialize this tool.
List of Tools#
python_repl
Tool Name: Python REPL
Tool Description: A Python shell. Use this to execute python commands. Input should be a valid python command. If you expect output it should be printed out.
Notes: Maintains state.
Requires LLM: No
serpapi
Tool Name: Search
Tool Description: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.
Notes: Calls the Serp API and then parses results.
Requires LLM: No
wolfram-alpha
Tool Name: Wolfram Alpha
https://python.langchain.com/en/latest/modules/agents/tools/getting_started.html
e38ef613c09c-1
Tool Description: A wolfram alpha search engine. Useful for when you need to answer questions about Math, Science, Technology, Culture, Society and Everyday Life. Input should be a search query.
Notes: Calls the Wolfram Alpha API and then parses results.
Requires LLM: No
Extra Parameters: wolfram_alpha_appid: The Wolfram Alpha app id.
requests
Tool Name: Requests
Tool Description: A portal to the internet. Use this when you need to get specific content from a site. Input should be a specific url, and the output will be all the text on that page.
Notes: Uses the Python requests module.
Requires LLM: No
terminal
Tool Name: Terminal
Tool Description: Executes commands in a terminal. Input should be valid commands, and the output will be any output from running that command.
Notes: Executes commands with subprocess.
Requires LLM: No
pal-math
Tool Name: PAL-MATH
Tool Description: A language model that is excellent at solving complex word math problems. Input should be a fully worded hard word math problem.
Notes: Based on this paper.
Requires LLM: Yes
pal-colored-objects
Tool Name: PAL-COLOR-OBJ
Tool Description: A language model that is wonderful at reasoning about position and the color attributes of objects. Input should be a fully worded hard reasoning problem. Make sure to include all information about the objects AND the final question you want to answer.
Notes: Based on this paper.
Requires LLM: Yes
llm-math
Tool Name: Calculator
Tool Description: Useful for when you need to answer questions about math.
Notes: An instance of the LLMMath chain.
Requires LLM: Yes
open-meteo-api
Tool Name: Open Meteo API
https://python.langchain.com/en/latest/modules/agents/tools/getting_started.html
e38ef613c09c-2
Tool Description: Useful for when you want to get weather information from the OpenMeteo API. The input should be a question in natural language that this API can answer.
Notes: A natural language connection to the Open Meteo API (https://api.open-meteo.com/), specifically the /v1/forecast endpoint.
Requires LLM: Yes
news-api
Tool Name: News API
Tool Description: Use this when you want to get information about the top headlines of current news stories. The input should be a question in natural language that this API can answer.
Notes: A natural language connection to the News API (https://newsapi.org), specifically the /v2/top-headlines endpoint.
Requires LLM: Yes
Extra Parameters: news_api_key (your API key to access this endpoint)
tmdb-api
Tool Name: TMDB API
Tool Description: Useful for when you want to get information from The Movie Database. The input should be a question in natural language that this API can answer.
Notes: A natural language connection to the TMDB API (https://api.themoviedb.org/3), specifically the /search/movie endpoint.
Requires LLM: Yes
Extra Parameters: tmdb_bearer_token (your Bearer Token to access this endpoint - note that this is different from the API key)
google-search
Tool Name: Search
Tool Description: A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query.
Notes: Uses the Google Custom Search API.
Requires LLM: No
Extra Parameters: google_api_key, google_cse_id
For more information on this, see this page.
searx-search
Tool Name: Search
https://python.langchain.com/en/latest/modules/agents/tools/getting_started.html
e38ef613c09c-3
Tool Description: A wrapper around SearxNG meta search engine. Input should be a search query.
Notes: SearxNG is easy to deploy self-hosted. It is a good privacy-friendly alternative to Google Search. Uses the SearxNG API.
Requires LLM: No
Extra Parameters: searx_host
google-serper
Tool Name: Search
Tool Description: A low-cost Google Search API. Useful for when you need to answer questions about current events. Input should be a search query.
Notes: Calls the serper.dev Google Search API and then parses results.
Requires LLM: No
Extra Parameters: serper_api_key
For more information on this, see this page.
wikipedia
Tool Name: Wikipedia
Tool Description: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query.
Notes: Uses the wikipedia Python package to call the MediaWiki API and then parses results.
Requires LLM: No
Extra Parameters: top_k_results
podcast-api
Tool Name: Podcast API
Tool Description: Use the Listen Notes Podcast API to search all podcasts or episodes. The input should be a question in natural language that this API can answer.
Notes: A natural language connection to the Listen Notes Podcast API (https://www.PodcastAPI.com), specifically the /search/ endpoint.
Requires LLM: Yes
Extra Parameters: listen_api_key (your API key to access this endpoint)
openweathermap-api
Tool Name: OpenWeatherMap
Tool Description: A wrapper around OpenWeatherMap API. Useful for fetching current weather information for a specified location. Input should be a location string (e.g. London,GB).
https://python.langchain.com/en/latest/modules/agents/tools/getting_started.html
e38ef613c09c-4
Notes: A connection to the OpenWeatherMap API (https://api.openweathermap.org), specifically the /data/2.5/weather endpoint.
Requires LLM: No
Extra Parameters: openweathermap_api_key (your API key to access this endpoint)
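To make the Extra Parameters convention above concrete, here is a minimal sketch of loading tools that need them. A common pattern is to pass the extra parameters as keyword arguments to load_tools, but the key values below are placeholders and the accepted keyword names should be checked against your installed version.
from langchain.agents import load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
# llm-math requires an LLM; news-api requires an API key passed as a keyword argument.
tools = load_tools(
    ["serpapi", "llm-math", "news-api"],
    llm=llm,
    news_api_key="YOUR_NEWS_API_KEY",  # placeholder, see the news-api entry above
)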
https://python.langchain.com/en/latest/modules/agents/tools/getting_started.html
d97c8164b6c7-0
Tool Input Schema#
By default, tools infer the argument schema by inspecting the function signature. For stricter requirements, a custom input schema can be specified, along with custom validation logic.
from typing import Any, Dict
from langchain.agents import AgentType, initialize_agent
from langchain.llms import OpenAI
from langchain.tools.requests.tool import RequestsGetTool, TextRequestsWrapper
from pydantic import BaseModel, Field, root_validator

llm = OpenAI(temperature=0)
!pip install tldextract > /dev/null
[notice] A new release of pip is available: 23.0.1 -> 23.1
[notice] To update, run: pip install --upgrade pip
import tldextract

_APPROVED_DOMAINS = {
    "langchain",
    "wikipedia",
}

class ToolInputSchema(BaseModel):
    url: str = Field(...)

    @root_validator
    def validate_query(cls, values: Dict[str, Any]) -> Dict:
        url = values["url"]
        domain = tldextract.extract(url).domain
        if domain not in _APPROVED_DOMAINS:
            raise ValueError(f"Domain {domain} is not on the approved list:"
                             f" {sorted(_APPROVED_DOMAINS)}")
        return values

tool = RequestsGetTool(args_schema=ToolInputSchema, requests_wrapper=TextRequestsWrapper())
agent = initialize_agent([tool], llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False)
# This will succeed, since the requested domain passes validation
answer = agent.run("What's the main title on langchain.com?")
print(answer)
https://python.langchain.com/en/latest/modules/agents/tools/tool_input_validation.html
d97c8164b6c7-1
The main title of langchain.com is "LANG CHAIN 🦜️🔗 Official Home Page"
agent.run("What's the main title on google.com?")
---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
Cell In[7], line 1
----> 1 agent.run("What's the main title on google.com?")
File ~/code/lc/lckg/langchain/chains/base.py:213, in Chain.run(self, *args, **kwargs)
    211 if len(args) != 1:
    212     raise ValueError("`run` supports only one positional argument.")
--> 213 return self(args[0])[self.output_keys[0]]
    215 if kwargs and not args:
    216     return self(kwargs)[self.output_keys[0]]
File ~/code/lc/lckg/langchain/chains/base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116 raise e
    117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
    118 return self.prep_outputs(inputs, outputs, return_only_outputs)
File ~/code/lc/lckg/langchain/chains/base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
    107 self.callback_manager.on_chain_start(
    108     {"name": self.__class__.__name__},
    109     inputs,
    110     verbose=self.verbose,
    111 )
    112 try:
--> 113     outputs = self._call(inputs)
    114 except (KeyboardInterrupt, Exception) as e:
https://python.langchain.com/en/latest/modules/agents/tools/tool_input_validation.html
d97c8164b6c7-2
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)
File ~/code/lc/lckg/langchain/agents/agent.py:792, in AgentExecutor._call(self, inputs)
    790 # We now enter the agent loop (until it returns something).
    791 while self._should_continue(iterations, time_elapsed):
--> 792     next_step_output = self._take_next_step(
    793         name_to_tool_map, color_mapping, inputs, intermediate_steps
    794     )
    795     if isinstance(next_step_output, AgentFinish):
    796         return self._return(next_step_output, intermediate_steps)
File ~/code/lc/lckg/langchain/agents/agent.py:695, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps)
    693     tool_run_kwargs["llm_prefix"] = ""
    694 # We then call the tool on the tool input to get an observation
--> 695 observation = tool.run(
    696     agent_action.tool_input,
    697     verbose=self.verbose,
    698     color=color,
    699     **tool_run_kwargs,
    700 )
    701 else:
    702     tool_run_kwargs = self.agent.tool_run_logging_kwargs()
File ~/code/lc/lckg/langchain/tools/base.py:110, in BaseTool.run(self, tool_input, verbose, start_color, color, **kwargs)
    101 def run(
    102     self,
    103     tool_input: Union[str, Dict],
   (...)
    107     **kwargs: Any,
    108 ) -> str:
https://python.langchain.com/en/latest/modules/agents/tools/tool_input_validation.html
d97c8164b6c7-3
    109     """Run the tool."""
--> 110 run_input = self._parse_input(tool_input)
    111 if not self.verbose and verbose is not None:
    112     verbose_ = verbose
File ~/code/lc/lckg/langchain/tools/base.py:71, in BaseTool._parse_input(self, tool_input)
     69 if issubclass(input_args, BaseModel):
     70     key_ = next(iter(input_args.__fields__.keys()))
---> 71 input_args.parse_obj({key_: tool_input})
     72 # Passing as a positional argument is more straightforward for
     73 # backwards compatability
     74 return tool_input
File ~/code/lc/lckg/.venv/lib/python3.11/site-packages/pydantic/main.py:526, in pydantic.main.BaseModel.parse_obj()
File ~/code/lc/lckg/.venv/lib/python3.11/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for ToolInputSchema
__root__
  Domain google is not on the approved list: ['langchain', 'wikipedia'] (type=value_error)
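As a variant, the same check can be written with a per-field pydantic validator instead of a root_validator. Below is a minimal sketch, assuming the same tldextract import and _APPROVED_DOMAINS set defined above; it is an alternative we suggest, not code from the original notebook.
from pydantic import BaseModel, Field, validator

class ToolInputSchema(BaseModel):
    url: str = Field(...)

    @validator("url")
    def validate_url(cls, v: str) -> str:
        # Reject any URL whose registered domain is not on the approved list.
        domain = tldextract.extract(v).domain
        if domain not in _APPROVED_DOMAINS:
            raise ValueError(f"Domain {domain} is not on the approved list: {sorted(_APPROVED_DOMAINS)}")
        return v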
https://python.langchain.com/en/latest/modules/agents/tools/tool_input_validation.html
63efb9853a17-0
Wikipedia#
Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.
First, you need to install the wikipedia python package.
!pip install wikipedia
from langchain.utilities import WikipediaAPIWrapper
wikipedia = WikipediaAPIWrapper()
wikipedia.run('HUNTER X HUNTER')
https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html
63efb9853a17-1
'Page: Hunter × Hunter\nSummary: Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced "hunter hunter") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The story focuses on a young boy named Gon Freecss who discovers that his father, who left him at a young age, is actually a world-renowned Hunter, a licensed professional who specializes in fantastical pursuits such as locating rare or unidentified animal species, treasure hunting, surveying unexplored enclaves, or hunting down lawless individuals. Gon departs on a journey to become a Hunter and eventually find his father. Along the way, Gon meets various other Hunters and encounters the paranormal.\nHunter × Hunter was adapted into a 62-episode anime television series produced by Nippon Animation and directed by Kazuhiro Furuhashi, which ran on Fuji Television from October 1999 to March 2001. Three separate original video animations (OVAs) totaling 30 episodes were subsequently produced by Nippon Animation and released in Japan from 2002 to 2004. A second anime television series by Madhouse aired on Nippon Television from October 2011 to September 2014, totaling 148 episodes, with two animated theatrical films released in 2013. There are also numerous audio albums, video games, musicals, and other media based on Hunter × Hunter.\nThe manga has been translated into English and released in North America by Viz Media since April 2005. Both television series have been also licensed by Viz Media, with the first series having aired on the Funimation Channel in 2009
https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html
63efb9853a17-2
and the second series broadcast on Adult Swim\'s Toonami programming block from April 2016 to June 2019.\nHunter × Hunter has been a huge critical and financial success and has become one of the best-selling manga series of all time, having over 84 million copies in circulation by July 2022.\n\nPage: Hunter × Hunter (2011 TV series)\nSummary: Hunter × Hunter is an anime television series that aired from 2011 to 2014 based on Yoshihiro Togashi\'s manga series Hunter × Hunter. The story begins with a young boy named Gon Freecss, who one day discovers that the father who he thought was dead, is in fact alive and well. He learns that his father, Ging, is a legendary "Hunter", an individual who has proven themselves an elite member of humanity. Despite the fact that Ging left his son with his relatives in order to pursue his own dreams, Gon becomes determined to follow in his father\'s footsteps, pass the rigorous "Hunter Examination", and eventually find his father to become a Hunter in his own right.\nThis new Hunter × Hunter anime was announced on July 24, 2011. It is a complete reboot of the anime adaptation starting from the beginning of the manga, with no connections to the first anime from 1999. Produced by Nippon TV, VAP, Shueisha and Madhouse, the series is directed by Hiroshi Kōjina, with Atsushi Maekawa and Tsutomu Kamishiro handling series composition, Takahiro Yoshimatsu designing the characters and Yoshihisa Hirano composing the music. Instead of having the old cast reprise their roles for the new adaptation, the series features an entirely new cast to voice the characters. The new series premiered airing weekly on Nippon TV and the nationwide
https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html