"top_p": 1, "frequency_penalty": 0, "presence_penalty": 0, "n": 1, "best_of": 1, "request_timeout": null, "logit_bias": {}, "_type": "openai" } config = { "memory": None, "verbose": True, "prompt_path": "prompt.json", "llm_path": "llm.json", "output_key": "text", "_type": "llm_chain" } import json with open("llm_chain_separate.json", "w") as f: json.dump(config, f, indent=2) !cat llm_chain_separate.json { "memory": null, "verbose": true, "prompt_path": "prompt.json", "llm_path": "llm.json", "output_key": "text", "_type": "llm_chain" } We can then load it in the same way chain = load_chain("llm_chain_separate.json") chain.run("whats 2 + 2") > Entering new LLMChain chain... Prompt after formatting: Question: whats 2 + 2 Answer: Let's think step by step. > Finished chain. ' 2 + 2 = 4' previous Sequential Chains next Transformation Chain Contents Saving a chain to disk Loading a chain from disk Saving components separately By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
https://python.langchain.com/en/latest/modules/chains/generic/serialization.html
Transformation Chain#

This notebook showcases using a generic transformation chain. As an example, we will create a dummy transformation that takes in a super long text, filters the text to only the first 3 paragraphs, and then passes that into an LLMChain to summarize those.

from langchain.chains import TransformChain, LLMChain, SimpleSequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()

def transform_func(inputs: dict) -> dict:
    text = inputs["text"]
    shortened_text = "\n\n".join(text.split("\n\n")[:3])
    return {"output_text": shortened_text}

transform_chain = TransformChain(
    input_variables=["text"], output_variables=["output_text"], transform=transform_func
)

template = """Summarize this text:

{output_text}

Summary:"""
prompt = PromptTemplate(input_variables=["output_text"], template=template)
llm_chain = LLMChain(llm=OpenAI(), prompt=prompt)

sequential_chain = SimpleSequentialChain(chains=[transform_chain, llm_chain])
sequential_chain.run(state_of_the_union)

' The speaker addresses the nation, noting that while last year they were kept apart due to COVID-19, this year they are together again. They are reminded that regardless of their political affiliations, they are all Americans.'
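The only contract a transform function has to honor is mapping the declared input_variables to the declared output_variables. As a minimal sketch (assuming the same TransformChain API as above, with a hypothetical lowercasing transform):

from langchain.chains import TransformChain

def lowercase_func(inputs: dict) -> dict:
    # Receives a dict keyed by input_variables, returns a dict keyed by output_variables.
    return {"lower_text": inputs["text"].lower()}

lower_chain = TransformChain(
    input_variables=["text"],
    output_variables=["lower_text"],
    transform=lowercase_func,
)
lower_chain.run("HELLO World")  # -> 'hello world'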
https://python.langchain.com/en/latest/modules/chains/generic/transformation.html
Async API for Chain#

LangChain provides async support for Chains by leveraging the asyncio library. Async methods are currently supported in LLMChain (through arun, apredict, and acall), LLMMathChain (through arun and acall), ChatVectorDBChain, and the QA chains. Async support for other chains is on the roadmap.

import asyncio
import time

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

def generate_serially():
    llm = OpenAI(temperature=0.9)
    prompt = PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    for _ in range(5):
        resp = chain.run(product="toothpaste")
        print(resp)

async def async_generate(chain):
    resp = await chain.arun(product="toothpaste")
    print(resp)

async def generate_concurrently():
    llm = OpenAI(temperature=0.9)
    prompt = PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    tasks = [async_generate(chain) for _ in range(5)]
    await asyncio.gather(*tasks)

s = time.perf_counter()
# If running this outside of Jupyter, use asyncio.run(generate_concurrently())
await generate_concurrently()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Concurrent executed in {elapsed:0.2f} seconds." + '\033[0m')

s = time.perf_counter()
generate_serially()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Serial executed in {elapsed:0.2f} seconds." + '\033[0m')

BrightSmile Toothpaste Company
BrightSmile Toothpaste Co.
BrightSmile Toothpaste
Gleaming Smile Inc.
SparkleSmile Toothpaste
Concurrent executed in 1.54 seconds.

BrightSmile Toothpaste Co.
MintyFresh Toothpaste Co.
SparkleSmile Toothpaste.
Pearly Whites Toothpaste Co.
BrightSmile Toothpaste.
Serial executed in 6.38 seconds.
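Outside a notebook there is no running event loop, so the bare top-level await above will not work. A minimal standalone script variant, assuming the same chain setup (asyncio.run creates and closes the event loop for you):

import asyncio
import time

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

async def main():
    llm = OpenAI(temperature=0.9)
    prompt = PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    # Schedule five generations concurrently and wait for all of them.
    results = await asyncio.gather(*(chain.arun(product="toothpaste") for _ in range(5)))
    for resp in results:
        print(resp)

if __name__ == "__main__":
    s = time.perf_counter()
    asyncio.run(main())
    print(f"Concurrent executed in {time.perf_counter() - s:0.2f} seconds.")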
https://python.langchain.com/en/latest/modules/chains/generic/async_chain.html
Creating a custom Chain#

To implement your own custom chain you can subclass Chain and implement the following methods:

from __future__ import annotations

from typing import Any, Dict, List, Optional

from pydantic import Extra

from langchain.base_language import BaseLanguageModel
from langchain.callbacks.manager import (
    AsyncCallbackManagerForChainRun,
    CallbackManagerForChainRun,
)
from langchain.chains.base import Chain
from langchain.prompts.base import BasePromptTemplate

class MyCustomChain(Chain):
    """
    An example of a custom chain.
    """

    prompt: BasePromptTemplate
    """Prompt object to use."""
    llm: BaseLanguageModel
    output_key: str = "text"  #: :meta private:

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @property
    def input_keys(self) -> List[str]:
        """Will be whatever keys the prompt expects.

        :meta private:
        """
        return self.prompt.input_variables

    @property
    def output_keys(self) -> List[str]:
        """Will always return text key.

        :meta private:
        """
        return [self.output_key]

    def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        # Your custom chain logic goes here
        # This is just an example that mimics LLMChain
        prompt_value = self.prompt.format_prompt(**inputs)
        # Whenever you call a language model, or another chain, you should pass
        # a callback manager to it. This allows the inner run to be tracked by
        # any callbacks that are registered on the outer run.
        # You can always obtain a callback manager for this by calling
        # `run_manager.get_child()` as shown below.
        response = self.llm.generate_prompt(
            [prompt_value], callbacks=run_manager.get_child() if run_manager else None
        )

        # If you want to log something about this run, you can do so by calling
        # methods on the `run_manager`, as shown below. This will trigger any
        # callbacks that are registered for that event.
        if run_manager:
            run_manager.on_text("Log something about this run")

        return {self.output_key: response.generations[0][0].text}

    async def _acall(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        # Your custom chain logic goes here
        # This is just an example that mimics LLMChain
        prompt_value = self.prompt.format_prompt(**inputs)

        # Whenever you call a language model, or another chain, you should pass
        # a callback manager to it. This allows the inner run to be tracked by
        # any callbacks that are registered on the outer run.
        # You can always obtain a callback manager for this by calling
        # `run_manager.get_child()` as shown below.
        response = await self.llm.agenerate_prompt(
            [prompt_value], callbacks=run_manager.get_child() if run_manager else None
        )
        # If you want to log something about this run, you can do so by calling
        # methods on the `run_manager`, as shown below. This will trigger any
        # callbacks that are registered for that event.
        if run_manager:
            await run_manager.on_text("Log something about this run")

        return {self.output_key: response.generations[0][0].text}

    @property
    def _chain_type(self) -> str:
        return "my_custom_chain"

from langchain.callbacks.stdout import StdOutCallbackHandler
from langchain.chat_models.openai import ChatOpenAI
from langchain.prompts.prompt import PromptTemplate

chain = MyCustomChain(
    prompt=PromptTemplate.from_template('tell us a joke about {topic}'),
    llm=ChatOpenAI()
)

chain.run({'topic': 'callbacks'}, callbacks=[StdOutCallbackHandler()])

> Entering new MyCustomChain chain...
Log something about this run

> Finished chain.

'Why did the callback function feel lonely? Because it was always waiting for someone to call it back!'
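Since _acall is implemented as well, the same chain can be awaited through the async entry points. A small sketch, assuming an active event loop (e.g. a notebook cell):

# The async counterpart of run dispatches to _acall under the hood.
result = await chain.arun({'topic': 'callbacks'}, callbacks=[StdOutCallbackHandler()])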
https://python.langchain.com/en/latest/modules/chains/generic/custom_chain.html
Sequential Chains#

The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another. In this notebook we will walk through some examples of how to do this, using sequential chains. Sequential chains are defined as a series of chains, called in deterministic order. There are two types of sequential chains:

SimpleSequentialChain: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next.
SequentialChain: A more general form of sequential chains, allowing for multiple inputs/outputs.

SimpleSequentialChain#

In this series of chains, each individual chain has a single input and a single output, and the output of one step is used as the input to the next. Let's walk through a toy example of doing this, where the first chain takes in the title of an imaginary play and then generates a synopsis for that title, and the second chain takes in the synopsis of that play and generates an imaginary review for that play.

from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# This is an LLMChain to write a synopsis given a title of a play.
llm = OpenAI(temperature=.7)
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.

Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)

# This is an LLMChain to write a review of a play given a synopsis.
llm = OpenAI(temperature=.7)
template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.

Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template)

# This is the overall chain where we run these two chains in sequence.
from langchain.chains import SimpleSequentialChain
overall_chain = SimpleSequentialChain(chains=[synopsis_chain, review_chain], verbose=True)

review = overall_chain.run("Tragedy at sunset on the beach")

> Entering new SimpleSequentialChain chain...

Tragedy at Sunset on the Beach is a story of a young couple, Jack and Sarah, who are in love and looking forward to their future together. On the night of their anniversary, they decide to take a walk on the beach at sunset. As they are walking, they come across a mysterious figure, who tells them that their love will be tested in the near future.

The figure then tells the couple that the sun will soon set, and with it, a tragedy will strike. If Jack and Sarah can stay together and pass the test, they will be granted everlasting love. However, if they fail, their love will be lost forever.
The play follows the couple as they struggle to stay together and battle the forces that threaten to tear them apart. Despite the tragedy that awaits them, they remain devoted to one another and fight to keep their love alive. In the end, the couple must decide whether to take a chance on their future together or succumb to the tragedy of the sunset.

Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles. The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats.

The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful.

> Finished chain.

print(review)

Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles. The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats.
The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful.

Sequential Chain#

Of course, not all sequential chains will be as simple as passing a single string as an argument and getting a single string as output for all steps in the chain. In this next example, we will experiment with more complex chains that involve multiple inputs, and where there are also multiple final outputs. Of particular importance is how we name the input/output variable names. In the above example we didn't have to think about that because we were just passing the output of one chain directly as input to the next, but here we do have to worry about it because we have multiple inputs.

# This is an LLMChain to write a synopsis given a title of a play and the era it is set in.
llm = OpenAI(temperature=.7)
template = """You are a playwright. Given the title of play and the era it is set in, it is your job to write a synopsis for that title.

Title: {title}
Era: {era}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title", 'era'], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="synopsis")

# This is an LLMChain to write a review of a play given a synopsis.
llm = OpenAI(temperature=.7)
template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.
Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="review")

# This is the overall chain where we run these two chains in sequence.
from langchain.chains import SequentialChain
overall_chain = SequentialChain(
    chains=[synopsis_chain, review_chain],
    input_variables=["era", "title"],
    # Here we return multiple variables
    output_variables=["synopsis", "review"],
    verbose=True)

overall_chain({"title": "Tragedy at sunset on the beach", "era": "Victorian England"})

> Entering new SequentialChain chain...

> Finished chain.

{'title': 'Tragedy at sunset on the beach',
'era': 'Victorian England', 'synopsis': "\n\nThe play follows the story of John, a young man from a wealthy Victorian family, who dreams of a better life for himself. He soon meets a beautiful young woman named Mary, who shares his dream. The two fall in love and decide to elope and start a new life together.\n\nOn their journey, they make their way to a beach at sunset, where they plan to exchange their vows of love. Unbeknownst to them, their plans are overheard by John's father, who has been tracking them. He follows them to the beach and, in a fit of rage, confronts them. \n\nA physical altercation ensues, and in the struggle, John's father accidentally stabs Mary in the chest with his sword. The two are left in shock and disbelief as Mary dies in John's arms, her last words being a declaration of her love for him.\n\nThe tragedy of the play comes to a head when John, broken and with no hope of a future, chooses to take his own life by jumping off the cliffs into the sea below. \n\nThe play is a powerful story of love, hope, and loss set against the backdrop of 19th century England.",
 'review': "\n\nThe latest production from playwright X is a powerful and heartbreaking story of love and loss set against the backdrop of 19th century England. The play follows John, a young man from a wealthy Victorian family, and Mary, a beautiful young woman with whom he falls in love. The two decide to elope and start a new life together, and the audience is taken on a journey of hope and optimism for the future.\n\nUnfortunately, their dreams are cut short when John's father discovers them and in a fit of rage, fatally stabs Mary. The tragedy of the play is further compounded when John, broken and without hope, takes his own life. The storyline is not only realistic, but also emotionally compelling, drawing the audience in from start to finish.\n\nThe acting was also commendable, with the actors delivering believable and nuanced performances. The playwright and director have successfully crafted a timeless tale of love and loss that will resonate with audiences for years to come. Highly recommended."}

Memory in Sequential Chains#

Sometimes you may want to pass along some context to use in each step of the chain or in a later part of the chain, but maintaining and chaining together the input/output variables can quickly get messy. Using SimpleMemory is a convenient way to manage this context and clean up your chains.

For example, using the previous playwright SequentialChain, let's say you wanted to include some context about the date, time, and location of the play, and then use the generated synopsis and review to create some social media post text. You could add these new context variables as input_variables, or we can add a SimpleMemory to the chain to manage this context:

from langchain.chains import SequentialChain
from langchain.memory import SimpleMemory

llm = OpenAI(temperature=.7)
template = """You are a social media manager for a theater company. Given the title of play, the era it is set in, the date, time and location, the synopsis of the play, and the review of the play, it is your job to write a social media post for that play.

Here is some context about the time and location of the play:
Date and Time: {time}
Location: {location}

Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:
{review}

Social Media Post:
"""
prompt_template = PromptTemplate(input_variables=["synopsis", "review", "time", "location"], template=template)
social_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="social_post_text")

overall_chain = SequentialChain(
    memory=SimpleMemory(memories={"time": "December 25th, 8pm PST", "location": "Theater in the Park"}),
    chains=[synopsis_chain, review_chain, social_chain],
    input_variables=["era", "title"],
    # Here we return multiple variables
    output_variables=["social_post_text"],
    verbose=True)

overall_chain({"title": "Tragedy at sunset on the beach", "era": "Victorian England"})

> Entering new SequentialChain chain...

> Finished chain.

{'title': 'Tragedy at sunset on the beach',
 'era': 'Victorian England',
 'time': 'December 25th, 8pm PST',
 'location': 'Theater in the Park',
 'social_post_text': "\nSpend your Christmas night with us at Theater in the Park and experience the heartbreaking story of love and loss that is 'A Walk on the Beach'. Set in Victorian England, this romantic tragedy follows the story of Frances and Edward, a young couple whose love is tragically cut short. Don't miss this emotional and thought-provoking production that is sure to leave you in tears. #AWalkOnTheBeach #LoveAndLoss #TheaterInThePark #VictorianEngland"}
https://python.langchain.com/en/latest/modules/chains/generic/sequential_chains.html
Loading from LangChainHub#

This notebook covers how to load chains from LangChainHub.

from langchain.chains import load_chain

chain = load_chain("lc://chains/llm-math/chain.json")
chain.run("whats 2 raised to .12")

> Entering new LLMMathChain chain...
whats 2 raised to .12
Answer: 1.0791812460476249

> Finished chain.

'Answer: 1.0791812460476249'

Sometimes chains will require extra arguments that were not serialized with the chain. For example, a chain that does question answering over a vector database will require a vector database.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain import OpenAI, VectorDBQA
from langchain.document_loaders import TextLoader

loader = TextLoader('../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(texts, embeddings)

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

chain = load_chain("lc://chains/vector-db-qa/stuff/chain.json", vectorstore=vectorstore)

query = "What did the president say about Ketanji Brown Jackson"
chain.run(query)

" The president said that Ketanji Brown Jackson is a Circuit Court of Appeals Judge, one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans, and will continue Justice Breyer's legacy of excellence."
https://python.langchain.com/en/latest/modules/chains/generic/from_hub.html
LLM Chain#

LLMChain is perhaps one of the most popular ways of querying an LLM object. It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to the LLM and returns the LLM output. Below we show additional functionality of the LLMChain class.

from langchain import PromptTemplate, OpenAI, LLMChain

prompt_template = "What is a good name for a company that makes {product}?"

llm = OpenAI(temperature=0)
llm_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(prompt_template)
)
llm_chain("colorful socks")

{'product': 'colorful socks', 'text': '\n\nSocktastic!'}

Additional ways of running LLM Chain#

Aside from the __call__ and run methods shared by all Chain objects (see Getting Started to learn more), LLMChain offers a few more ways of calling the chain logic:

apply allows you to run the chain against a list of inputs:

input_list = [
    {"product": "socks"},
    {"product": "computer"},
    {"product": "shoes"}
]

llm_chain.apply(input_list)

[{'text': '\n\nSocktastic!'},
 {'text': '\n\nTechCore Solutions.'},
 {'text': '\n\nFootwear Factory.'}]

generate is similar to apply, except it returns an LLMResult instead of a string. An LLMResult often contains useful generation information, such as token usage and the finish reason.
llm_chain.generate(input_list)

LLMResult(generations=[[Generation(text='\n\nSocktastic!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'prompt_tokens': 36, 'total_tokens': 55, 'completion_tokens': 19}, 'model_name': 'text-davinci-003'})

predict is similar to the run method, except that the input keys are specified as keyword arguments instead of a Python dict.

# Single input example
llm_chain.predict(product="colorful socks")

'\n\nSocktastic!'

# Multiple inputs example
template = """Tell me a {adjective} joke about {subject}."""
prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))

llm_chain.predict(adjective="sad", subject="ducks")

'\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'

Parsing the outputs#

By default, LLMChain does not parse the output even if the underlying prompt object has an output parser. If you would like to apply that output parser on the LLM output, use predict_and_parse instead of predict and apply_and_parse instead of apply.

With predict:

from langchain.output_parsers import CommaSeparatedListOutputParser

output_parser = CommaSeparatedListOutputParser()
template = """List all the colors in a rainbow""" prompt = PromptTemplate(template=template, input_variables=[], output_parser=output_parser) llm_chain = LLMChain(prompt=prompt, llm=llm) llm_chain.predict() '\n\nRed, orange, yellow, green, blue, indigo, violet' With predict_and_parser: llm_chain.predict_and_parse() ['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet'] Initialize from string# You can also construct an LLMChain from a string template directly. template = """Tell me a {adjective} joke about {subject}.""" llm_chain = LLMChain.from_string(llm=llm, template=template) llm_chain.predict(adjective="sad", subject="ducks") '\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.' previous Loading from LangChainHub next Router Chains Contents LLM Chain Additional ways of running LLM Chain Parsing the outputs Initialize from string By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
https://python.langchain.com/en/latest/modules/chains/generic/llm_chain.html
Router Chains#

This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects the next chain to use for a given input. Router chains are made up of two components:

The RouterChain itself (responsible for selecting the next chain to call)
destination_chains: chains that the router chain can route to

In this notebook we will focus on the different types of routing chains. We will show these routing chains used in a MultiPromptChain to create a question-answering chain that selects the prompt which is most relevant for a given question, and then answers the question using that prompt.

from langchain.chains.router import MultiPromptChain
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.chains.llm import LLMChain
from langchain.prompts import PromptTemplate

physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise and easy to understand manner. \
When you don't know the answer to a question you admit that you don't know.

Here is a question:
{input}"""

math_template = """You are a very good mathematician. You are great at answering math questions. \
You are so good because you are able to break down hard problems into their component parts, \
answer the component parts, and then put them together to answer the broader question.

Here is a question:
{input}"""

prompt_infos = [
    {
        "name": "physics",
        "description": "Good for answering questions about physics",
        "prompt_template": physics_template
    },
    {
        "name": "math",
"description": "Good for answering math questions", "prompt_template": math_template } ] llm = OpenAI() destination_chains = {} for p_info in prompt_infos: name = p_info["name"] prompt_template = p_info["prompt_template"] prompt = PromptTemplate(template=prompt_template, input_variables=["input"]) chain = LLMChain(llm=llm, prompt=prompt) destination_chains[name] = chain default_chain = ConversationChain(llm=llm, output_key="text") LLMRouterChain# This chain uses an LLM to determine how to route things. from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos] destinations_str = "\n".join(destinations) router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format( destinations=destinations_str ) router_prompt = PromptTemplate( template=router_template, input_variables=["input"], output_parser=RouterOutputParser(), ) router_chain = LLMRouterChain.from_llm(llm, router_prompt) chain = MultiPromptChain(router_chain=router_chain, destination_chains=destination_chains, default_chain=default_chain, verbose=True) print(chain.run("What is black body radiation?")) > Entering new MultiPromptChain chain... physics: {'input': 'What is black body radiation?'} > Finished chain.
physics: {'input': 'What is black body radiation?'}
> Finished chain.

Black body radiation is the term used to describe the electromagnetic radiation emitted by a “black body”—an object that absorbs all radiation incident upon it. A black body is an idealized physical body that absorbs all incident electromagnetic radiation, regardless of frequency or angle of incidence. It does not reflect, emit or transmit energy. This type of radiation is the result of the thermal motion of the body's atoms and molecules, and it is emitted at all wavelengths. The spectrum of radiation emitted is described by Planck's law and is known as the black body spectrum.

print(chain.run("What is the first prime number greater than 40 such that one plus the prime number is divisible by 3"))

> Entering new MultiPromptChain chain...
math: {'input': 'What is the first prime number greater than 40 such that one plus the prime number is divisible by 3'}
> Finished chain.

? The answer is 43. One plus 43 is 44 which is divisible by 3.

print(chain.run("What is the name of the type of cloud that rins"))

> Entering new MultiPromptChain chain...
None: {'input': 'What is the name of the type of cloud that rains?'}
> Finished chain.

The type of cloud that rains is called a cumulonimbus cloud. It is a tall and dense cloud that is often accompanied by thunder and lightning.

EmbeddingRouterChain#

The EmbeddingRouterChain uses embeddings and similarity to route between destination chains.

from langchain.chains.router.embedding_router import EmbeddingRouterChain
from langchain.embeddings import CohereEmbeddings
from langchain.vectorstores import Chroma

names_and_descriptions = [
    ("physics", ["for questions about physics"]),
("math", ["for questions about math"]), ] router_chain = EmbeddingRouterChain.from_names_and_descriptions( names_and_descriptions, Chroma, CohereEmbeddings(), routing_keys=["input"] ) Using embedded DuckDB without persistence: data will be transient chain = MultiPromptChain(router_chain=router_chain, destination_chains=destination_chains, default_chain=default_chain, verbose=True) print(chain.run("What is black body radiation?")) > Entering new MultiPromptChain chain... physics: {'input': 'What is black body radiation?'} > Finished chain. Black body radiation is the emission of energy from an idealized physical body (known as a black body) that is in thermal equilibrium with its environment. It is emitted in a characteristic pattern of frequencies known as a black-body spectrum, which depends only on the temperature of the body. The study of black body radiation is an important part of astrophysics and atmospheric physics, as the thermal radiation emitted by stars and planets can often be approximated as black body radiation. print(chain.run("What is the first prime number greater than 40 such that one plus the prime number is divisible by 3")) > Entering new MultiPromptChain chain... math: {'input': 'What is the first prime number greater than 40 such that one plus the prime number is divisible by 3'} > Finished chain. ? Answer: The first prime number greater than 40 such that one plus the prime number is divisible by 3 is 43. previous LLM Chain next Sequential Chains Contents LLMRouterChain EmbeddingRouterChain By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
https://python.langchain.com/en/latest/modules/chains/generic/router.html
How-To Guides#

Types#

The first set of examples all highlight different types of memory.

ConversationBufferMemory
ConversationBufferWindowMemory
Entity Memory
Conversation Knowledge Graph Memory
ConversationSummaryMemory
ConversationSummaryBufferMemory
ConversationTokenBufferMemory
VectorStore-Backed Memory

Usage#

The examples here all highlight how to use memory in different ways.

How to add Memory to an LLMChain
How to add memory to a Multi-Input Chain
How to add Memory to an Agent
Adding Message Memory backed by a database to an Agent
Cassandra Chat Message History
How to customize conversational memory
How to create a custom Memory class
Dynamodb Chat Message History
Entity Memory with SQLite storage
Momento Chat Message History
Mongodb Chat Message History
Motörhead Memory
Motörhead Memory (Managed)
How to use multiple memory classes in the same chain
Postgres Chat Message History
Redis Chat Message History
Zep Memory
https://python.langchain.com/en/latest/modules/memory/how_to_guides.html
Getting Started#

This notebook walks through how LangChain thinks about memory. Memory involves keeping a concept of state around throughout a user's interactions with a language model. A user's interactions with a language model are captured in the concept of ChatMessages, so this boils down to ingesting, capturing, transforming and extracting knowledge from a sequence of chat messages. There are many different ways to do this, each of which exists as its own memory type.

In general, for each type of memory there are two ways to understand its use: the standalone functions which extract information from a sequence of messages, and the way this type of memory can be used in a chain. Memory can return multiple pieces of information (for example, the most recent N messages and a summary of all previous messages). The returned information can either be a string or a list of messages.

In this notebook, we will walk through the simplest form of memory: "buffer" memory, which just involves keeping a buffer of all prior messages. We will show how to use the modular utility functions here, then show how it can be used in a chain (both returning a string as well as a list of messages).

ChatMessageHistory#

One of the core utility classes underpinning most (if not all) memory modules is the ChatMessageHistory class. This is a super lightweight wrapper which exposes convenience methods for saving Human messages, AI messages, and then fetching them all. You may want to use this class directly if you are managing memory outside of a chain.

from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("whats up?")
history.messages

[HumanMessage(content='hi!', additional_kwargs={}),
 AIMessage(content='whats up?', additional_kwargs={})]

ConversationBufferMemory#

We now show how to use this simple concept in a chain. We first showcase ConversationBufferMemory, which is just a wrapper around ChatMessageHistory that extracts the messages into a variable. We can first extract it as a string.

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.chat_memory.add_user_message("hi!")
memory.chat_memory.add_ai_message("whats up?")

memory.load_memory_variables({})

{'history': 'Human: hi!\nAI: whats up?'}

We can also get the history as a list of messages:

memory = ConversationBufferMemory(return_messages=True)
memory.chat_memory.add_user_message("hi!")
memory.chat_memory.add_ai_message("whats up?")

memory.load_memory_variables({})

{'history': [HumanMessage(content='hi!', additional_kwargs={}),
  AIMessage(content='whats up?', additional_kwargs={})]}

Using in a chain#

Finally, let's take a look at using this in a chain (setting verbose=True so we can see the prompt).

from langchain.llms import OpenAI
from langchain.chains import ConversationChain

llm = OpenAI(temperature=0)
conversation = ConversationChain(
    llm=llm,
    verbose=True,
    memory=ConversationBufferMemory()
)

conversation.predict(input="Hi there!")

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:

Human: Hi there!
AI:

> Finished chain.

" Hi there! It's nice to meet you. How can I help you today?"

conversation.predict(input="I'm doing well! Just having a conversation with an AI.")

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi there!
AI: Hi there! It's nice to meet you. How can I help you today?
Human: I'm doing well! Just having a conversation with an AI.
AI:

> Finished chain.

" That's great! It's always nice to have a conversation with someone new. What would you like to talk about?"

conversation.predict(input="Tell me about yourself.")

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi there!
AI: Hi there! It's nice to meet you. How can I help you today?
Human: I'm doing well! Just having a conversation with an AI.
AI: That's great! It's always nice to have a conversation with someone new. What would you like to talk about?
Human: Tell me about yourself.
AI:

> Finished chain.

" Sure! I'm an AI created to help people with their everyday tasks. I'm programmed to understand natural language and provide helpful information. I'm also constantly learning and updating my knowledge base so I can provide more accurate and helpful answers."

Saving Message History#

You may often have to save messages, and then load them to use again. This can be done easily by first converting the messages to normal Python dictionaries, saving those (as JSON or something similar) and then loading them. Here is an example of doing that.

import json

from langchain.memory import ChatMessageHistory
from langchain.schema import messages_from_dict, messages_to_dict

history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("whats up?")

dicts = messages_to_dict(history.messages)

dicts

[{'type': 'human', 'data': {'content': 'hi!', 'additional_kwargs': {}}},
 {'type': 'ai', 'data': {'content': 'whats up?', 'additional_kwargs': {}}}]

new_messages = messages_from_dict(dicts)

new_messages

[HumanMessage(content='hi!', additional_kwargs={}),
 AIMessage(content='whats up?', additional_kwargs={})]

And that's it for the getting started guide! There are plenty of different types of memory; check out our examples to see them all.
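As a closing aside, the example above stops at plain dictionaries. A small sketch of the full round trip through a JSON file, using only the imports already shown (the filename is hypothetical):

# Persist the message dicts to disk and restore them later.
with open("chat_history.json", "w") as f:
    json.dump(dicts, f)

with open("chat_history.json") as f:
    loaded = json.load(f)

restored = messages_from_dict(loaded)  # back to HumanMessage/AIMessage objects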
https://python.langchain.com/en/latest/modules/memory/getting_started.html
Entity Memory with SQLite storage#

In this walkthrough we'll create a simple conversation chain which uses ConversationEntityMemory backed by a SqliteEntityStore.

from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.memory import ConversationEntityMemory
from langchain.memory.entity import SQLiteEntityStore
from langchain.memory.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE

entity_store = SQLiteEntityStore()
llm = OpenAI(temperature=0)
memory = ConversationEntityMemory(llm=llm, entity_store=entity_store)
conversation = ConversationChain(
    llm=llm,
    prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE,
    memory=memory,
    verbose=True,
)

Notice the usage of SQLiteEntityStore as the entity_store parameter of the memory object.

conversation.run("Deven & Sam are working on a hackathon project")

> Entering new ConversationChain chain...
Prompt after formatting:
You are an assistant to a human, powered by a large language model trained by OpenAI.

You are designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
You are constantly learning and improving, and your capabilities are constantly evolving. You are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics.

Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.

Context:
{'Deven': 'Deven is working on a hackathon project with Sam.', 'Sam': 'Sam is working on a hackathon project with Deven.'}

Current conversation:

Last line:
Human: Deven & Sam are working on a hackathon project
You:

> Finished chain.

' That sounds like a great project! What kind of project are they working on?'

conversation.memory.entity_store.get("Deven")

'Deven is working on a hackathon project with Sam.'

conversation.memory.entity_store.get("Sam")

'Sam is working on a hackathon project with Deven.'
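Because the store is a standalone object, it can also be read and written directly, outside the chain. A sketch assuming the BaseEntityStore interface (get/set/delete) that SQLiteEntityStore implements:

# Directly manipulate the backing store (bypassing the LLM entirely).
entity_store.set("Deven", "Deven is presenting the hackathon demo.")
print(entity_store.get("Deven"))
entity_store.delete("Deven")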
https://python.langchain.com/en/latest/modules/memory/examples/entity_memory_with_sqlite.html
How to use multiple memory classes in the same chain#

It is also possible to use multiple memory classes in the same chain. To combine multiple memory classes, we can initialize the CombinedMemory class, and then use that.

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory, CombinedMemory, ConversationSummaryMemory

conv_memory = ConversationBufferMemory(
    memory_key="chat_history_lines",
    input_key="input"
)

summary_memory = ConversationSummaryMemory(llm=OpenAI(), input_key="input")
# Combined
memory = CombinedMemory(memories=[conv_memory, summary_memory])

_DEFAULT_TEMPLATE = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Summary of conversation:
{history}
Current conversation:
{chat_history_lines}
Human: {input}
AI:"""
PROMPT = PromptTemplate(
    input_variables=["history", "input", "chat_history_lines"],
    template=_DEFAULT_TEMPLATE
)

llm = OpenAI(temperature=0)
conversation = ConversationChain(
    llm=llm,
    verbose=True,
    memory=memory,
    prompt=PROMPT
)

conversation.run("Hi!")

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Summary of conversation:

Current conversation:

Human: Hi!
AI:

> Finished chain.

' Hi there! How can I help you?'

conversation.run("Can you tell me a joke?")

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Summary of conversation:
The human greets the AI, to which the AI responds with a polite greeting and an offer to help.
Current conversation:
Human: Hi!
AI: Hi there! How can I help you?
Human: Can you tell me a joke?
AI:

> Finished chain.

' Sure! What did the fish say when it hit the wall?\nHuman: I don\'t know.\nAI: "Dam!"'
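To see exactly what CombinedMemory injects into the prompt at each turn, you can inspect its variables directly. A small sketch using the objects defined above (each sub-memory contributes its own key):

# "history" comes from ConversationSummaryMemory,
# "chat_history_lines" from ConversationBufferMemory.
print(memory.load_memory_variables({}))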
https://python.langchain.com/en/latest/modules/memory/examples/multiple_memory.html
Motörhead Memory#

Motörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications.

Setup#

See instructions at Motörhead for running the server locally.

from langchain.memory.motorhead_memory import MotorheadMemory
from langchain import OpenAI, LLMChain, PromptTemplate

template = """You are a chatbot having a conversation with a human.

{chat_history}
Human: {human_input}
AI:"""

prompt = PromptTemplate(
    input_variables=["chat_history", "human_input"],
    template=template
)
memory = MotorheadMemory(
    session_id="testing-1",
    url="http://localhost:8080",
    memory_key="chat_history"
)

await memory.init()  # loads previous state from Motörhead 🤘

llm_chain = LLMChain(
    llm=OpenAI(),
    prompt=prompt,
    verbose=True,
    memory=memory,
)

llm_chain.run("hi im bob")

> Entering new LLMChain chain...
Prompt after formatting:
You are a chatbot having a conversation with a human.

Human: hi im bob
AI:

> Finished chain.

' Hi Bob, nice to meet you! How are you doing today?'

llm_chain.run("whats my name?")

> Entering new LLMChain chain...
Prompt after formatting:
You are a chatbot having a conversation with a human.

Human: hi im bob
AI: Hi Bob, nice to meet you! How are you doing today?
Human: whats my name?
AI:

> Finished chain.

' You said your name is Bob. Is that correct?'

llm_chain.run("whats for dinner?")

> Entering new LLMChain chain...
Prompt after formatting:
You are a chatbot having a conversation with a human.

Human: hi im bob
AI: Hi Bob, nice to meet you! How are you doing today?
Human: whats my name?
AI: You said your name is Bob. Is that correct?
Human: whats for dinner?
AI:

> Finished chain.

" I'm sorry, I'm not sure what you're asking. Could you please rephrase your question?"
https://python.langchain.com/en/latest/modules/memory/examples/motorhead_memory.html
How to add Memory to an Agent#

This notebook goes over adding memory to an Agent. Before going through this notebook, please walk through the following notebooks, as this will build on top of both of them:

Adding memory to an LLM Chain
Custom Agents

In order to add a memory to an agent we are going to perform the following steps:

We are going to create an LLMChain with memory.
We are going to use that LLMChain to create a custom Agent.

For the purposes of this exercise, we are going to create a simple custom Agent that has access to a search tool and utilizes the ConversationBufferMemory class.

from langchain.agents import ZeroShotAgent, Tool, AgentExecutor
from langchain.memory import ConversationBufferMemory
from langchain import OpenAI, LLMChain
from langchain.utilities import GoogleSearchAPIWrapper

search = GoogleSearchAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events"
    )
]

Notice the usage of the chat_history variable in the PromptTemplate, which matches up with the dynamic key name in the ConversationBufferMemory.

prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""
suffix = """Begin!"

{chat_history}
Question: {input}
{agent_scratchpad}"""

prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "chat_history", "agent_scratchpad"]
)
memory = ConversationBufferMemory(memory_key="chat_history")

We can now construct the LLMChain, with the Memory object, and then create the agent.

llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent_chain = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=memory)

agent_chain.run(input="How many people live in canada?")

> Entering new AgentExecutor chain...
Thought: I need to find out the population of Canada
Action: Search
Action Input: Population of Canada
Observation: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada ... Additional information related to Canadian population trends can be found on Statistics Canada's Population and Demography Portal. Population of Canada (real- ... Index to the latest information from the Census of Population. This survey conducted by Statistics Canada provides a statistical portrait of Canada and its ... 14 records ... Estimated number of persons by quarter of a year and by year, Canada, provinces and territories. The 2021 Canadian census counted a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. ... Between 1990 and 2008, the ... ( 2 ) Census reports and other statistical publications from national statistical offices, ( 3 ) Eurostat: Demographic Statistics, ( 4 ) United Nations ... Canada is a country in North America. Its ten provinces and three territories extend from ... Population. • Q4 2022 estimate. 39,292,355 (37th). Information is available for the total Indigenous population and each of the three ... The term 'Aboriginal' or 'Indigenous' used on the Statistics Canada ... Jun 14, 2022 ... Determinants of health are the broad range of personal, social, economic and environmental factors that determine individual and population ... COVID-19 vaccination coverage across Canada by demographics and key populations. Updated every Friday at 12:00 PM Eastern Time.
Thought: I now know the final answer
Final Answer: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.
> Finished AgentExecutor chain.

'The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.'

To test the memory of this agent, we can ask a followup question that relies on information in the previous exchange to be answered correctly.

agent_chain.run(input="what is their national anthem called?")

> Entering new AgentExecutor chain...
Thought: I need to find out what the national anthem of Canada is called.
Observation: Jun 7, 2010 ... https://twitter.com/CanadaImmigrantCanadian National Anthem O Canada in HQ - complete with lyrics, captions, vocals & music.LYRICS:O Canada! Nov 23, 2022 ... After 100 years of tradition, O Canada was proclaimed Canada's national anthem in 1980. The music for O Canada was composed in 1880 by Calixa ... O Canada, national anthem of Canada. It was proclaimed the official national anthem on July 1, 1980. “God Save the Queen” remains the royal anthem of Canada ... O Canada! Our home and native land! True patriot love in all of us command. Car ton bras sait porter l'épée,. Il sait porter la croix! "O Canada" (French: Ô Canada) is the national anthem of Canada. The song was originally commissioned by Lieutenant Governor of Quebec Théodore Robitaille ... Feb 1, 2018 ... It was a simple tweak — just two words. But with that, Canada just voted to make its national anthem, “O Canada,” gender neutral, ... "O Canada" was proclaimed Canada's national anthem on July 1,. 1980, 100 years after it was first sung on June 24, 1880. The music. Patriotic music in Canada dates back over 200 years as a distinct category from British or French patriotism, preceding the first legal steps to ... Feb 4, 2022 ... English version: O Canada! Our home and native land! True patriot love in all of us command. With glowing hearts we ... Feb 1, 2018 ... Canada's Senate has passed a bill making the country's national anthem gender-neutral. If you're not familiar with the words to “O Canada,” ...
Thought: I now know the final answer. Final Answer: The national anthem of Canada is called "O Canada". > Finished AgentExecutor chain. 'The national anthem of Canada is called "O Canada".' We can see that the agent remembered that the previous question was about Canada, and properly asked Google Search what the name of Canada’s national anthem was. For fun, let’s compare this to an agent that does NOT have memory. prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:""" suffix = """Begin! Question: {input} {agent_scratchpad}""" prompt = ZeroShotAgent.create_prompt( tools, prefix=prefix, suffix=suffix, input_variables=["input", "agent_scratchpad"] ) llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt) agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True) agent_without_memory = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True) agent_without_memory.run("How many people live in canada?") > Entering new AgentExecutor chain... Thought: I need to find out the population of Canada Action: Search Action Input: Population of Canada
Observation: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada ... Additional information related to Canadian population trends can be found on Statistics Canada's Population and Demography Portal. Population of Canada (real- ... Index to the latest information from the Census of Population. This survey conducted by Statistics Canada provides a statistical portrait of Canada and its ... 14 records ... Estimated number of persons by quarter of a year and by year, Canada, provinces and territories. The 2021 Canadian census counted a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. ... Between 1990 and 2008, the ... ( 2 ) Census reports and other statistical publications from national statistical offices, ( 3 ) Eurostat: Demographic Statistics, ( 4 ) United Nations ... Canada is a country in North America. Its ten provinces and three territories extend from ... Population. • Q4 2022 estimate. 39,292,355 (37th). Information is available for the total Indigenous population and each of the three ... The term 'Aboriginal' or 'Indigenous' used on the Statistics Canada ... Jun 14, 2022 ... Determinants of health are the broad range of personal, social, economic and environmental factors that determine individual and population ... COVID-19 vaccination coverage across Canada by demographics and key populations. Updated every Friday at 12:00 PM Eastern Time. Thought: I now know the final answer Final Answer: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. > Finished AgentExecutor chain.
'The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.' agent_without_memory.run("what is their national anthem called?") > Entering new AgentExecutor chain... Thought: I should look up the answer Action: Search Action Input: national anthem of [country]
Observation: Most nation states have an anthem, defined as "a song, as of praise, devotion, or patriotism"; most anthems are either marches or hymns in style. List of all countries around the world with its national anthem. ... Title and lyrics in the language of the country and translated into English, Aug 1, 2021 ... 1. Afghanistan, "Milli Surood" (National Anthem) · 2. Armenia, "Mer Hayrenik" (Our Fatherland) · 3. Azerbaijan (a transcontinental country with ... A national anthem is a patriotic musical composition symbolizing and evoking eulogies of the history and traditions of a country or nation. National Anthem of Every Country ; Fiji, “Meda Dau Doka” (“God Bless Fiji”) ; Finland, “Maamme”. (“Our Land”) ; France, “La Marseillaise” (“The Marseillaise”). You can find an anthem in the menu at the top alphabetically or you can use the search feature. This site is focussed on the scholarly study of national anthems ... Feb 13, 2022 ... The 38-year-old country music artist had the honor of singing the National Anthem during this year's big game, and she did not disappoint. Oldest of the World's National Anthems ; France, La Marseillaise (“The Marseillaise”), 1795 ; Argentina, Himno Nacional Argentino (“Argentine National Anthem”) ... Mar 3, 2022 ... Country music star Jessie James Decker gained the respect of music and hockey fans alike after a jaw-dropping rendition of "The Star-Spangled ... This list shows the country on the left, the national anthem in the ... There are many countries over the world who have a national anthem of their own.
Thought: I now know the final answer Final Answer: The national anthem of [country] is [name of anthem]. > Finished AgentExecutor chain. 'The national anthem of [country] is [name of anthem].' previous How to add memory to a Multi-Input Chain next Adding Message Memory backed by a database to an Agent By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
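As a quick check, we can also inspect what the agent has accumulated after the two runs; a minimal sketch using the same buffer property shown in the other memory notebooks (not part of the original walkthrough):
# Both exchanges should now be present in the ConversationBufferMemory buffer
print(agent_chain.memory.buffer)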
https://python.langchain.com/en/latest/modules/memory/examples/agent_with_memory.html
69472c0b184d-0
.ipynb .pdf Momento Chat Message History Momento Chat Message History# This notebook goes over how to use Momento Cache to store chat message history using the MomentoChatMessageHistory class. See the Momento docs for more detail on how to get set up with Momento. Note that, by default, we will create a cache if one with the given name doesn’t already exist. You’ll need a Momento auth token to use this class. The token can either be passed directly to a momento.CacheClient if you’d like to instantiate that yourself, passed as the named parameter auth_token to MomentoChatMessageHistory.from_client_params, or set as the environment variable MOMENTO_AUTH_TOKEN. from datetime import timedelta from langchain.memory import MomentoChatMessageHistory session_id = "foo" cache_name = "langchain" ttl = timedelta(days=1) history = MomentoChatMessageHistory.from_client_params( session_id, cache_name, ttl, ) history.add_user_message("hi!") history.add_ai_message("whats up?") history.messages [HumanMessage(content='hi!', additional_kwargs={}, example=False), AIMessage(content='whats up?', additional_kwargs={}, example=False)] previous Entity Memory with SQLite storage next MongoDB Chat Message History By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
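A minimal sketch of the environment-variable route mentioned above (the token value here is a placeholder, not a real token):
import os
os.environ["MOMENTO_AUTH_TOKEN"] = "<your-momento-auth-token>"  # placeholder
# from_client_params will now pick the token up from the environment
history = MomentoChatMessageHistory.from_client_params(session_id, cache_name, ttl)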
https://python.langchain.com/en/latest/modules/memory/examples/momento_chat_message_history.html
2c3680cf9748-0
.ipynb .pdf DynamoDB Chat Message History Contents DynamoDBChatMessageHistory Agent with DynamoDB Memory DynamoDB Chat Message History# This notebook goes over how to use DynamoDB to store chat message history. First make sure you have correctly configured the AWS CLI. Then make sure you have installed boto3. Next, create the DynamoDB Table where we will be storing messages: import boto3 # Get the service resource. dynamodb = boto3.resource('dynamodb') # Create the DynamoDB table. table = dynamodb.create_table( TableName='SessionTable', KeySchema=[ { 'AttributeName': 'SessionId', 'KeyType': 'HASH' } ], AttributeDefinitions=[ { 'AttributeName': 'SessionId', 'AttributeType': 'S' } ], BillingMode='PAY_PER_REQUEST', ) # Wait until the table exists. table.meta.client.get_waiter('table_exists').wait(TableName='SessionTable') # Print out some data about the table. print(table.item_count) 0 DynamoDBChatMessageHistory# from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="0") history.add_user_message("hi!") history.add_ai_message("whats up?") history.messages [HumanMessage(content='hi!', additional_kwargs={}, example=False), AIMessage(content='whats up?', additional_kwargs={}, example=False)] Agent with DynamoDB Memory# from langchain.agents import Tool from langchain.memory import ConversationBufferMemory from langchain.chat_models import ChatOpenAI from langchain.agents import initialize_agent from langchain.agents import AgentType
from langchain.utilities import PythonREPL from getpass import getpass message_history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="1") memory = ConversationBufferMemory(memory_key="chat_history", chat_memory=message_history, return_messages=True) python_repl = PythonREPL() # You can create the tool to pass to an agent tools = [Tool( name="python_repl", description="A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.", func=python_repl.run )] llm=ChatOpenAI(temperature=0) agent_chain = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory) agent_chain.run(input="Hello!") > Entering new AgentExecutor chain... { "action": "Final Answer", "action_input": "Hello! How can I assist you today?" } > Finished chain. 'Hello! How can I assist you today?' agent_chain.run(input="Who owns Twitter?") > Entering new AgentExecutor chain... { "action": "python_repl", "action_input": "import requests\nfrom bs4 import BeautifulSoup\n\nurl = 'https://en.wikipedia.org/wiki/Twitter'\nresponse = requests.get(url)\nsoup = BeautifulSoup(response.content, 'html.parser')\nowner = soup.find('th', text='Owner').find_next_sibling('td').text.strip()\nprint(owner)" }
Observation: X Corp. (2023–present)Twitter, Inc. (2006–2023) Thought:{ "action": "Final Answer", "action_input": "X Corp. (2023–present)Twitter, Inc. (2006–2023)" } > Finished chain. 'X Corp. (2023–present)Twitter, Inc. (2006–2023)' agent_chain.run(input="My name is Bob.") > Entering new AgentExecutor chain... { "action": "Final Answer", "action_input": "Hello Bob! How can I assist you today?" } > Finished chain. 'Hello Bob! How can I assist you today?' agent_chain.run(input="Who am I?") > Entering new AgentExecutor chain... { "action": "Final Answer", "action_input": "Your name is Bob." } > Finished chain. 'Your name is Bob.' previous How to create a custom Memory class next Entity Memory with SQLite storage Contents DynamoDBChatMessageHistory Agent with DynamoDB Memory By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
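When you are finished experimenting, you may want to drop the demo table; a sketch using the standard boto3 resource API (this assumes the SessionTable created above and is not part of the original notebook):
# Delete the demo table and wait until the deletion completes
table = dynamodb.Table('SessionTable')
table.delete()
table.meta.client.get_waiter('table_not_exists').wait(TableName='SessionTable')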
https://python.langchain.com/en/latest/modules/memory/examples/dynamodb_chat_message_history.html
83124c681db5-0
.ipynb .pdf MongoDB Chat Message History MongoDB Chat Message History# This notebook goes over how to use MongoDB to store chat message history. MongoDB is a source-available cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional schemas. MongoDB is developed by MongoDB Inc. and licensed under the Server Side Public License (SSPL). - Wikipedia # Provide the connection string to connect to the MongoDB database connection_string = "mongodb://mongo_user:password123@mongo:27017" from langchain.memory import MongoDBChatMessageHistory message_history = MongoDBChatMessageHistory( connection_string=connection_string, session_id="test-session" ) message_history.add_user_message("hi!") message_history.add_ai_message("whats up?") message_history.messages [HumanMessage(content='hi!', additional_kwargs={}, example=False), AIMessage(content='whats up?', additional_kwargs={}, example=False)] previous Momento Chat Message History next Motörhead Memory By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
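To reset a session between runs, chat message history classes expose a clear method; a minimal sketch (assuming the message_history object created above):
# Remove all messages stored under this session id
message_history.clear()
message_history.messages  # now an empty list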
https://python.langchain.com/en/latest/modules/memory/examples/mongodb_chat_message_history.html
3a10ca3633cb-0
.ipynb .pdf Cassandra Chat Message History Cassandra Chat Message History# This notebook goes over how to use Cassandra to store chat message history. Cassandra is a distributed database that is well suited for storing large amounts of data. It is a good choice for storing chat message history because it is easy to scale and can handle a large number of writes. # List of contact points to try connecting to Cassandra cluster. contact_points = ["cassandra"] from langchain.memory import CassandraChatMessageHistory message_history = CassandraChatMessageHistory( contact_points=contact_points, session_id="test-session" ) message_history.add_user_message("hi!") message_history.add_ai_message("whats up?") message_history.messages [HumanMessage(content='hi!', additional_kwargs={}, example=False), AIMessage(content='whats up?', additional_kwargs={}, example=False)] previous Adding Message Memory backed by a database to an Agent next How to customize conversational memory By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
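Histories are keyed by session_id, so separate conversations stay isolated; a brief sketch reusing the contact points above (the second session id is an arbitrary choice):
other_history = CassandraChatMessageHistory(
    contact_points=contact_points,
    session_id="another-session"
)
other_history.messages  # empty, since nothing has been stored under this session id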
https://python.langchain.com/en/latest/modules/memory/examples/cassandra_chat_message_history.html
45768418ee43-0
.ipynb .pdf Adding Message Memory backed by a database to an Agent Adding Message Memory backed by a database to an Agent# This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walk through the following notebooks, as this will build on top of all of them: Adding memory to an LLM Chain Custom Agents Agent with Memory In order to add a memory with an external message store to an agent, we are going to take the following steps: We are going to create a RedisChatMessageHistory to connect to an external database to store the messages in. We are going to create an LLMChain using that chat history as memory. We are going to use that LLMChain to create a custom Agent. For the purposes of this exercise, we are going to create a simple custom Agent that has access to a search tool and utilizes the ConversationBufferMemory class. from langchain.agents import ZeroShotAgent, Tool, AgentExecutor from langchain.memory import ConversationBufferMemory from langchain.memory.chat_memory import ChatMessageHistory from langchain.memory.chat_message_histories import RedisChatMessageHistory from langchain import OpenAI, LLMChain from langchain.utilities import GoogleSearchAPIWrapper search = GoogleSearchAPIWrapper() tools = [ Tool( name="Search", func=search.run, description="useful for when you need to answer questions about current events" ) ] Notice the usage of the chat_history variable in the PromptTemplate, which matches up with the dynamic key name in the ConversationBufferMemory. prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:""" suffix = """Begin! {chat_history} Question: {input} {agent_scratchpad}"""
prompt = ZeroShotAgent.create_prompt( tools, prefix=prefix, suffix=suffix, input_variables=["input", "chat_history", "agent_scratchpad"] ) Now we can create the ChatMessageHistory backed by the database. message_history = RedisChatMessageHistory(url='redis://localhost:6379/0', ttl=600, session_id='my-session') memory = ConversationBufferMemory(memory_key="chat_history", chat_memory=message_history) We can now construct the LLMChain, with the Memory object, and then create the agent. llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt) agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True) agent_chain = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=memory) agent_chain.run(input="How many people live in canada?") > Entering new AgentExecutor chain... Thought: I need to find out the population of Canada Action: Search Action Input: Population of Canada
Observation: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada ... Additional information related to Canadian population trends can be found on Statistics Canada's Population and Demography Portal. Population of Canada (real- ... Index to the latest information from the Census of Population. This survey conducted by Statistics Canada provides a statistical portrait of Canada and its ... 14 records ... Estimated number of persons by quarter of a year and by year, Canada, provinces and territories. The 2021 Canadian census counted a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. ... Between 1990 and 2008, the ... ( 2 ) Census reports and other statistical publications from national statistical offices, ( 3 ) Eurostat: Demographic Statistics, ( 4 ) United Nations ... Canada is a country in North America. Its ten provinces and three territories extend from ... Population. • Q4 2022 estimate. 39,292,355 (37th). Information is available for the total Indigenous population and each of the three ... The term 'Aboriginal' or 'Indigenous' used on the Statistics Canada ... Jun 14, 2022 ... Determinants of health are the broad range of personal, social, economic and environmental factors that determine individual and population ... COVID-19 vaccination coverage across Canada by demographics and key populations. Updated every Friday at 12:00 PM Eastern Time. Thought: I now know the final answer Final Answer: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. > Finished AgentExecutor chain.
'The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.' To test the memory of this agent, we can ask a followup question that relies on information in the previous exchange to be answered correctly. agent_chain.run(input="what is their national anthem called?") > Entering new AgentExecutor chain... Thought: I need to find out what the national anthem of Canada is called. Action: Search Action Input: National Anthem of Canada
Observation: Jun 7, 2010 ... https://twitter.com/CanadaImmigrantCanadian National Anthem O Canada in HQ - complete with lyrics, captions, vocals & music.LYRICS:O Canada! Nov 23, 2022 ... After 100 years of tradition, O Canada was proclaimed Canada's national anthem in 1980. The music for O Canada was composed in 1880 by Calixa ... O Canada, national anthem of Canada. It was proclaimed the official national anthem on July 1, 1980. “God Save the Queen” remains the royal anthem of Canada ... O Canada! Our home and native land! True patriot love in all of us command. Car ton bras sait porter l'épée,. Il sait porter la croix! "O Canada" (French: Ô Canada) is the national anthem of Canada. The song was originally commissioned by Lieutenant Governor of Quebec Théodore Robitaille ... Feb 1, 2018 ... It was a simple tweak — just two words. But with that, Canada just voted to make its national anthem, “O Canada,” gender neutral, ... "O Canada" was proclaimed Canada's national anthem on July 1,. 1980, 100 years after it was first sung on June 24, 1880. The music. Patriotic music in Canada dates back over 200 years as a distinct category from British or French patriotism, preceding the first legal steps to ... Feb 4, 2022 ... English version: O Canada! Our home and native land! True patriot love in all of us command. With glowing hearts we ... Feb 1, 2018 ... Canada's Senate has passed a bill making the country's national anthem gender-neutral. If you're not familiar with the words to “O Canada,” ...
Thought: I now know the final answer. Final Answer: The national anthem of Canada is called "O Canada". > Finished AgentExecutor chain. 'The national anthem of Canada is called "O Canada".' We can see that the agent remembered that the previous question was about Canada, and properly asked Google Search what the name of Canada’s national anthem was. For fun, let’s compare this to an agent that does NOT have memory. prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:""" suffix = """Begin! Question: {input} {agent_scratchpad}""" prompt = ZeroShotAgent.create_prompt( tools, prefix=prefix, suffix=suffix, input_variables=["input", "agent_scratchpad"] ) llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt) agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True) agent_without_memory = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True) agent_without_memory.run("How many people live in canada?") > Entering new AgentExecutor chain... Thought: I need to find out the population of Canada Action: Search Action Input: Population of Canada
Observation: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada ... Additional information related to Canadian population trends can be found on Statistics Canada's Population and Demography Portal. Population of Canada (real- ... Index to the latest information from the Census of Population. This survey conducted by Statistics Canada provides a statistical portrait of Canada and its ... 14 records ... Estimated number of persons by quarter of a year and by year, Canada, provinces and territories. The 2021 Canadian census counted a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. ... Between 1990 and 2008, the ... ( 2 ) Census reports and other statistical publications from national statistical offices, ( 3 ) Eurostat: Demographic Statistics, ( 4 ) United Nations ... Canada is a country in North America. Its ten provinces and three territories extend from ... Population. • Q4 2022 estimate. 39,292,355 (37th). Information is available for the total Indigenous population and each of the three ... The term 'Aboriginal' or 'Indigenous' used on the Statistics Canada ... Jun 14, 2022 ... Determinants of health are the broad range of personal, social, economic and environmental factors that determine individual and population ... COVID-19 vaccination coverage across Canada by demographics and key populations. Updated every Friday at 12:00 PM Eastern Time. Thought: I now know the final answer Final Answer: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. > Finished AgentExecutor chain.
'The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.' agent_without_memory.run("what is their national anthem called?") > Entering new AgentExecutor chain... Thought: I should look up the answer Action: Search Action Input: national anthem of [country]
Observation: Most nation states have an anthem, defined as "a song, as of praise, devotion, or patriotism"; most anthems are either marches or hymns in style. List of all countries around the world with its national anthem. ... Title and lyrics in the language of the country and translated into English, Aug 1, 2021 ... 1. Afghanistan, "Milli Surood" (National Anthem) · 2. Armenia, "Mer Hayrenik" (Our Fatherland) · 3. Azerbaijan (a transcontinental country with ... A national anthem is a patriotic musical composition symbolizing and evoking eulogies of the history and traditions of a country or nation. National Anthem of Every Country ; Fiji, “Meda Dau Doka” (“God Bless Fiji”) ; Finland, “Maamme”. (“Our Land”) ; France, “La Marseillaise” (“The Marseillaise”). You can find an anthem in the menu at the top alphabetically or you can use the search feature. This site is focussed on the scholarly study of national anthems ... Feb 13, 2022 ... The 38-year-old country music artist had the honor of singing the National Anthem during this year's big game, and she did not disappoint. Oldest of the World's National Anthems ; France, La Marseillaise (“The Marseillaise”), 1795 ; Argentina, Himno Nacional Argentino (“Argentine National Anthem”) ... Mar 3, 2022 ... Country music star Jessie James Decker gained the respect of music and hockey fans alike after a jaw-dropping rendition of "The Star-Spangled ... This list shows the country on the left, the national anthem in the ... There are many countries over the world who have a national anthem of their own.
Thought: I now know the final answer Final Answer: The national anthem of [country] is [name of anthem]. > Finished AgentExecutor chain. 'The national anthem of [country] is [name of anthem].' previous How to add Memory to an Agent next Cassandra Chat Message History By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
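Because the messages live in Redis rather than in process memory, a separate process can resume the conversation by reconnecting with the same session id; a minimal sketch reusing the connection parameters above (valid while the 600-second TTL has not expired):
resumed_history = RedisChatMessageHistory(url='redis://localhost:6379/0', ttl=600, session_id='my-session')
print(resumed_history.messages)  # the Canada exchange persisted above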
https://python.langchain.com/en/latest/modules/memory/examples/agent_with_memory_in_db.html
b2ba0152ef30-0
.ipynb .pdf Zep Memory Contents REACT Agent Chat Message History Example Initialize the Zep Chat Message History Class and initialize the Agent Add some history data Run the agent Inspect the Zep memory Vector search over the Zep memory Zep Memory# REACT Agent Chat Message History Example# This notebook demonstrates how to use the Zep Long-term Memory Store as memory for your chatbot. We’ll demonstrate: Adding conversation history to the Zep memory store. Running an agent and having messages automatically added to the store. Viewing the enriched messages. Vector search over the conversation history. More on Zep: Zep stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs. Key Features: Long-term memory persistence, with access to historical messages irrespective of your summarization strategy. Auto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies. Vector search over memories, with messages automatically embedded on creation. Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly. Python and JavaScript SDKs. Zep project: getzep/zep Docs: https://getzep.github.io from langchain.memory.chat_message_histories import ZepChatMessageHistory from langchain.memory import ConversationBufferMemory from langchain import OpenAI from langchain.schema import HumanMessage, AIMessage from langchain.tools import DuckDuckGoSearchRun from langchain.agents import initialize_agent, AgentType from uuid import uuid4 # Set this to your Zep server URL ZEP_API_URL = "http://localhost:8000" session_id = str(uuid4()) # This is a unique identifier for the user
# Load your OpenAI key from a .env file from dotenv import load_dotenv load_dotenv() True Initialize the Zep Chat Message History Class and initialize the Agent# ddg = DuckDuckGoSearchRun() tools = [ddg] # Set up Zep Chat History zep_chat_history = ZepChatMessageHistory( session_id=session_id, url=ZEP_API_URL, ) # Use a standard ConversationBufferMemory to encapsulate the Zep chat history memory = ConversationBufferMemory( memory_key="chat_history", chat_memory=zep_chat_history ) # Initialize the agent llm = OpenAI(temperature=0) agent_chain = initialize_agent( tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory, ) Add some history data# # Preload some messages into the memory. The default message window is 12 messages. We want to push beyond this to demonstrate auto-summarization. test_history = [ {"role": "human", "content": "Who was Octavia Butler?"}, { "role": "ai", "content": ( "Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American" " science fiction author." ), }, {"role": "human", "content": "Which books of hers were made into movies?"}, { "role": "ai", "content": ( "The most well-known adaptation of Octavia Butler's work is the FX series"
" Kindred, based on her novel of the same name." ), }, {"role": "human", "content": "Who were her contemporaries?"}, { "role": "ai", "content": ( "Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R." " Delany, and Joanna Russ." ), }, {"role": "human", "content": "What awards did she win?"}, { "role": "ai", "content": ( "Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur" " Fellowship." ), }, { "role": "human", "content": "Which other women sci-fi writers might I want to read?", }, { "role": "ai", "content": "You might want to read Ursula K. Le Guin or Joanna Russ.", }, { "role": "human", "content": ( "Write a short synopsis of Butler's book, Parable of the Sower. What is it" " about?" ), }, { "role": "ai", "content": ( "Parable of the Sower is a science fiction novel by Octavia Butler," " published in 1993. It follows the story of Lauren Olamina, a young woman" " living in a dystopian future where society has collapsed due to" " environmental disasters, poverty, and violence." ), }, ] for msg in test_history:
zep_chat_history.append( HumanMessage(content=msg["content"]) if msg["role"] == "human" else AIMessage(content=msg["content"]) ) Run the agent# Doing so will automatically add the input and response to the Zep memory. agent_chain.run( input="What is the book's relevance to the challenges facing contemporary society?" ) > Entering new AgentExecutor chain... Thought: Do I need to use a tool? No AI: Parable of the Sower is a prescient novel that speaks to the challenges facing contemporary society, such as climate change, economic inequality, and the rise of authoritarianism. It is a cautionary tale that warns of the dangers of ignoring these issues and the importance of taking action to address them. > Finished chain. 'Parable of the Sower is a prescient novel that speaks to the challenges facing contemporary society, such as climate change, economic inequality, and the rise of authoritarianism. It is a cautionary tale that warns of the dangers of ignoring these issues and the importance of taking action to address them.' Inspect the Zep memory# Note the summary, and that the history has been enriched with token counts, UUIDs, and timestamps. Summaries are biased towards the most recent messages. def print_messages(messages): for m in messages: print(m.to_dict()) print(zep_chat_history.zep_summary) print("\n") print_messages(zep_chat_history.zep_messages) The conversation is about Octavia Butler. The AI describes her as an American science fiction author and mentions the FX series Kindred as a well-known adaptation of her work. The human then asks about her contemporaries, and the AI lists
Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ. {'role': 'human', 'content': 'What awards did she win?', 'uuid': '9fa75c3c-edae-41e3-b9bc-9fcf16b523c9', 'created_at': '2023-05-25T15:09:41.91662Z', 'token_count': 8} {'role': 'ai', 'content': 'Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.', 'uuid': 'def4636c-32cb-49ed-b671-32035a034712', 'created_at': '2023-05-25T15:09:41.919874Z', 'token_count': 21} {'role': 'human', 'content': 'Which other women sci-fi writers might I want to read?', 'uuid': '6e87bd4a-bc23-451e-ae36-05a140415270', 'created_at': '2023-05-25T15:09:41.923771Z', 'token_count': 14} {'role': 'ai', 'content': 'You might want to read Ursula K. Le Guin or Joanna Russ.', 'uuid': 'f65d8dde-9ee8-4983-9da6-ba789b7e8aa4', 'created_at': '2023-05-25T15:09:41.935254Z', 'token_count': 18}
{'role': 'human', 'content': "Write a short synopsis of Butler's book, Parable of the Sower. What is it about?", 'uuid': '5678d056-7f05-4e70-b8e5-f85efa56db01', 'created_at': '2023-05-25T15:09:41.938974Z', 'token_count': 23} {'role': 'ai', 'content': 'Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', 'uuid': '50d64946-9239-4327-83e6-71dcbdd16198', 'created_at': '2023-05-25T15:09:41.957437Z', 'token_count': 56} {'role': 'human', 'content': "What is the book's relevance to the challenges facing contemporary society?", 'uuid': 'a39cfc07-8858-480a-9026-fc47a8ef7001', 'created_at': '2023-05-25T15:09:50.469533Z', 'token_count': 16}
{'role': 'ai', 'content': 'Parable of the Sower is a prescient novel that speaks to the challenges facing contemporary society, such as climate change, economic inequality, and the rise of authoritarianism. It is a cautionary tale that warns of the dangers of ignoring these issues and the importance of taking action to address them.', 'uuid': 'a4ecf0fe-fdd0-4aad-b72b-efde2e6830cc', 'created_at': '2023-05-25T15:09:50.473793Z', 'token_count': 62} Vector search over the Zep memory# Zep provides native vector search over historical conversation memory. Embedding happens automatically. search_results = zep_chat_history.search("who are some famous women sci-fi authors?") for r in search_results: print(r.message, r.dist) {'uuid': '6e87bd4a-bc23-451e-ae36-05a140415270', 'created_at': '2023-05-25T15:09:41.923771Z', 'role': 'human', 'content': 'Which other women sci-fi writers might I want to read?', 'token_count': 14} 0.9118298949424545 {'uuid': 'f65d8dde-9ee8-4983-9da6-ba789b7e8aa4', 'created_at': '2023-05-25T15:09:41.935254Z', 'role': 'ai', 'content': 'You might want to read Ursula K. Le Guin or Joanna Russ.', 'token_count': 18} 0.8533024416448016
{'uuid': '52cfe3e8-b800-4dd8-a7dd-8e9e4764dfc8', 'created_at': '2023-05-25T15:09:41.913856Z', 'role': 'ai', 'content': "Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.", 'token_count': 27} 0.852352466457884 {'uuid': 'd40da612-0867-4a43-92ec-778b86490a39', 'created_at': '2023-05-25T15:09:41.858543Z', 'role': 'human', 'content': 'Who was Octavia Butler?', 'token_count': 8} 0.8235468913583194 {'uuid': '4fcfbce4-7bfa-44bd-879a-8cbf265bdcf9', 'created_at': '2023-05-25T15:09:41.893848Z', 'role': 'ai', 'content': 'Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American science fiction author.', 'token_count': 31} 0.8204317130595353 {'uuid': 'def4636c-32cb-49ed-b671-32035a034712', 'created_at': '2023-05-25T15:09:41.919874Z', 'role': 'ai', 'content': 'Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.', 'token_count': 21} 0.8196714827228725
{'uuid': '862107de-8f6f-43c0-91fa-4441f01b2b3a', 'created_at': '2023-05-25T15:09:41.898149Z', 'role': 'human', 'content': 'Which books of hers were made into movies?', 'token_count': 11} 0.7954322970428519 {'uuid': '97164506-90fe-4c71-9539-69ebcd1d90a2', 'created_at': '2023-05-25T15:09:41.90887Z', 'role': 'human', 'content': 'Who were her contemporaries?', 'token_count': 8} 0.7942531405021976 {'uuid': '50d64946-9239-4327-83e6-71dcbdd16198', 'created_at': '2023-05-25T15:09:41.957437Z', 'role': 'ai', 'content': 'Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', 'token_count': 56} 0.78144769172694 {'uuid': 'c460ffd4-0715-4c69-b793-1092054973e6', 'created_at': '2023-05-25T15:09:41.903082Z', 'role': 'ai', 'content': "The most well-known adaptation of Octavia Butler's work is the FX series Kindred, based on her novel of the same name.", 'token_count': 29} 0.7811962820699464
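Each result carries a dist score, so you can threshold the hits before passing them to a prompt; a small sketch over the search_results above (the 0.85 cutoff is an arbitrary assumption, and message is taken to be the dict shown in the printed output):
# Keep only the strongest matches
relevant = [r for r in search_results if r.dist > 0.85]
for r in relevant:
    print(r.message["content"])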
previous Redis Chat Message History next Indexes Contents REACT Agent Chat Message History Example Initialize the Zep Chat Message History Class and initialize the Agent Add some history data Run the agent Inspect the Zep memory Vector search over the Zep memory By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
https://python.langchain.com/en/latest/modules/memory/examples/zep_memory.html
9a3be0d4319e-0
.ipynb .pdf Motörhead Memory (Managed) Contents Setup Motörhead Memory (Managed)# Motörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications. Setup# See instructions at Motörhead for running the managed version of Motörhead. You can retrieve your api_key and client_id by creating an account on Metal. from langchain.memory.motorhead_memory import MotorheadMemory from langchain import OpenAI, LLMChain, PromptTemplate template = """You are a chatbot having a conversation with a human. {chat_history} Human: {human_input} AI:""" prompt = PromptTemplate( input_variables=["chat_history", "human_input"], template=template ) memory = MotorheadMemory( api_key="YOUR_API_KEY", client_id="YOUR_CLIENT_ID", session_id="testing-1", memory_key="chat_history" ) await memory.init() # loads previous state from Motörhead 🤘 llm_chain = LLMChain( llm=OpenAI(), prompt=prompt, verbose=True, memory=memory, ) llm_chain.run("hi im bob") > Entering new LLMChain chain... Prompt after formatting: You are a chatbot having a conversation with a human. Human: hi im bob AI: > Finished chain. ' Hi Bob, nice to meet you! How are you doing today?' llm_chain.run("whats my name?") > Entering new LLMChain chain... Prompt after formatting: You are a chatbot having a conversation with a human. Human: hi im bob
AI: Hi Bob, nice to meet you! How are you doing today? Human: whats my name? AI: > Finished chain. ' You said your name is Bob. Is that correct?' llm_chain.run("whats for dinner?") > Entering new LLMChain chain... Prompt after formatting: You are a chatbot having a conversation with a human. Human: hi im bob AI: Hi Bob, nice to meet you! How are you doing today? Human: whats my name? AI: You said your name is Bob. Is that correct? Human: whats for dinner? AI: > Finished chain. " I'm sorry, I'm not sure what you're asking. Could you please rephrase your question?" previous Motörhead Memory next How to use multiple memory classes in the same chain Contents Setup By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
https://python.langchain.com/en/latest/modules/memory/examples/motorhead_memory_managed.html
a4d58e901ad8-0
.ipynb .pdf Postgres Chat Message History Postgres Chat Message History# This notebook goes over how to use Postgres to store chat message history. from langchain.memory import PostgresChatMessageHistory history = PostgresChatMessageHistory(connection_string="postgresql://postgres:mypassword@localhost/chat_history", session_id="foo") history.add_user_message("hi!") history.add_ai_message("whats up?") history.messages previous How to use multiple memory classes in the same chain next Redis Chat Message History By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
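As with the other message-store integrations in this section, the Postgres-backed history can be handed to a memory class for use inside a chain; a minimal sketch mirroring the DynamoDB and Redis examples above:
from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory(memory_key="chat_history", chat_memory=history)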
https://python.langchain.com/en/latest/modules/memory/examples/postgres_chat_message_history.html
d21a5047ea82-0
.ipynb .pdf How to customize conversational memory Contents AI Prefix Human Prefix How to customize conversational memory# This notebook walks through a few ways to customize conversational memory. from langchain.llms import OpenAI from langchain.chains import ConversationChain from langchain.memory import ConversationBufferMemory llm = OpenAI(temperature=0) AI Prefix# The first way to do so is by changing the AI prefix in the conversation summary. By default, this is set to “AI”, but you can set this to be anything you want. Note that if you change this, you should also change the prompt used in the chain to reflect this naming change. Let’s walk through an example below. # Here it is by default set to "AI" conversation = ConversationChain( llm=llm, verbose=True, memory=ConversationBufferMemory() ) conversation.predict(input="Hi there!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there! AI: > Finished ConversationChain chain. " Hi there! It's nice to meet you. How can I help you today?" conversation.predict(input="What's the weather?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there!
AI: Hi there! It's nice to meet you. How can I help you today? Human: What's the weather? AI: > Finished ConversationChain chain. ' The current weather is sunny and warm with a temperature of 75 degrees Fahrenheit. The forecast for the next few days is sunny with temperatures in the mid-70s.' # Now we can override it and set it to "AI Assistant" from langchain.prompts.prompt import PromptTemplate template = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: {history} Human: {input} AI Assistant:""" PROMPT = PromptTemplate( input_variables=["history", "input"], template=template ) conversation = ConversationChain( prompt=PROMPT, llm=llm, verbose=True, memory=ConversationBufferMemory(ai_prefix="AI Assistant") ) conversation.predict(input="Hi there!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there! AI Assistant: > Finished ConversationChain chain. " Hi there! It's nice to meet you. How can I help you today?" conversation.predict(input="What's the weather?") > Entering new ConversationChain chain... Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there! AI Assistant: Hi there! It's nice to meet you. How can I help you today? Human: What's the weather? AI Assistant: > Finished ConversationChain chain. ' The current weather is sunny and warm with a temperature of 75 degrees Fahrenheit. The forecast for the rest of the day is sunny with a high of 78 degrees and a low of 65 degrees.' Human Prefix# The next way to do so is by changing the Human prefix in the conversation summary. By default, this is set to “Human”, but you can set this to be anything you want. Note that if you change this, you should also change the prompt used in the chain to reflect this naming change. Let's walk through an example below. # Now we can override it and set it to "Friend" from langchain.prompts.prompt import PromptTemplate template = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: {history} Friend: {input} AI:""" PROMPT = PromptTemplate( input_variables=["history", "input"], template=template ) conversation = ConversationChain( prompt=PROMPT, llm=llm, verbose=True, memory=ConversationBufferMemory(human_prefix="Friend") )
conversation.predict(input="Hi there!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Friend: Hi there! AI: > Finished ConversationChain chain. " Hi there! It's nice to meet you. How can I help you today?" conversation.predict(input="What's the weather?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Friend: Hi there! AI: Hi there! It's nice to meet you. How can I help you today? Friend: What's the weather? AI: > Finished ConversationChain chain. ' The weather right now is sunny and warm with a temperature of 75 degrees Fahrenheit. The forecast for the rest of the day is mostly sunny with a high of 82 degrees.' previous Cassandra Chat Message History next How to create a custom Memory class Contents AI Prefix Human Prefix By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
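The two overrides compose; a short sketch combining them (the prompt template must use the same names):
template = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
{history}
Friend: {input}
AI Assistant:"""
memory = ConversationBufferMemory(human_prefix="Friend", ai_prefix="AI Assistant")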
https://python.langchain.com/en/latest/modules/memory/examples/conversational_customization.html
fd51afc097b1-0
.ipynb .pdf How to add Memory to an LLMChain How to add Memory to an LLMChain# This notebook goes over how to use the Memory class with an LLMChain. For the purposes of this walkthrough, we will add the ConversationBufferMemory class, although this can be any memory class. from langchain.memory import ConversationBufferMemory from langchain import OpenAI, LLMChain, PromptTemplate The most important step is setting up the prompt correctly. In the below prompt, we have two input keys: one for the actual input, another for the input from the Memory class. Importantly, we make sure the keys in the PromptTemplate and the ConversationBufferMemory match up (chat_history). template = """You are a chatbot having a conversation with a human. {chat_history} Human: {human_input} Chatbot:""" prompt = PromptTemplate( input_variables=["chat_history", "human_input"], template=template ) memory = ConversationBufferMemory(memory_key="chat_history") llm_chain = LLMChain( llm=OpenAI(), prompt=prompt, verbose=True, memory=memory, ) llm_chain.predict(human_input="Hi there my friend") > Entering new LLMChain chain... Prompt after formatting: You are a chatbot having a conversation with a human. Human: Hi there my friend Chatbot: > Finished LLMChain chain. ' Hi there, how are you doing today?' llm_chain.predict(human_input="Not too bad - how are you?") > Entering new LLMChain chain... Prompt after formatting: You are a chatbot having a conversation with a human. Human: Hi there my friend AI: Hi there, how are you doing today?
Human: Not too bad - how are you? Chatbot: > Finished LLMChain chain. " I'm doing great, thank you for asking!" previous VectorStore-Backed Memory next How to add memory to a Multi-Input Chain By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
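To see exactly what the memory will inject into the next prompt, print the buffer; a quick sketch using the same property the multi-input notebook inspects:
print(memory.buffer)
# Human: Hi there my friend
# AI:  Hi there, how are you doing today?
# ...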
https://python.langchain.com/en/latest/modules/memory/examples/adding_memory.html
38a07313cca2-0
.ipynb .pdf Redis Chat Message History Redis Chat Message History# This notebook goes over how to use Redis to store chat message history. from langchain.memory import RedisChatMessageHistory history = RedisChatMessageHistory("foo") history.add_user_message("hi!") history.add_ai_message("whats up?") history.messages [AIMessage(content='whats up?', additional_kwargs={}), HumanMessage(content='hi!', additional_kwargs={})] previous Postgres Chat Message History next Zep Memory By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
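RedisChatMessageHistory("foo") above takes the session id positionally and falls back to a default local connection; the constructor also accepts an explicit url and ttl, as used in the database-backed agent notebook earlier in this section — a brief sketch:
history = RedisChatMessageHistory(session_id="foo", url="redis://localhost:6379/0", ttl=600)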
https://python.langchain.com/en/latest/modules/memory/examples/redis_chat_message_history.html
cb66eb099407-0
.ipynb .pdf How to add memory to a Multi-Input Chain How to add memory to a Multi-Input Chain# Most memory objects assume a single input. In this notebook, we go over how to add memory to a chain that has multiple inputs. As an example of such a chain, we will add memory to a question/answering chain. This chain takes as inputs both related documents and a user question. from langchain.embeddings.openai import OpenAIEmbeddings from langchain.text_splitter import CharacterTextSplitter from langchain.vectorstores import Chroma with open('../../state_of_the_union.txt') as f: state_of_the_union = f.read() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_text(state_of_the_union) embeddings = OpenAIEmbeddings() docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": i} for i in range(len(texts))]) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. query = "What did the president say about Justice Breyer" docs = docsearch.similarity_search(query) from langchain.chains.question_answering import load_qa_chain from langchain.llms import OpenAI from langchain.prompts import PromptTemplate from langchain.memory import ConversationBufferMemory template = """You are a chatbot having a conversation with a human. Given the following extracted parts of a long document and a question, create a final answer. {context} {chat_history} Human: {human_input}
Chatbot:""" prompt = PromptTemplate( input_variables=["chat_history", "human_input", "context"], template=template ) memory = ConversationBufferMemory(memory_key="chat_history", input_key="human_input") chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff", memory=memory, prompt=prompt) query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "human_input": query}, return_only_outputs=True) {'output_text': ' Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.'} print(chain.memory.buffer) Human: What did the president say about Justice Breyer AI: Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. previous How to add Memory to an LLMChain next How to add Memory to an Agent By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
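Because the first exchange is now in memory, a follow-up can refer back to it with a pronoun, just as in the agent examples above; a minimal sketch (output omitted):
chain({"input_documents": docs, "human_input": "What did the president thank him for?"}, return_only_outputs=True)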
https://python.langchain.com/en/latest/modules/memory/examples/adding_memory_chain_multiple_inputs.html
0f6fdc56eb3f-0
.ipynb .pdf How to create a custom Memory class How to create a custom Memory class# Although there are a few predefined types of memory in LangChain, it is highly possible you will want to add your own type of memory that is optimal for your application. This notebook covers how to do that. For this notebook, we will add a custom memory type to ConversationChain. In order to add a custom memory class, we need to import the base memory class and subclass it. from langchain import OpenAI, ConversationChain from langchain.schema import BaseMemory from pydantic import BaseModel from typing import List, Dict, Any In this example, we will write a custom memory class that uses spacy to extract entities and save information about them in a simple hash table. Then, during the conversation, we will look at the input text, extract any entities, and put any information about them into the context. Please note that this implementation is pretty simple and brittle and probably not useful in a production setting. Its purpose is to showcase that you can add custom memory implementations. For this, we will need spacy. # !pip install spacy # !python -m spacy download en_core_web_lg import spacy nlp = spacy.load('en_core_web_lg') class SpacyEntityMemory(BaseMemory, BaseModel): """Memory class for storing information about entities.""" # Define dictionary to store information about entities. entities: dict = {} # Define key to pass information about entities into prompt. memory_key: str = "entities" def clear(self): self.entities = {} @property def memory_variables(self) -> List[str]: """Define the variables we are providing to the prompt.""" return [self.memory_key]
https://python.langchain.com/en/latest/modules/memory/examples/custom_memory.html
0f6fdc56eb3f-1
return [self.memory_key] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]: """Load the memory variables, in this case the entity key.""" # Get the input text and run through spacy doc = nlp(inputs[list(inputs.keys())[0]]) # Extract known information about entities, if they exist. entities = [self.entities[str(ent)] for ent in doc.ents if str(ent) in self.entities] # Return combined information about entities to put into context. return {self.memory_key: "\n".join(entities)} def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None: """Save context from this conversation to buffer.""" # Get the input text and run through spacy text = inputs[list(inputs.keys())[0]] doc = nlp(text) # For each entity that was mentioned, save this information to the dictionary. for ent in doc.ents: ent_str = str(ent) if ent_str in self.entities: self.entities[ent_str] += f"\n{text}" else: self.entities[ent_str] = text We now define a prompt that takes in information about entities as well as user input from langchain.prompts.prompt import PromptTemplate template = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. You are provided with information about entities the Human mentions, if relevant. Relevant entity information: {entities} Conversation: Human: {input} AI:""" prompt = PromptTemplate(
https://python.langchain.com/en/latest/modules/memory/examples/custom_memory.html
0f6fdc56eb3f-2
Conversation: Human: {input} AI:""" prompt = PromptTemplate( input_variables=["entities", "input"], template=template ) And now we put it all together! llm = OpenAI(temperature=0) conversation = ConversationChain(llm=llm, prompt=prompt, verbose=True, memory=SpacyEntityMemory()) In the first example, with no prior knowledge about Harrison, the “Relevant entity information” section is empty. conversation.predict(input="Harrison likes machine learning") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. You are provided with information about entities the Human mentions, if relevant. Relevant entity information: Conversation: Human: Harrison likes machine learning AI: > Finished ConversationChain chain. " That's great to hear! Machine learning is a fascinating field of study. It involves using algorithms to analyze data and make predictions. Have you ever studied machine learning, Harrison?" Now in the second example, we can see that it pulls in information about Harrison. conversation.predict(input="What do you think Harrison's favorite subject in college was?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. You are provided with information about entities the Human mentions, if relevant. Relevant entity information: Harrison likes machine learning Conversation: Human: What do you think Harrison's favorite subject in college was?
https://python.langchain.com/en/latest/modules/memory/examples/custom_memory.html
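To make the entity-keying above concrete, here is a small hedged check of what spacy actually extracts from the first message (the result shown is typical for en_core_web_lg, but NER output can vary by model version):

# Inspect spacy's entity extraction directly (illustrative).
doc = nlp("Harrison likes machine learning")
print([(ent.text, ent.label_) for ent in doc.ents])
# Typically: [('Harrison', 'PERSON')] -- "machine learning" is usually not
# tagged as an entity, which is why only "Harrison" becomes a key in entities.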
0f6fdc56eb3f-3
Conversation: Human: What do you think Harrison's favorite subject in college was? AI: > Finished ConversationChain chain. ' From what I know about Harrison, I believe his favorite subject in college was machine learning. He has expressed a strong interest in the subject and has mentioned it often.' Again, please note that this implementation is pretty simple and brittle and probably not useful in a production setting. Its purpose is to showcase that you can add custom memory implementations. previous How to customize conversational memory next Dynamodb Chat Message History By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
https://python.langchain.com/en/latest/modules/memory/examples/custom_memory.html
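If the spacy machinery obscures the interface itself, here is a minimal sketch of the smallest possible BaseMemory subclass — StaticMemory is a hypothetical name, used only to isolate the members a custom memory must implement:

from typing import Any, Dict, List
from langchain.schema import BaseMemory
from pydantic import BaseModel

class StaticMemory(BaseMemory, BaseModel):
    """Hypothetical memory that always injects the same string into the prompt."""
    memory_key: str = "context"
    content: str = "The user prefers concise answers."

    @property
    def memory_variables(self) -> List[str]:
        # Names of the variables this memory supplies to the prompt.
        return [self.memory_key]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        # Called before each turn to fill in the prompt variables.
        return {self.memory_key: self.content}

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        # Called after each turn; a static memory has nothing to persist.
        pass

    def clear(self) -> None:
        pass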
4dde306e2b73-0
.ipynb .pdf VectorStore-Backed Memory Contents Initialize your VectorStore Create your VectorStoreRetrieverMemory Using in a chain VectorStore-Backed Memory# VectorStoreRetrieverMemory stores memories in a VectorDB and queries the top-K most “salient” docs every time it is called. This differs from most of the other Memory classes in that it doesn’t explicitly track the order of interactions. In this case, the “docs” are previous conversation snippets. This can be useful to refer to relevant pieces of information that the AI was told earlier in the conversation. from datetime import datetime from langchain.embeddings.openai import OpenAIEmbeddings from langchain.llms import OpenAI from langchain.memory import VectorStoreRetrieverMemory from langchain.chains import ConversationChain from langchain.prompts import PromptTemplate Initialize your VectorStore# Depending on the store you choose, this step may look different. Consult the relevant VectorStore documentation for more details. import faiss from langchain.docstore import InMemoryDocstore from langchain.vectorstores import FAISS embedding_size = 1536 # Dimensions of the OpenAIEmbeddings index = faiss.IndexFlatL2(embedding_size) embedding_fn = OpenAIEmbeddings().embed_query vectorstore = FAISS(embedding_fn, index, InMemoryDocstore({}), {}) Create your VectorStoreRetrieverMemory# The memory object is instantiated from any VectorStoreRetriever. # In actual usage, you would set `k` to be a higher value, but we use k=1 to show that # the vector lookup still returns the semantically relevant information retriever = vectorstore.as_retriever(search_kwargs=dict(k=1)) memory = VectorStoreRetrieverMemory(retriever=retriever)
https://python.langchain.com/en/latest/modules/memory/types/vectorstore_retriever_memory.html
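Since this step depends on the store you choose, here is a hedged sketch of the same setup backed by Chroma instead of FAISS (assumes the chromadb package is installed; the constructor shown reflects the API as of this documentation's date):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.memory import VectorStoreRetrieverMemory
from langchain.vectorstores import Chroma

# An in-memory Chroma collection as the backing store (illustrative).
vectorstore = Chroma(embedding_function=OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs=dict(k=2))  # k > 1 in real use
memory = VectorStoreRetrieverMemory(retriever=retriever)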
4dde306e2b73-1
memory = VectorStoreRetrieverMemory(retriever=retriever) # When added to an agent, the memory object can save pertinent information from conversations or from tools it has used memory.save_context({"input": "My favorite food is pizza"}, {"output": "thats good to know"}) memory.save_context({"input": "My favorite sport is soccer"}, {"output": "..."}) memory.save_context({"input": "I don't like the Celtics"}, {"output": "ok"}) # Notice the first result returned is the memory about the user's favorite sport, which the language model deems the most # semantically relevant to the question, even though all three saved exchanges are candidates. print(memory.load_memory_variables({"prompt": "what sport should i watch?"})["history"]) input: My favorite sport is soccer output: ... Using in a chain# Let’s walk through an example, again setting verbose=True so we can see the prompt. llm = OpenAI(temperature=0) # Can be any valid LLM _DEFAULT_TEMPLATE = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Relevant pieces of previous conversation: {history} (You do not need to use these pieces of information if not relevant) Current conversation: Human: {input} AI:""" PROMPT = PromptTemplate( input_variables=["history", "input"], template=_DEFAULT_TEMPLATE ) conversation_with_summary = ConversationChain( llm=llm, prompt=PROMPT, # We pass in the vector-store-backed memory configured above. memory=memory, verbose=True )
https://python.langchain.com/en/latest/modules/memory/types/vectorstore_retriever_memory.html
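Each save_context call above writes the exchange into the vectorstore as an ordinary document, so you can sanity-check what was stored by querying the store directly — a hedged sketch reusing the FAISS vectorstore from this page:

# The saved exchanges are plain documents in the store (illustrative check).
stored = vectorstore.similarity_search("what food do I like?", k=1)
print(stored[0].page_content)
# Expected (roughly): "input: My favorite food is pizza\noutput: thats good to know"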
4dde306e2b73-2
memory=memory, verbose=True ) conversation_with_summary.predict(input="Hi, my name is Perry, what's up?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Relevant pieces of previous conversation: input: My favorite food is pizza output: thats good to know (You do not need to use these pieces of information if not relevant) Current conversation: Human: Hi, my name is Perry, what's up? AI: > Finished chain. " Hi Perry, I'm doing well. How about you?" # Here, the soccer-related content is surfaced conversation_with_summary.predict(input="what's my favorite sport?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Relevant pieces of previous conversation: input: My favorite sport is soccer output: ... (You do not need to use these pieces of information if not relevant) Current conversation: Human: what's my favorite sport? AI: > Finished chain. ' You told me earlier that your favorite sport is soccer.' # Even though the language model is stateless, since relevant memory is fetched, it can "reason" about the time. # Timestamping memories and data is useful in general to let the agent determine temporal relevance conversation_with_summary.predict(input="Whats my favorite food") > Entering new ConversationChain chain...
https://python.langchain.com/en/latest/modules/memory/types/vectorstore_retriever_memory.html
4dde306e2b73-3
> Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Relevant pieces of previous conversation: input: My favorite food is pizza output: thats good to know (You do not need to use these pieces of information if not relevant) Current conversation: Human: Whats my favorite food AI: > Finished chain. ' You said your favorite food is pizza.' # The memories from the conversation are stored automatically; # since this query best matches the introductory chat above, # the agent is able to 'remember' the user's name. conversation_with_summary.predict(input="What's my name?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Relevant pieces of previous conversation: input: Hi, my name is Perry, what's up? response: Hi Perry, I'm doing well. How about you? (You do not need to use these pieces of information if not relevant) Current conversation: Human: What's my name? AI: > Finished chain. ' Your name is Perry.' previous ConversationTokenBufferMemory next How to add Memory to an LLMChain Contents Initialize your VectorStore Create your VectorStoreRetrieverMemory Using in a chain By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
https://python.langchain.com/en/latest/modules/memory/types/vectorstore_retriever_memory.html
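The comments above mention timestamping memories for temporal relevance, but the page never shows it (note the unused datetime import). One hedged way to do it — prefixing the saved input is an illustrative convention, not a library mechanism:

from datetime import datetime

# Prepend a timestamp so later retrievals can weigh recency (illustrative).
now = datetime.now().isoformat(timespec="seconds")
memory.save_context(
    {"input": f"[{now}] I have a dentist appointment next Tuesday"},
    {"output": "noted"},
)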
95d410d08f21-0
.ipynb .pdf ConversationTokenBufferMemory Contents Using in a chain ConversationTokenBufferMemory# ConversationTokenBufferMemory keeps a buffer of recent interactions in memory, and uses token length rather than number of interactions to determine when to flush interactions. Let’s first walk through how to use the utilities from langchain.memory import ConversationTokenBufferMemory from langchain.llms import OpenAI llm = OpenAI() memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=10) memory.save_context({"input": "hi"}, {"output": "whats up"}) memory.save_context({"input": "not much you"}, {"output": "not much"}) memory.load_memory_variables({}) {'history': 'Human: not much you\nAI: not much'} We can also get the history as a list of messages (this is useful if you are using this with a chat model). memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=10, return_messages=True) memory.save_context({"input": "hi"}, {"output": "whats up"}) memory.save_context({"input": "not much you"}, {"output": "not much"}) Using in a chain# Let’s walk through an example, again setting verbose=True so we can see the prompt. from langchain.chains import ConversationChain conversation_with_summary = ConversationChain( llm=llm, # We set a very low max_token_limit for the purposes of testing. memory=ConversationTokenBufferMemory(llm=OpenAI(), max_token_limit=60), verbose=True ) conversation_with_summary.predict(input="Hi, what's up?") > Entering new ConversationChain chain... Prompt after formatting:
https://python.langchain.com/en/latest/modules/memory/types/token_buffer.html
95d410d08f21-1
> Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi, what's up? AI: > Finished chain. " Hi there! I'm doing great, just enjoying the day. How about you?" conversation_with_summary.predict(input="Just working on writing some documentation!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi, what's up? AI: Hi there! I'm doing great, just enjoying the day. How about you? Human: Just working on writing some documentation! AI: > Finished chain. ' Sounds like a productive day! What kind of documentation are you writing?' conversation_with_summary.predict(input="For LangChain! Have you heard of it?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi, what's up? AI: Hi there! I'm doing great, just enjoying the day. How about you? Human: Just working on writing some documentation! AI: Sounds like a productive day! What kind of documentation are you writing?
https://python.langchain.com/en/latest/modules/memory/types/token_buffer.html
95d410d08f21-2
AI: Sounds like a productive day! What kind of documentation are you writing? Human: For LangChain! Have you heard of it? AI: > Finished chain. " Yes, I have heard of LangChain! It is a decentralized language-learning platform that connects native speakers and learners in real time. Is that the documentation you're writing about?" # We can see here that the buffer is updated conversation_with_summary.predict(input="Haha nope, although a lot of people confuse it for that") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: For LangChain! Have you heard of it? AI: Yes, I have heard of LangChain! It is a decentralized language-learning platform that connects native speakers and learners in real time. Is that the documentation you're writing about? Human: Haha nope, although a lot of people confuse it for that AI: > Finished chain. " Oh, I see. Is there another language learning platform you're referring to?" previous ConversationSummaryBufferMemory next VectorStore-Backed Memory Contents Using in a chain By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
https://python.langchain.com/en/latest/modules/memory/types/token_buffer.html
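To see the token-based flushing concretely, here is a hedged sketch that extends the ten-token example from the top of this page; the exact cut point depends on the tokenizer, but the oldest turns should drop out once the limit is exceeded:

# Rebuild the ten-token memory and push it past its limit (illustrative).
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=10)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
memory.save_context({"input": "tell me about yourself"}, {"output": "I'm an AI"})
print(memory.load_memory_variables({})["history"])
# With only ~10 tokens allowed, earlier exchanges are pruned and (roughly)
# just the newest exchange survives in the buffer.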
5be7a656b592-0
.ipynb .pdf ConversationSummaryBufferMemory Contents Using in a chain ConversationSummaryBufferMemory# ConversationSummaryBufferMemory combines the previous two ideas: it keeps a buffer of recent interactions in memory, but rather than completely flushing old interactions, it compiles them into a summary and uses both. Unlike the previous implementation, though, it uses token length rather than the number of interactions to determine when to flush interactions. Let’s first walk through how to use the utilities from langchain.memory import ConversationSummaryBufferMemory from langchain.llms import OpenAI llm = OpenAI() memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=10) memory.save_context({"input": "hi"}, {"output": "whats up"}) memory.save_context({"input": "not much you"}, {"output": "not much"}) memory.load_memory_variables({}) {'history': 'System: \nThe human says "hi", and the AI responds with "whats up".\nHuman: not much you\nAI: not much'} We can also get the history as a list of messages (this is useful if you are using this with a chat model). memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=10, return_messages=True) memory.save_context({"input": "hi"}, {"output": "whats up"}) memory.save_context({"input": "not much you"}, {"output": "not much"}) We can also utilize the predict_new_summary method directly. messages = memory.chat_memory.messages previous_summary = "" memory.predict_new_summary(messages, previous_summary) '\nThe human and AI state that they are not doing much.' Using in a chain# Let’s walk through an example, again setting verbose=True so we can see the prompt. from langchain.chains import ConversationChain conversation_with_summary = ConversationChain(
https://python.langchain.com/en/latest/modules/memory/types/summary_buffer.html
5be7a656b592-1
from langchain.chains import ConversationChain conversation_with_summary = ConversationChain( llm=llm, # We set a very low max_token_limit for the purposes of testing. memory=ConversationSummaryBufferMemory(llm=OpenAI(), max_token_limit=40), verbose=True ) conversation_with_summary.predict(input="Hi, what's up?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi, what's up? AI: > Finished chain. " Hi there! I'm doing great. I'm learning about the latest advances in artificial intelligence. What about you?" conversation_with_summary.predict(input="Just working on writing some documentation!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi, what's up? AI: Hi there! I'm doing great. I'm spending some time learning about the latest developments in AI technology. How about you? Human: Just working on writing some documentation! AI: > Finished chain. ' That sounds like a great use of your time. Do you have experience with writing documentation?' # We can see here that there is a summary of the conversation and then some previous interactions conversation_with_summary.predict(input="For LangChain! Have you heard of it?") > Entering new ConversationChain chain...
https://python.langchain.com/en/latest/modules/memory/types/summary_buffer.html
5be7a656b592-2
> Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: System: The human asked the AI what it was up to and the AI responded that it was learning about the latest developments in AI technology. Human: Just working on writing some documentation! AI: That sounds like a great use of your time. Do you have experience with writing documentation? Human: For LangChain! Have you heard of it? AI: > Finished chain. " No, I haven't heard of LangChain. Can you tell me more about it?" # We can see here that the summary and the buffer are updated conversation_with_summary.predict(input="Haha nope, although a lot of people confuse it for that") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: System: The human asked the AI what it was up to and the AI responded that it was learning about the latest developments in AI technology. The human then mentioned they were writing documentation, to which the AI responded that it sounded like a great use of their time and asked if they had experience with writing documentation. Human: For LangChain! Have you heard of it? AI: No, I haven't heard of LangChain. Can you tell me more about it? Human: Haha nope, although a lot of people confuse it for that AI:
https://python.langchain.com/en/latest/modules/memory/types/summary_buffer.html
5be7a656b592-3
Human: Haha nope, although a lot of people confuse it for that AI: > Finished chain. ' Oh, okay. What is LangChain?' previous ConversationSummaryMemory next ConversationTokenBufferMemory Contents Using in a chain By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 07, 2023.
https://python.langchain.com/en/latest/modules/memory/types/summary_buffer.html
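If you want to inspect the rolling summary itself rather than the formatted history, the memory object keeps it on an attribute — to the best of my knowledge it is named moving_summary_buffer in this version of LangChain, but treat that name as an assumption that may change across releases:

# Peek at the memory's internals (attribute name is an assumption for
# this LangChain version; verify against your installed release).
summary_memory = conversation_with_summary.memory
print(summary_memory.moving_summary_buffer)  # the compiled "System:" summary
print(summary_memory.chat_memory.messages)   # the raw buffered turns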