f37c475f5146-0
Pinecone#

This notebook shows how to use functionality related to the Pinecone vector database.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Pinecone
from langchain.document_loaders import TextLoader

loader = TextLoader('../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()

import pinecone

# initialize pinecone
pinecone.init(
    api_key="YOUR_API_KEY",  # find at app.pinecone.io
    environment="YOUR_ENV"  # next to api key in console
)

index_name = "langchain-demo"

docsearch = Pinecone.from_documents(docs, embeddings, index_name=index_name)

query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)

print(docs[0].page_content)
https://langchain.readthedocs.io/en/latest/modules/indexes/vectorstore_examples/pinecone.html
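As a quick extension of the example above: similarity_search also accepts a k parameter controlling how many chunks come back. A minimal sketch, treating the exact signature as an assumption for your installed version:

docs = docsearch.similarity_search(query, k=2)
for doc in docs:
    # preview the first 100 characters of each hit
    print(doc.page_content[:100])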
732c4d325943-0
ElasticSearch#

This notebook shows how to use functionality related to the ElasticSearch database.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import ElasticVectorSearch
from langchain.document_loaders import TextLoader

loader = TextLoader('../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()

db = ElasticVectorSearch.from_documents(docs, embeddings, elasticsearch_url="http://localhost:9200")

query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)

print(docs[0].page_content)

In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
https://langchain.readthedocs.io/en/latest/modules/indexes/vectorstore_examples/elasticsearch.html
732c4d325943-1
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
https://langchain.readthedocs.io/en/latest/modules/indexes/vectorstore_examples/elasticsearch.html
b6b30973a376-0
VectorStores#

This notebook showcases basic functionality related to VectorStores. A key part of working with vectorstores is creating the vectors to put in them, which are usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the embedding notebook before diving into this.

This covers generic high-level functionality related to all vector stores. For guides on specific vectorstores, please see the how-to guides here.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

with open('../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)

embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings)

query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

print(docs[0].page_content)

In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/vectorstores.html
b6b30973a376-1
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

Add texts#

You can easily add text to a vectorstore with the add_texts method. It will return a list of document IDs (in case you need to use them downstream).

docsearch.add_texts(["Ankush went to Princeton"])

['a05e3d0c-ab40-11ed-a853-e65801318981']

query = "Where did Ankush go to college?"
docs = docsearch.similarity_search(query)
docs[0]

Document(page_content='Ankush went to Princeton', lookup_str='', metadata={}, lookup_index=0)

From Documents#

We can also initialize a vectorstore from documents directly. This is useful when we use the text splitter method that returns documents directly (handy when the original documents have associated metadata).

documents = text_splitter.create_documents([state_of_the_union], metadatas=[{"source": "State of the Union"}])
docsearch = Chroma.from_documents(documents, embeddings)

query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

print(docs[0].page_content)
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/vectorstores.html
b6b30973a376-2
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/vectorstores.html
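Rounding out the add_texts example above: the base vectorstore interface also lets add_texts carry per-text metadata. A minimal sketch, assuming your vectorstore’s add_texts accepts a metadatas parameter (as the base interface does):

ids = docsearch.add_texts(
    ["Ankush went to Princeton"],
    # one metadata dict per text, aligned by position
    metadatas=[{"source": "hypothetical-example"}],
)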
eb846bfd8f86-0
Hypothetical Document Embeddings#

This notebook goes over how to use Hypothetical Document Embeddings (HyDE), as described in this paper. At a high level, HyDE is an embedding technique that takes queries, generates a hypothetical answer, embeds that generated document, and uses the resulting embedding for retrieval. In order to use HyDE, we therefore need to provide a base embedding model, as well as an LLMChain that can be used to generate those documents. By default, the HyDE class comes with some default prompts to use (see the paper for more details on them), but we can also create our own.

from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import LLMChain, HypotheticalDocumentEmbedder
from langchain.prompts import PromptTemplate

base_embeddings = OpenAIEmbeddings()
llm = OpenAI()

# Load with `web_search` prompt
embeddings = HypotheticalDocumentEmbedder.from_llm(llm, base_embeddings, "web_search")

# Now we can use it as any embedding class!
result = embeddings.embed_query("Where is the Taj Mahal?")

Multiple generations#

We can also generate multiple documents and then combine their embeddings. By default, we combine those by taking the average. We can do this by changing the LLM we use to generate the documents so that it returns multiple generations.

multi_llm = OpenAI(n=4, best_of=4)
embeddings = HypotheticalDocumentEmbedder.from_llm(multi_llm, base_embeddings, "web_search")
result = embeddings.embed_query("Where is the Taj Mahal?")
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/hyde.html
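For illustration, here is the “combine by taking the average” step described above done by hand with numpy; HypotheticalDocumentEmbedder does the equivalent internally when the LLM returns multiple generations, so this sketch is not LangChain’s actual code:

import numpy as np

def average_embeddings(embedding_list):
    # element-wise mean across the generated documents' embeddings
    return np.mean(np.asarray(embedding_list), axis=0).tolist()

average_embeddings([[1.0, 0.0], [0.0, 1.0]])  # [0.5, 0.5]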
eb846bfd8f86-1
Using our own prompts#

Besides using preconfigured prompts, we can also easily construct our own prompts and use those in the LLMChain that is generating the documents. This can be useful if we know the domain our queries will be in, as we can condition the prompt to generate text more similar to that domain. In the example below, let’s condition it to generate text about a state of the union address (because we will use that in the next example).

prompt_template = """Please answer the user's question about the most recent state of the union address
Question: {question}
Answer:"""
prompt = PromptTemplate(input_variables=["question"], template=prompt_template)
llm_chain = LLMChain(llm=llm, prompt=prompt)
embeddings = HypotheticalDocumentEmbedder(llm_chain=llm_chain, base_embeddings=base_embeddings)

result = embeddings.embed_query("What did the president say about Ketanji Brown Jackson")

Using HyDE#

Now that we have HyDE, we can use it as we would any other embedding class! Here we use it to find similar passages in the state of the union example.

from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

with open('../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)

docsearch = Chroma.from_texts(texts, embeddings)

query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/hyde.html
eb846bfd8f86-2
print(docs[0].page_content)

In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/hyde.html
cb2147dc524f-0
Text Splitter#

When you want to deal with long pieces of text, it is necessary to split up that text into chunks. As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What “semantically related” means could depend on the type of text. This notebook showcases several ways to do that.

At a high level, text splitters work as follows:

Split the text up into small, semantically meaningful chunks (often sentences).
Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).
Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).

That means there are two different axes along which you can customize your text splitter:

How the text is split
How the chunk size is measured

For all the examples below, we will highlight both of these attributes. A hand-rolled sketch of the combine step appears just after this introduction.

# This is a long document we can split up.
with open('../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/textsplitter.html
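To make the combine step above concrete, here is the hand-rolled sketch referenced in the introduction (an illustration of the idea only, not LangChain’s actual implementation): greedily pack small pieces into chunks of at most chunk_size characters, carrying chunk_overlap characters forward into the next chunk.

def merge_pieces(pieces, chunk_size, chunk_overlap):
    chunks, current = [], ""
    for piece in pieces:
        if current and len(current) + len(piece) > chunk_size:
            # current chunk is full: emit it and seed the next one with overlap
            chunks.append(current)
            current = current[-chunk_overlap:] if chunk_overlap else ""
        current += piece
    if current:
        chunks.append(current)
    return chunks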
cb2147dc524f-1
Generic Recursive Text Splitting#

This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.

How the text is split: by list of characters
How the chunk size is measured: by length function passed in (defaults to number of characters)

from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(
    # Set a really small chunk size, just to show.
    chunk_size = 100,
    chunk_overlap = 20,
    length_function = len,
)

texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
print(texts[1])

page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet.' lookup_str='' metadata={} lookup_index=0
page_content='and the Cabinet. Justices of the Supreme Court. My fellow Americans.' lookup_str='' metadata={} lookup_index=0
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/textsplitter.html
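Because the chunk size is measured by whatever callable you pass as length_function, you can just as easily split by word count instead of characters. A minimal sketch under that assumption:

from langchain.text_splitter import RecursiveCharacterTextSplitter

def word_count(text: str) -> int:
    # measure chunk size in whitespace-separated words
    return len(text.split())

word_splitter = RecursiveCharacterTextSplitter(
    chunk_size=50,        # now measured in words, not characters
    chunk_overlap=10,
    length_function=word_count,
)
word_texts = word_splitter.create_documents([state_of_the_union])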
cb2147dc524f-2
Markdown Text Splitter#

MarkdownTextSplitter splits text along Markdown headings, code blocks, or horizontal rules. It’s implemented as a simple subclass of RecursiveCharacterTextSplitter with Markdown-specific separators. See the source code to see the Markdown syntax expected by default.

How the text is split: by list of Markdown-specific characters
How the chunk size is measured: by length function passed in (defaults to number of characters)

from langchain.text_splitter import MarkdownTextSplitter

markdown_text = """
# 🦜️🔗 LangChain

⚡ Building applications with LLMs through composability ⚡

## Quick Install

```bash
# Hopefully this code block isn't split
pip install langchain
```

As an open source project in a rapidly developing field, we are extremely open to contributions.
"""

markdown_splitter = MarkdownTextSplitter(chunk_size=100, chunk_overlap=0)
docs = markdown_splitter.create_documents([markdown_text])
docs

[Document(page_content='# 🦜️🔗 LangChain\n\n⚡ Building applications with LLMs through composability ⚡', lookup_str='', metadata={}, lookup_index=0),
 Document(page_content="Quick Install\n\n```bash\n# Hopefully this code block isn't split\npip install langchain", lookup_str='', metadata={}, lookup_index=0),
 Document(page_content='As an open source project in a rapidly developing field, we are extremely open to contributions.', lookup_str='', metadata={}, lookup_index=0)]

Latex Text Splitter#

LatexTextSplitter splits text along LaTeX headings, headlines, enumerations, and more. It’s implemented as a simple subclass of RecursiveCharacterTextSplitter with LaTeX-specific separators. See the source code to see the LaTeX syntax expected by default.

How the text is split: by list of LaTeX-specific tags
How the chunk size is measured: by length function passed in (defaults to number of characters)

from langchain.text_splitter import LatexTextSplitter

latex_text = """
\documentclass{article}

\begin{document}

\maketitle

\section{Introduction}
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/textsplitter.html
cb2147dc524f-3
Large language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.

\subsection{History of LLMs}

The earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.

\subsection{Applications of LLMs}

LLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.

\end{document}
"""

latex_splitter = LatexTextSplitter(chunk_size=400, chunk_overlap=0)
docs = latex_splitter.create_documents([latex_text])
docs

Python Code Text Splitter#

PythonCodeTextSplitter splits text along Python class and method definitions. It’s implemented as a simple subclass of RecursiveCharacterTextSplitter with Python-specific separators. See the source code to see the Python syntax expected by default.

How the text is split: by list of Python-specific characters
How the chunk size is measured: by length function passed in (defaults to number of characters)

from langchain.text_splitter import PythonCodeTextSplitter

python_text = """
class Foo:

    def bar():


def foo():

def testing_func():

def bar():
"""
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/textsplitter.html
cb2147dc524f-4
python_splitter = PythonCodeTextSplitter(chunk_size=30, chunk_overlap=0)
docs = python_splitter.create_documents([python_text])
docs

[Document(page_content='Foo:\n\n def bar():', lookup_str='', metadata={}, lookup_index=0),
 Document(page_content='foo():\n\ndef testing_func():', lookup_str='', metadata={}, lookup_index=0),
 Document(page_content='bar():', lookup_str='', metadata={}, lookup_index=0)]

Character Text Splitting#

This is a simpler method. It splits based on characters (by default “\n\n”) and measures chunk length by number of characters.

How the text is split: by single character
How the chunk size is measured: by length function passed in (defaults to number of characters)

from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter(
    separator = "\n\n",
    chunk_size = 1000,
    chunk_overlap = 200,
    length_function = len,
)

texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/textsplitter.html
cb2147dc524f-5
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={} lookup_index=0

Here’s an example of passing metadata along with the documents; notice that it is split along with the documents.

metadatas = [{"document": 1}, {"document": 2}]
documents = text_splitter.create_documents([state_of_the_union, state_of_the_union], metadatas=metadatas)
print(documents[0])
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/textsplitter.html
cb2147dc524f-6
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={'document': 1} lookup_index=0

HuggingFace Length Function#

Most LLMs are constrained by the number of tokens that you can pass in, which is not the same as the number of characters. In order to get a more accurate estimate, we can use Hugging Face tokenizers to count the text length.

How the text is split: by character passed in
How the chunk size is measured: by Hugging Face tokenizer

from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(tokenizer, chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/textsplitter.html
cb2147dc524f-7
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution.

tiktoken (OpenAI) Length Function#

You can also use tiktoken, an open source tokenizer package from OpenAI, to estimate the number of tokens used. It will likely be more accurate for OpenAI’s own models.

How the text is split: by character passed in
How the chunk size is measured: by tiktoken tokenizer

text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])

Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution.

NLTK Text Splitter#

Rather than just splitting on “\n\n”, we can use NLTK to split based on its tokenizers.

How the text is split: by NLTK
How the chunk size is measured: by length function passed in (defaults to number of characters)

from langchain.text_splitter import NLTKTextSplitter
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/textsplitter.html
cb2147dc524f-8
text_splitter = NLTKTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])

Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies.

Spacy Text Splitter#

Another alternative to NLTK is to use Spacy.

How the text is split: by Spacy
How the chunk size is measured: by length function passed in (defaults to number of characters)

from langchain.text_splitter import SpacyTextSplitter

text_splitter = SpacyTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])

Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart.
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/textsplitter.html
cb2147dc524f-9
This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies.

Token Text Splitter#

How the text is split: by tiktoken tokens
How the chunk size is measured: by tiktoken tokens

from langchain.text_splitter import TokenTextSplitter

text_splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])

Madam Speaker, Madam Vice President, our
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/textsplitter.html
53477c3d057e-0
Embeddings#

This notebook goes over how to use the Embedding class in LangChain. The Embedding class is designed for interfacing with embeddings. There are lots of embedding providers (OpenAI, Cohere, Hugging Face, etc.); this class is designed to provide a standard interface for all of them.

Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search, where we look for pieces of text that are most similar in the vector space.

The base Embedding class in LangChain exposes two methods: embed_documents and embed_query. The largest difference is that these two methods have different interfaces: one works over multiple documents, while the other works over a single document. Besides this, another reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs. queries (the search query itself).

OpenAI#

Let’s load the OpenAI Embedding class.

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/embeddings.html
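To illustrate the “most similar in the vector space” idea above, here is a small sketch that scores a query embedding against a document embedding with cosine similarity; numpy is an assumption here, and query_result/doc_result are the lists returned by embed_query and embed_documents in the cells above.

import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a), np.asarray(b)
    # dot product of the vectors divided by the product of their norms
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

cosine_similarity(query_result, doc_result[0])  # ~1.0, since both embed the same text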
53477c3d057e-1
text = "This is a test document." query_result = embeddings.embed_query(text) doc_result = embeddings.embed_documents([text]) AzureOpenAI# Let’s load the OpenAI Embedding class with environment variables set to indicate to use Azure endpoints. # set the environment variables needed for openai package to know to reach out to azure import os os.environ["OPENAI_API_TYPE"] = "azure" os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/" os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key" embeddings = OpenAIEmbeddings(model="your-embeddings-deployment-name") text = "This is a test document." query_result = embeddings.embed_query(text) doc_result = embeddings.embed_documents([text]) Cohere# Let’s load the Cohere Embedding class. from langchain.embeddings import CohereEmbeddings embeddings = CohereEmbeddings(cohere_api_key=cohere_api_key) text = "This is a test document." query_result = embeddings.embed_query(text) doc_result = embeddings.embed_documents([text]) Hugging Face Hub# Let’s load the Hugging Face Embedding class. from langchain.embeddings import HuggingFaceEmbeddings embeddings = HuggingFaceEmbeddings() text = "This is a test document." query_result = embeddings.embed_query(text) doc_result = embeddings.embed_documents([text]) TensorflowHub# Let’s load the TensorflowHub Embedding class. from langchain.embeddings import TensorflowHubEmbeddings embeddings = TensorflowHubEmbeddings()
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/embeddings.html
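Whatever the provider, the embedding comes back as a plain Python list of floats, so the model’s embedding dimension is just the list length (the exact number is model-dependent; 1536 for OpenAI’s text-embedding-ada-002, for example):

print(type(query_result), len(query_result))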
53477c3d057e-2
TensorflowHub#

Let’s load the TensorflowHub Embedding class.

from langchain.embeddings import TensorflowHubEmbeddings

embeddings = TensorflowHubEmbeddings()

2023-01-30 23:53:01.652176: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-01-30 23:53:34.362802: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.

text = "This is a test document."
query_result = embeddings.embed_query(text)

InstructEmbeddings#

Let’s load the HuggingFace instruct Embeddings class.

from langchain.embeddings import HuggingFaceInstructEmbeddings

embeddings = HuggingFaceInstructEmbeddings(
    query_instruction="Represent the query for retrieval: "
)

load INSTRUCTOR_Transformer
max_seq_length  512

text = "This is a test document."
query_result = embeddings.embed_query(text)

Self Hosted Embeddings#

Let’s load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes.

from langchain.embeddings import (
    SelfHostedEmbeddings,
    SelfHostedHuggingFaceEmbeddings,
    SelfHostedHuggingFaceInstructEmbeddings,
)
import runhouse as rh
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/embeddings.html
53477c3d057e-3
# For an on-demand A100 with GCP, Azure, or Lambda
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)

# For an on-demand A10G with AWS (no single A100s on AWS)
# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')

# For an existing cluster
# gpu = rh.cluster(ips=['<ip of the cluster>'],
#                  ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'},
#                  name='my-cluster')

embeddings = SelfHostedHuggingFaceEmbeddings(hardware=gpu)
text = "This is a test document."
query_result = embeddings.embed_query(text)

And similarly for SelfHostedHuggingFaceInstructEmbeddings:

embeddings = SelfHostedHuggingFaceInstructEmbeddings(hardware=gpu)

Now let’s load an embedding model with a custom load function:

def get_pipeline():
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        pipeline,
    )  # Must be inside the function in notebooks

    model_id = "facebook/bart-base"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return pipeline("feature-extraction", model=model, tokenizer=tokenizer)

def inference_fn(pipeline, prompt):
    # Return last hidden state of the model
    if isinstance(prompt, list):
        return [emb[0][-1] for emb in pipeline(prompt)]
    return pipeline(prompt)[0][-1]

embeddings = SelfHostedEmbeddings(
    model_load_fn=get_pipeline,
    hardware=gpu,
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/embeddings.html
53477c3d057e-4
    model_reqs=["./", "torch", "transformers"],
    inference_fn=inference_fn,
)

query_result = embeddings.embed_query(text)

Fake Embeddings#

LangChain also provides a fake embedding class. You can use this to test your pipelines.

from langchain.embeddings import FakeEmbeddings

embeddings = FakeEmbeddings(size=1352)
query_result = embeddings.embed_query("foo")
doc_results = embeddings.embed_documents(["foo"])

SageMaker Endpoint Embeddings#

Let’s load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g., your own Hugging Face model on SageMaker. For instructions on how to do this, please see here.

!pip3 install langchain boto3

from typing import Dict
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.llms.sagemaker_endpoint import ContentHandlerBase
import json

class ContentHandler(ContentHandlerBase):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        input_str = json.dumps({"inputs": prompt, **model_kwargs})
        return input_str.encode('utf-8')

    def transform_output(self, output: bytes) -> str:
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["embeddings"]

content_handler = ContentHandler()

embeddings = SagemakerEndpointEmbeddings(
    # endpoint_name="endpoint-name",
    # credentials_profile_name="credentials-profile-name",
    endpoint_name="huggingface-pytorch-inference-2023-03-21-16-14-03-834",
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/embeddings.html
53477c3d057e-5
region_name="us-east-1", content_handler=content_handler ) query_result = embeddings.embed_query("foo") doc_results = embeddings.embed_documents(["foo"]) doc_results previous How To Guides next Hypothetical Document Embeddings Contents OpenAI AzureOpenAI Cohere Hugging Face Hub TensorflowHub InstructEmbeddings Self Hosted Embeddings Fake Embeddings SageMaker Endpoint Embeddings By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/embeddings.html
f4c02895d130-0
Graph QA#

This notebook goes over how to do question answering over a graph data structure.

Create the graph#

In this section, we construct an example graph. At the moment, this works best for small pieces of text.

from langchain.indexes import GraphIndexCreator
from langchain.llms import OpenAI
from langchain.document_loaders import TextLoader

index_creator = GraphIndexCreator(llm=OpenAI(temperature=0))

with open("../../state_of_the_union.txt") as f:
    all_text = f.read()

We will use just a small snippet, because extracting the knowledge triplets is a bit intensive at the moment.

text = "\n".join(all_text.split("\n\n")[105:108])
text

'It won’t look like much, but if you stop and look closely, you’ll see a “Field of dreams,” the ground on which America’s future will be built. \nThis is where Intel, the American company that helped build Silicon Valley, is going to build its $20 billion semiconductor “mega site”. \nUp to eight state-of-the-art factories in one place. 10,000 new good-paying jobs. '

graph = index_creator.from_text(text)

We can inspect the created graph.

graph.get_triples()

[('Intel', '$20 billion semiconductor "mega site"', 'is going to build'),
 ('Intel', 'state-of-the-art factories', 'is building'),
 ('Intel', '10,000 new good-paying jobs', 'is creating'),
 ('Intel', 'Silicon Valley', 'is helping build'),
 ('Field of dreams', "America's future will be built", 'is the ground on which')]
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/graph_qa.html
f4c02895d130-1
Querying the graph#

We can now use the graph QA chain to ask questions of the graph.

from langchain.chains import GraphQAChain

chain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=graph, verbose=True)
chain.run("what is Intel going to build?")

> Entering new GraphQAChain chain...
Entities Extracted: Intel
Full Context:
Intel is going to build $20 billion semiconductor "mega site"
Intel is building state-of-the-art factories
Intel is creating 10,000 new good-paying jobs
Intel is helping build Silicon Valley
> Finished chain.

' Intel is going to build a $20 billion semiconductor "mega site" with state-of-the-art factories, creating 10,000 new good-paying jobs and helping to build Silicon Valley.'

Save the graph#

We can also save and load the graph.

graph.write_to_gml("graph.gml")

from langchain.indexes.graph import NetworkxEntityGraph

loaded_graph = NetworkxEntityGraph.from_gml("graph.gml")
loaded_graph.get_triples()

[('Intel', '$20 billion semiconductor "mega site"', 'is going to build'),
 ('Intel', 'state-of-the-art factories', 'is building'),
 ('Intel', '10,000 new good-paying jobs', 'is creating'),
 ('Intel', 'Silicon Valley', 'is helping build'),
 ('Field of dreams', "America's future will be built", 'is the ground on which')]
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/graph_qa.html
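A small sketch for working with the triples directly: judging by the “Full Context” lines above, each tuple is ordered (subject, object, predicate), so they can be rendered back into sentences (the ordering is inferred from that output, not from a documented contract):

for subject, obj, predicate in graph.get_triples():
    # e.g. prints: Intel is going to build $20 billion semiconductor "mega site"
    print(f"{subject} {predicate} {obj}")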
95c4f7c73985-0
VectorDB Question Answering with Sources#

This notebook goes over how to do question answering with sources over a vector database. It does this by using the VectorDBQAWithSourcesChain, which does the lookup of the documents from a vector database.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings.cohere import CohereEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from langchain.vectorstores import Chroma

with open('../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)

embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": f"{i}-pl"} for i in range(len(texts))])

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
Exiting: Cleaning up .chroma directory

from langchain.chains import VectorDBQAWithSourcesChain
from langchain import OpenAI

chain = VectorDBQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", vectorstore=docsearch)

chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)

{'answer': ' The president thanked Justice Breyer for his service and mentioned his legacy of excellence.\n',
 'sources': '30-pl'}
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/vector_db_qa_with_sources.html
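Because the sources were assigned as f"{i}-pl" above, the '30-pl' id in the answer maps straight back to texts[30], so you can inspect the chunk the answer was grounded in. A minimal sketch:

source_id = "30-pl"
idx = int(source_id.split("-")[0])  # recover the chunk index from the source id
print(texts[idx][:200])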
95c4f7c73985-1
Chain Type#

You can easily specify different chain types to load and use in the VectorDBQAWithSourcesChain chain. For a more detailed walkthrough of these types, please see this notebook.

There are two ways to load different chain types. First, you can specify the chain type argument in the from_chain_type method. This allows you to pass in the name of the chain type you want to use. For example, below we change the chain type to map_reduce.

chain = VectorDBQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="map_reduce", vectorstore=docsearch)

chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)

{'answer': ' The president honored Justice Stephen Breyer for his service.\n',
 'sources': '30-pl'}

This approach makes it really simple to change the chain_type, but it does not provide much flexibility over the parameters of that chain type. If you want to control those parameters, you can load the chain directly (as shown in this notebook) and then pass it to the VectorDBQAWithSourcesChain chain with the combine_documents_chain parameter. For example:

from langchain.chains.qa_with_sources import load_qa_with_sources_chain

qa_chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
qa = VectorDBQAWithSourcesChain(combine_documents_chain=qa_chain, vectorstore=docsearch)

qa({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)

{'answer': ' The president honored Justice Stephen Breyer for his service.\n',
 'sources': '30-pl'}
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/vector_db_qa_with_sources.html
b6792bab5f8e-0
Chat Vector DB#

This notebook goes over how to set up a chain to chat with a vector database. The only difference between this chain and the VectorDBQAChain is that this one allows for passing in a chat history, which can be used to allow for follow-up questions.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import ChatVectorDBChain

Load in documents. You can replace this with a loader for whatever type of data you want.

from langchain.document_loaders import TextLoader
loader = TextLoader('../../state_of_the_union.txt')
documents = loader.load()

If you had multiple loaders that you wanted to combine, you would do something like:

# loaders = [....]
# docs = []
# for loader in loaders:
#     docs.extend(loader.load())

We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them.

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents, embeddings)

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

We now initialize the ChatVectorDBChain:
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/chat_vector_db.html
b6792bab5f8e-1
qa = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore)

Here’s an example of asking a question with no chat history:

chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})

result["answer"]

" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

Here’s an example of asking a question with some chat history (a looped version of this pattern appears just after this chunk):

chat_history = [(query, result["answer"])]
query = "Did he mention who she suceeded"
result = qa({"question": query, "chat_history": chat_history})

result['answer']

' Justice Stephen Breyer'
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/chat_vector_db.html
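Here is the looped version referenced above: a minimal sketch that generalizes the two questions into a loop, threading chat_history through each call, which is all the chain needs for follow-ups.

chat_history = []
for query in [
    "What did the president say about Ketanji Brown Jackson",
    "Did he mention who she suceeded",  # queries taken verbatim from above
]:
    result = qa({"question": query, "chat_history": chat_history})
    chat_history.append((query, result["answer"]))
    print(result["answer"])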
b6792bab5f8e-2
Return Source Documents#

You can also easily return source documents from the ChatVectorDBChain. This is useful for when you want to inspect what documents were returned.

qa = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore, return_source_documents=True)

chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})

result['source_documents'][0]

Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \n\nWe cannot let this happen. \n\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)

Chat Vector DB with search_distance#

If you are using a vector store that supports filtering by search distance, you can add a threshold value parameter.

vectordbkwargs = {"search_distance": 0.9}

qa = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore, return_source_documents=True)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history, "vectordbkwargs": vectordbkwargs})

Chat Vector DB with map_reduce#

We can also use different types of combine document chains with the Chat Vector DB chain.
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/chat_vector_db.html
b6792bab5f8e-3
from langchain.chains import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.chat_vector_db.prompts import CONDENSE_QUESTION_PROMPT

llm = OpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(llm, chain_type="map_reduce")

chain = ChatVectorDBChain(
    vectorstore=vectorstore,
    question_generator=question_generator,
    combine_docs_chain=doc_chain,
)

chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})

result['answer']

" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

Chat Vector DB with Question Answering with sources#

You can also use this chain with the question answering with sources chain.

from langchain.chains.qa_with_sources import load_qa_with_sources_chain

llm = OpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")

chain = ChatVectorDBChain(
    vectorstore=vectorstore,
    question_generator=question_generator,
    combine_docs_chain=doc_chain,
)
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/chat_vector_db.html
b6792bab5f8e-4
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})

result['answer']

" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \nSOURCES: ../../state_of_the_union.txt"

Chat Vector DB with streaming to stdout#

Output from the chain will be streamed to stdout token by token in this example.

from langchain.chains.llm import LLMChain
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains.chat_vector_db.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT
from langchain.chains.question_answering import load_qa_chain

# Construct a ChatVectorDBChain with a streaming llm for combine docs
# and a separate, non-streaming llm for question generation
llm = OpenAI(temperature=0)
streaming_llm = OpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)

question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(streaming_llm, chain_type="stuff", prompt=QA_PROMPT)

qa = ChatVectorDBChain(vectorstore=vectorstore, combine_docs_chain=doc_chain, question_generator=question_generator)
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/chat_vector_db.html
b6792bab5f8e-5
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})

The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.

chat_history = [(query, result["answer"])]
query = "Did he mention who she suceeded"
result = qa({"question": query, "chat_history": chat_history})

Justice Stephen Breyer

get_chat_history Function#

You can also specify a get_chat_history function, which can be used to format the chat_history string.

def get_chat_history(inputs) -> str:
    res = []
    for human, ai in inputs:
        res.append(f"Human:{human}\nAI:{ai}")
    return "\n".join(res)

qa = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore, get_chat_history=get_chat_history)

chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})

result['answer']

" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/chat_vector_db.html
a39fe77e394c-0
Question Answering with Sources#

This notebook walks through how to use LangChain for question answering with sources over a list of documents. It covers four different chain types: stuff, map_reduce, refine, and map-rerank. For a more in-depth explanation of what these chain types are, see here.

Prepare Data#

First we prepare the data. For this example we do similarity search over a vector database, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do AFTER you fetch the documents).

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings.cohere import CohereEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from langchain.vectorstores import Chroma
from langchain.docstore.document import Document
from langchain.prompts import PromptTemplate

with open('../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)

embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))])

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

query = "What did the president say about Justice Breyer"
docs = docsearch.similarity_search(query)
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/qa_with_sources.html
a39fe77e394c-1
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.llms import OpenAI
Quickstart#
If you just want to get started as quickly as possible, this is the recommended way to do it:
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}
If you want more control and understanding over what is happening, please see the information below.
The stuff Chain#
This section shows results of using the stuff Chain to do question answering with sources.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer. Respond in Italian.
QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER IN ITALIAN:"""
PROMPT = PromptTemplate(template=template, input_variables=["summaries", "question"])
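Before wiring the template into a chain, you can sanity-check it by rendering it with placeholder values; the filler strings below are purely illustrative:
print(PROMPT.format(summaries="[extracted parts of the document would go here]", question="What did the president say about Justice Breyer"))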
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/qa_with_sources.html
a39fe77e394c-2
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff", prompt=PROMPT)
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': '\nNon so cosa abbia detto il presidente riguardo a Justice Breyer.\nSOURCES: 30, 31, 33'}
The map_reduce Chain#
This section shows results of using the map_reduce Chain to do question answering with sources.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}
Intermediate Steps
We can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_intermediate_steps variable.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': [' "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."',
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/qa_with_sources.html
a39fe77e394c-3
' None', ' None', ' None'], 'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'} Custom Prompts You can also use your own prompts with this chain. In this example, we will respond in Italian. question_prompt_template = """Use the following portion of a long document to see if any of the text is relevant to answer the question. Return any relevant text in Italian. {context} Question: {question} Relevant text, if any, in Italian:""" QUESTION_PROMPT = PromptTemplate( template=question_prompt_template, input_variables=["context", "question"] ) combine_prompt_template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES"). If you don't know the answer, just say that you don't know. Don't try to make up an answer. ALWAYS return a "SOURCES" part in your answer. Respond in Italian. QUESTION: {question} ========= {summaries} ========= FINAL ANSWER IN ITALIAN:""" COMBINE_PROMPT = PromptTemplate( template=combine_prompt_template, input_variables=["summaries", "question"] ) chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True, question_prompt=QUESTION_PROMPT, combine_prompt=COMBINE_PROMPT) chain({"input_documents": docs, "question": query}, return_only_outputs=True)
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/qa_with_sources.html
a39fe77e394c-4
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ["\nStasera vorrei onorare qualcuno che ha dedicato la sua vita a servire questo paese: il giustizia Stephen Breyer - un veterano dell'esercito, uno studioso costituzionale e un giustizia in uscita della Corte Suprema degli Stati Uniti. Giustizia Breyer, grazie per il tuo servizio.", ' Non pertinente.', ' Non rilevante.', " Non c'è testo pertinente."], 'output_text': ' Non conosco la risposta. SOURCES: 30, 31, 33, 20.'} Batch Size When using the map_reduce chain, one thing to keep in mind is the batch size you are using during the map step. If this is too high, it could cause rate limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies for LLMs with this parameter. Below is an example of doing so: llm = OpenAI(batch_size=5, temperature=0) The refine Chain# This sections shows results of using the refine Chain to do question answering with sources. chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine") query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "question": query}, return_only_outputs=True)
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/qa_with_sources.html
a39fe77e394c-5
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': "\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked him for his service and praised his career as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He noted Justice Breyer's reputation as a consensus builder and the broad range of support he has received from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also highlighted the importance of securing the border and fixing the immigration system in order to advance liberty and justice, and mentioned the new technology, joint patrols, dedicated immigration judges, and commitments to support partners in South and Central America that have been put in place. He also expressed his commitment to the LGBTQ+ community, noting the need for the bipartisan Equality Act and the importance of protecting transgender Americans from state laws targeting them. He also highlighted his commitment to bipartisanship, noting the 80 bipartisan bills he signed into law last year, and his plans to strengthen the Violence Against Women Act. Additionally, he announced that the Justice Department will name a chief prosecutor for pandemic fraud and his plan to lower the deficit by more than one trillion dollars in a"} Intermediate Steps We can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_intermediate_steps variable. chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True) chain({"input_documents": docs, "question": query}, return_only_outputs=True)
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/qa_with_sources.html
a39fe77e394c-6
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service.', '\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. \n\nSource: 31',
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/qa_with_sources.html
a39fe77e394c-7
'\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. \n\nSource: 31, 33',
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/qa_with_sources.html
a39fe77e394c-8
'\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. Additionally, he mentioned his plan to lower costs to give families a fair shot, lower the deficit, and go after criminals who stole billions in relief money meant for small businesses and millions of Americans. He also announced that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nSource: 20, 31, 33'],
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/qa_with_sources.html
a39fe77e394c-9
'output_text': '\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. Additionally, he mentioned his plan to lower costs to give families a fair shot, lower the deficit, and go after criminals who stole billions in relief money meant for small businesses and millions of Americans. He also announced that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nSource: 20, 31, 33'}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
refine_template = (
    "The original question is as follows: {question}\n"
    "We have provided an existing answer, including sources: {existing_answer}\n"
    "We have the opportunity to refine the existing answer"
    "(only if needed) with some more context below.\n"
    "------------\n"
    "{context_str}\n"
    "------------\n"
    "Given the new context, refine the original answer to better "
    "answer the question (in Italian). "
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/qa_with_sources.html
a39fe77e394c-10
"answer the question (in Italian)" "If you do update it, please update the sources as well. " "If the context isn't useful, return the original answer." ) refine_prompt = PromptTemplate( input_variables=["question", "existing_answer", "context_str"], template=refine_template, ) question_template = ( "Context information is below. \n" "---------------------\n" "{context_str}" "\n---------------------\n" "Given the context information and not prior knowledge, " "answer the question in Italian: {question}\n" ) question_prompt = PromptTemplate( input_variables=["context_str", "question"], template=question_template ) chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True, question_prompt=question_prompt, refine_prompt=refine_prompt) chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha onorato la sua carriera.',
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/qa_with_sources.html
a39fe77e394c-11
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per",
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/qa_with_sources.html
a39fe77e394c-12
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per",
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/qa_with_sources.html
a39fe77e394c-13
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per"],
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/qa_with_sources.html
a39fe77e394c-14
'output_text': "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per"} The map-rerank Chain# This sections shows results of using the map-rerank Chain to do question answering with sources. chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_rerank", metadata_keys=['source'], return_intermediate_steps=True) query = "What did the president say about Justice Breyer" result = chain({"input_documents": docs, "question": query}, return_only_outputs=True) result["output_text"] ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.' result["intermediate_steps"] [{'answer': ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.', 'score': '100'},
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/qa_with_sources.html
a39fe77e394c-15
'score': '100'},
{'answer': ' This document does not answer the question', 'score': '0'},
{'answer': ' This document does not answer the question', 'score': '0'},
{'answer': ' This document does not answer the question', 'score': '0'}]
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
from langchain.output_parsers import RegexParser
output_parser = RegexParser(
    regex=r"(.*?)\nScore: (.*)",
    output_keys=["answer", "score"],
)
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
In addition to giving an answer, also return a score of how fully it answered the user's question. This should be in the following format:
Question: [question here]
Helpful Answer In Italian: [answer here]
Score: [score between 0 and 100]
Begin!
Context:
---------
{context}
---------
Question: {question}
Helpful Answer In Italian:"""
PROMPT = PromptTemplate(
    template=prompt_template,
    input_variables=["context", "question"],
    output_parser=output_parser,
)
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_rerank", metadata_keys=['source'], return_intermediate_steps=True, prompt=PROMPT)
query = "What did the president say about Justice Breyer"
result = chain({"input_documents": docs, "question": query}, return_only_outputs=True)
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/qa_with_sources.html
a39fe77e394c-16
result {'source': 30, 'intermediate_steps': [{'answer': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha onorato la sua carriera.', 'score': '100'}, {'answer': ' Il presidente non ha detto nulla sulla Giustizia Breyer.', 'score': '100'}, {'answer': ' Non so.', 'score': '0'}, {'answer': ' Il presidente non ha detto nulla sulla giustizia Breyer.', 'score': '100'}], 'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha onorato la sua carriera.'} previous Graph QA next Question Answering Contents Prepare Data Quickstart The stuff Chain The map_reduce Chain The refine Chain The map-rerank Chain By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/qa_with_sources.html
1bada55fce73-0
.ipynb .pdf Analyze Document Contents Summarize Question Answering
Analyze Document#
The AnalyzeDocumentChain is more of an end-to-end chain: it takes in a single document, splits it up, and then runs it through a CombineDocumentsChain.
with open('../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
Summarize#
Let’s take a look at it in action below, using it to summarize a long document.
from langchain import OpenAI
from langchain.chains.summarize import load_summarize_chain
llm = OpenAI(temperature=0)
summary_chain = load_summarize_chain(llm, chain_type="map_reduce")
from langchain.chains import AnalyzeDocumentChain
summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=summary_chain)
summarize_document_chain.run(state_of_the_union)
" In this speech, President Biden addresses the American people and the world, discussing the recent aggression of Russia's Vladimir Putin in Ukraine and the US response. He outlines economic sanctions and other measures taken to hold Putin accountable, and announces the US Department of Justice's task force to go after the crimes of Russian oligarchs. He also announces plans to fight inflation and lower costs for families, invest in American manufacturing, and provide military, economic, and humanitarian assistance to Ukraine. He calls for immigration reform, protecting the rights of women, and advancing the rights of LGBTQ+ Americans, and pays tribute to military families. He concludes with optimism for the future of America."
Question Answering#
Let’s take a look at this using a question answering chain.
from langchain.chains.question_answering import load_qa_chain
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/analyze_document.html
1bada55fce73-1
qa_chain = load_qa_chain(llm, chain_type="map_reduce") qa_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_chain) qa_document_chain.run(input_document=state_of_the_union, question="what did the president say about justice breyer?") ' The president thanked Justice Breyer for his service.' previous Weaviate next Chat Vector DB Contents Summarize Question Answering By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/analyze_document.html
52e143e51b52-0
.ipynb .pdf Question Answering Contents Prepare Data Quickstart The stuff Chain The map_reduce Chain The refine Chain The map-rerank Chain
Question Answering#
This notebook walks through how to use LangChain for question answering over a list of documents. It covers four different types of chains: stuff, map_reduce, refine, map_rerank. For a more in-depth explanation of what these chain types are, see here.
Prepare Data#
First we prepare the data. For this example we do similarity search over a vector database, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do AFTER you fetch the documents).
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.docstore.document import Document
from langchain.prompts import PromptTemplate
from langchain.indexes.vectorstore import VectorstoreIndexCreator
index_creator = VectorstoreIndexCreator()
from langchain.document_loaders import TextLoader
loader = TextLoader('../../state_of_the_union.txt')
docsearch = index_creator.from_loaders([loader])
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
query = "What did the president say about Justice Breyer"
docs = docsearch.similarity_search(query)
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
Quickstart#
If you just want to get started as quickly as possible, this is the recommended way to do it:
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain.run(input_documents=docs, question=query)
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/question_answering.html
52e143e51b52-1
' The president said that he was honoring Justice Breyer for his service to the country and that he was a Constitutional scholar, Army veteran, and retiring Justice of the United States Supreme Court.'
If you want more control and understanding over what is happening, please see the information below.
The stuff Chain#
This section shows results of using the stuff Chain to do question answering.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' The president said that he was honoring Justice Breyer for his service to the country and that he was a Constitutional scholar, Army veteran, and retiring Justice of the United States Supreme Court.'}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
Answer in Italian:"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff", prompt=PROMPT)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha onorato la sua carriera come giudice della Corte Suprema degli Stati Uniti.'}
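Because the stuff chain puts every retrieved document into a single prompt, it pays to check that the combined text fits the model's context window before running it. A rough sketch, assuming the standard get_num_tokens helper on LangChain LLMs (the 3000-token threshold is an arbitrary illustration):
llm = OpenAI(temperature=0)
combined = "\n\n".join(doc.page_content for doc in docs)
# Leave headroom for the question and the generated answer.
if llm.get_num_tokens(combined) > 3000:
    print("Context may be too long for the stuff chain; consider map_reduce.")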
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/question_answering.html
52e143e51b52-2
The map_reduce Chain#
This section shows results of using the map_reduce Chain to do question answering.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' The president said, "Justice Breyer, thank you for your service."'}
Intermediate Steps
We can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_map_steps variable.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce", return_map_steps=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': [' "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."', ' None', ' None', ' None'],
'output_text': ' The president said, "Justice Breyer, thank you for your service."'}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
question_prompt_template = """Use the following portion of a long document to see if any of the text is relevant to answer the question.
Return any relevant text translated into Italian.
{context}
Question: {question}
Relevant text, if any, in Italian:"""
QUESTION_PROMPT = PromptTemplate(
    template=question_prompt_template, input_variables=["context", "question"]
)
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/question_answering.html
52e143e51b52-3
combine_prompt_template = """Given the following extracted parts of a long document and a question, create a final answer in Italian.
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
QUESTION: {question}
=========
{summaries}
=========
Answer in Italian:"""
COMBINE_PROMPT = PromptTemplate(
    template=combine_prompt_template, input_variables=["summaries", "question"]
)
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce", return_map_steps=True, question_prompt=QUESTION_PROMPT, combine_prompt=COMBINE_PROMPT)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': ["\nStasera vorrei onorare qualcuno che ha dedicato la sua vita a servire questo paese: il giustizia Stephen Breyer - un veterano dell'esercito, uno studioso costituzionale e un giustizia in uscita della Corte Suprema degli Stati Uniti. Giustizia Breyer, grazie per il tuo servizio.", '\nNessun testo pertinente.', "\nCome ho detto l'anno scorso, soprattutto ai nostri giovani americani transgender, avrò sempre il tuo sostegno come tuo Presidente, in modo che tu possa essere te stesso e raggiungere il tuo potenziale donato da Dio.",
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/question_answering.html
52e143e51b52-4
'\nNella mia amministrazione, i guardiani sono stati accolti di nuovo. Stiamo andando dietro ai criminali che hanno rubato miliardi di dollari di aiuti di emergenza destinati alle piccole imprese e a milioni di americani. E stasera, annuncio che il Dipartimento di Giustizia nominerà un procuratore capo per la frode pandemica.'],
'output_text': ' Non conosco la risposta alla tua domanda su cosa abbia detto il Presidente riguardo al Giustizia Breyer.'}
Batch Size
When using the map_reduce chain, one thing to keep in mind is the batch size you are using during the map step. If this is too high, it could cause rate limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies for LLMs with this parameter. Below is an example of doing so:
llm = OpenAI(batch_size=5, temperature=0)
The refine Chain#
This section shows results of using the refine Chain to do question answering.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine")
query = "What did the president say about Justice Breyer"
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/question_answering.html
52e143e51b52-5
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his commitment to protecting the rights of LGBTQ+ Americans and his support for the bipartisan Equality Act. He also mentioned his plan to lower costs to give families a fair shot, lower the deficit, and go after criminals who stole pandemic relief funds. He also announced that the Justice Department will name a chief prosecutor for pandemic fraud.'} Intermediate Steps We can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_refine_steps variable. chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine", return_refine_steps=True) chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country and his legacy of excellence.', '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice.', '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his commitment to protecting the rights of LGBTQ+ Americans and his support for the bipartisan Equality Act.',
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/question_answering.html
52e143e51b52-6
'\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his commitment to protecting the rights of LGBTQ+ Americans and his support for the bipartisan Equality Act. He also mentioned his plan to lower costs to give families a fair shot, lower the deficit, and go after criminals who stole pandemic relief funds. He also announced that the Justice Department will name a chief prosecutor for pandemic fraud.'], 'output_text': '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his commitment to protecting the rights of LGBTQ+ Americans and his support for the bipartisan Equality Act. He also mentioned his plan to lower costs to give families a fair shot, lower the deficit, and go after criminals who stole pandemic relief funds. He also announced that the Justice Department will name a chief prosecutor for pandemic fraud.'} Custom Prompts You can also use your own prompts with this chain. In this example, we will respond in Italian. refine_prompt_template = ( "The original question is as follows: {question}\n" "We have provided an existing answer: {existing_answer}\n" "We have the opportunity to refine the existing answer" "(only if needed) with some more context below.\n" "------------\n" "{context_str}\n" "------------\n" "Given the new context, refine the original answer to better " "answer the question. " "If the context isn't useful, return the original answer. Reply in Italian." ) refine_prompt = PromptTemplate( input_variables=["question", "existing_answer", "context_str"],
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/question_answering.html
52e143e51b52-7
input_variables=["question", "existing_answer", "context_str"], template=refine_prompt_template, ) initial_qa_template = ( "Context information is below. \n" "---------------------\n" "{context_str}" "\n---------------------\n" "Given the context information and not prior knowledge, " "answer the question: {question}\nYour answer should be in Italian.\n" ) initial_qa_prompt = PromptTemplate( input_variables=["context_str", "question"], template=initial_qa_template ) chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine", return_refine_steps=True, question_prompt=initial_qa_prompt, refine_prompt=refine_prompt) chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha onorato la sua carriera. Ha anche detto che la sua nomina di Circuit Court of Appeals Judge Ketanji Brown Jackson continuerà il suo eccezionale lascito.',
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/question_answering.html
52e143e51b52-8
"\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha onorato la sua carriera. Ha anche detto che la sua nomina di Circuit Court of Appeals Judge Ketanji Brown Jackson continuerà il suo eccezionale lascito. Ha sottolineato che la sua esperienza come avvocato di alto livello in pratica privata, come ex difensore federale pubblico e come membro di una famiglia di educatori e agenti di polizia, la rende una costruttrice di consenso. Ha anche sottolineato che, dalla sua nomina, ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani.",
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/question_answering.html
52e143e51b52-9
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha onorato la sua carriera. Ha anche detto che la sua nomina di Circuit Court of Appeals Judge Ketanji Brown Jackson continuerà il suo eccezionale lascito. Ha sottolineato che la sua esperienza come avvocato di alto livello in pratica privata, come ex difensore federale pubblico e come membro di una famiglia di educatori e agenti di polizia, la rende una costruttrice di consenso. Ha anche sottolineato che, dalla sua nomina, ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Ha inoltre sottolineato che la nomina di Justice Breyer è un passo importante verso l'uguaglianza per tutti gli americani, in partic",
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/question_answering.html
52e143e51b52-10
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha onorato la sua carriera. Ha anche detto che la sua nomina di Circuit Court of Appeals Judge Ketanji Brown Jackson continuerà il suo eccezionale lascito. Ha sottolineato che la sua esperienza come avvocato di alto livello in pratica privata, come ex difensore federale pubblico e come membro di una famiglia di educatori e agenti di polizia, la rende una costruttrice di consenso. Ha anche sottolineato che, dalla sua nomina, ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Ha inoltre sottolineato che la nomina di Justice Breyer è un passo importante verso l'uguaglianza per tutti gli americani, in partic"],
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/question_answering.html
52e143e51b52-11
'output_text': "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha onorato la sua carriera. Ha anche detto che la sua nomina di Circuit Court of Appeals Judge Ketanji Brown Jackson continuerà il suo eccezionale lascito. Ha sottolineato che la sua esperienza come avvocato di alto livello in pratica privata, come ex difensore federale pubblico e come membro di una famiglia di educatori e agenti di polizia, la rende una costruttrice di consenso. Ha anche sottolineato che, dalla sua nomina, ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Ha inoltre sottolineato che la nomina di Justice Breyer è un passo importante verso l'uguaglianza per tutti gli americani, in partic"} The map-rerank Chain# This sections shows results of using the map-rerank Chain to do question answering with sources. chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank", return_intermediate_steps=True) query = "What did the president say about Justice Breyer" results = chain({"input_documents": docs, "question": query}, return_only_outputs=True) results["output_text"] ' The president thanked Justice Breyer for his service and honored him for dedicating his life to serving the country. ' results["intermediate_steps"] [{'answer': ' The president thanked Justice Breyer for his service and honored him for dedicating his life to serving the country. ', 'score': '100'},
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/question_answering.html
52e143e51b52-12
'score': '100'}, {'answer': " The president said that Justice Breyer is a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that since she's been nominated, she's received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans, and that she is a consensus builder.", 'score': '100'}, {'answer': ' The president did not mention Justice Breyer in this context.', 'score': '0'}, {'answer': ' The president did not mention Justice Breyer in the given context. ', 'score': '0'}] Custom Prompts You can also use your own prompts with this chain. In this example, we will respond in Italian. from langchain.output_parsers import RegexParser output_parser = RegexParser( regex=r"(.*?)\nScore: (.*)", output_keys=["answer", "score"], ) prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. In addition to giving an answer, also return a score of how fully it answered the user's question. This should be in the following format: Question: [question here] Helpful Answer In Italian: [answer here] Score: [score between 0 and 100] Begin! Context: --------- {context} --------- Question: {question} Helpful Answer In Italian:""" PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"], output_parser=output_parser, )
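To see what this parser does, you can apply it directly to a string shaped like the expected model output; the sample answer and score below are made up:
output_parser.parse("Il presidente ha ringraziato il giudice Breyer.\nScore: 87")
# Expected result (assuming RegexParser.parse maps regex groups to output_keys):
# {'answer': 'Il presidente ha ringraziato il giudice Breyer.', 'score': '87'}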
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/question_answering.html
52e143e51b52-13
input_variables=["context", "question"], output_parser=output_parser, ) chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank", return_intermediate_steps=True, prompt=PROMPT) query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': [{'answer': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha onorato la sua carriera.', 'score': '100'}, {'answer': ' Il presidente non ha detto nulla sulla Giustizia Breyer.', 'score': '100'}, {'answer': ' Non so.', 'score': '0'}, {'answer': ' Il presidente non ha detto nulla sulla giustizia Breyer.', 'score': '100'}], 'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha onorato la sua carriera.'} previous Question Answering with Sources next Summarization Contents Prepare Data Quickstart The stuff Chain The map_reduce Chain The refine Chain The map-rerank Chain By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/question_answering.html
a4db2cc4fcc2-0
.ipynb .pdf Summarization Contents Prepare Data Quickstart The stuff Chain The map_reduce Chain The refine Chain
Summarization#
This notebook walks through how to use LangChain for summarization over a list of documents. It covers three different chain types: stuff, map_reduce, and refine. For a more in-depth explanation of what these chain types are, see here.
Prepare Data#
First we prepare the data. For this example we create multiple documents from one long one, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do AFTER you fetch the documents).
from langchain import OpenAI, PromptTemplate, LLMChain
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
from langchain.prompts import PromptTemplate
llm = OpenAI(temperature=0)
text_splitter = CharacterTextSplitter()
with open('../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
texts = text_splitter.split_text(state_of_the_union)
from langchain.docstore.document import Document
docs = [Document(page_content=t) for t in texts[:3]]
from langchain.chains.summarize import load_summarize_chain
Quickstart#
If you just want to get started as quickly as possible, this is the recommended way to do it:
chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(docs)
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/summarize.html
a4db2cc4fcc2-1
' In response to Russian aggression in Ukraine, the United States and its allies are taking action to hold Putin accountable, including economic sanctions, asset seizures, and military assistance. The US is also providing economic and humanitarian aid to Ukraine, and has passed the American Rescue Plan and the Bipartisan Infrastructure Law to help struggling families and create jobs. The US remains unified and determined to protect Ukraine and the free world.'
If you want more control and understanding over what is happening, please see the information below.
The stuff Chain#
This section shows results of using the stuff Chain to do summarization.
chain = load_summarize_chain(llm, chain_type="stuff")
chain.run(docs)
' In his speech, President Biden addressed the crisis in Ukraine, the American Rescue Plan, and the Bipartisan Infrastructure Law. He discussed the need to invest in America, educate Americans, and build the economy from the bottom up. He also announced the release of 60 million barrels of oil from reserves around the world, and the creation of a dedicated task force to go after the crimes of Russian oligarchs. He concluded by emphasizing the need to Buy American and use taxpayer dollars to rebuild America.'
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
prompt_template = """Write a concise summary of the following:
{text}
CONCISE SUMMARY IN ITALIAN:"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
chain = load_summarize_chain(llm, chain_type="stuff", prompt=PROMPT)
chain.run(docs)
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/summarize.html
a4db2cc4fcc2-2
"\n\nIn questa serata, il Presidente degli Stati Uniti ha annunciato una serie di misure per affrontare la crisi in Ucraina, causata dall'aggressione di Putin. Ha anche annunciato l'invio di aiuti economici, militari e umanitari all'Ucraina. Ha anche annunciato che gli Stati Uniti e i loro alleati stanno imponendo sanzioni economiche a Putin e stanno rilasciando 60 milioni di barili di petrolio dalle riserve di tutto il mondo. Inoltre, ha annunciato che il Dipartimento di Giustizia degli Stati Uniti sta creando una task force dedicata ai crimini degli oligarchi russi. Il Presidente ha anche annunciato l'approvazione della legge bipartitica sull'infrastruttura, che prevede investimenti per la ricostruzione dell'America. Questo porterà a creare posti"
The map_reduce Chain#
This section shows results of using the map_reduce Chain to do summarization.
chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(docs)
" In response to Russia's aggression in Ukraine, the United States and its allies have imposed economic sanctions and are taking other measures to hold Putin accountable. The US is also providing economic and military assistance to Ukraine, protecting NATO countries, and releasing oil from its Strategic Petroleum Reserve. President Biden and Vice President Harris have passed legislation to help struggling families and rebuild America's infrastructure."
Intermediate Steps
We can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_intermediate_steps variable.
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/summarize.html
a4db2cc4fcc2-3
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True) chain({"input_documents": docs}, return_only_outputs=True) {'map_steps': [" In response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains.", ' The United States and its European allies are taking action to punish Russia for its invasion of Ukraine, including seizing assets, closing off airspace, and providing economic and military assistance to Ukraine. The US is also mobilizing forces to protect NATO countries and has released 30 million barrels of oil from its Strategic Petroleum Reserve to help blunt gas prices. The world is uniting in support of Ukraine and democracy, and the US stands with its Ukrainian-American citizens.', " President Biden and Vice President Harris ran for office with a new economic vision for America, and have since passed the American Rescue Plan and the Bipartisan Infrastructure Law to help struggling families and rebuild America's infrastructure. This includes creating jobs, modernizing roads, airports, ports, and waterways, replacing lead pipes, providing affordable high-speed internet, and investing in American products to support American jobs."], 'output_text': " In response to Russia's aggression in Ukraine, the United States and its allies have imposed economic sanctions and are taking other measures to hold Putin accountable. The US is also providing economic and military assistance to Ukraine, protecting NATO countries, and passing legislation to help struggling families and rebuild America's infrastructure. The world is uniting in support of Ukraine and democracy, and the US stands with its Ukrainian-American citizens."} Custom Prompts You can also use your own prompts with this chain. In this example, we will respond in Italian.
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/summarize.html
a4db2cc4fcc2-4
prompt_template = """Write a concise summary of the following: {text} CONCISE SUMMARY IN ITALIAN:""" PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"]) chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True, map_prompt=PROMPT, combine_prompt=PROMPT) chain({"input_documents": docs}, return_only_outputs=True) {'intermediate_steps': ["\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Gli Stati Uniti e i loro alleati stanno ora imponendo sanzioni economiche a Putin e stanno tagliando l'accesso della Russia alla tecnologia. Il Dipartimento di Giustizia degli Stati Uniti sta anche creando una task force dedicata per andare dopo i crimini degli oligarchi russi.",
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/summarize.html
a4db2cc4fcc2-5
"\n\nStiamo unendo le nostre forze con quelle dei nostri alleati europei per sequestrare yacht, appartamenti di lusso e jet privati di Putin. Abbiamo chiuso lo spazio aereo americano ai voli russi e stiamo fornendo più di un miliardo di dollari in assistenza all'Ucraina. Abbiamo anche mobilitato le nostre forze terrestri, aeree e navali per proteggere i paesi della NATO. Abbiamo anche rilasciato 60 milioni di barili di petrolio dalle riserve di tutto il mondo, di cui 30 milioni dalla nostra riserva strategica di petrolio. Stiamo affrontando una prova reale e ci vorrà del tempo, ma alla fine Putin non riuscirà a spegnere l'amore dei popoli per la libertà.",
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/summarize.html
a4db2cc4fcc2-6
"\n\nIl Presidente Biden ha lottato per passare l'American Rescue Plan per aiutare le persone che soffrivano a causa della pandemia. Il piano ha fornito sollievo economico immediato a milioni di americani, ha aiutato a mettere cibo sulla loro tavola, a mantenere un tetto sopra le loro teste e a ridurre il costo dell'assicurazione sanitaria. Il piano ha anche creato più di 6,5 milioni di nuovi posti di lavoro, il più alto numero di posti di lavoro creati in un anno nella storia degli Stati Uniti. Il Presidente Biden ha anche firmato la legge bipartitica sull'infrastruttura, la più ampia iniziativa di ricostruzione della storia degli Stati Uniti. Il piano prevede di modernizzare le strade, gli aeroporti, i porti e le vie navigabili in"], 'output_text': "\n\nIl Presidente Biden sta lavorando per aiutare le persone che soffrono a causa della pandemia attraverso l'American Rescue Plan e la legge bipartitica sull'infrastruttura. Gli Stati Uniti e i loro alleati stanno anche imponendo sanzioni economiche a Putin e tagliando l'accesso della Russia alla tecnologia. Stanno anche sequestrando yacht, appartamenti di lusso e jet privati di Putin e fornendo più di un miliardo di dollari in assistenza all'Ucraina. Alla fine, Putin non riuscirà a spegnere l'amore dei popoli per la libertà."} The refine Chain# This sections shows results of using the refine Chain to do summarization.
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/summarize.html
a4db2cc4fcc2-7
The refine Chain#
This section shows results of using the refine Chain to do summarization.
chain = load_summarize_chain(llm, chain_type="refine")
chain.run(docs)
"\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. In addition, the U.S. has passed the American Rescue Plan to provide immediate economic relief for tens of millions of Americans, and the Bipartisan Infrastructure Law to rebuild America and create jobs. This investment will"
Intermediate Steps
We can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_intermediate_steps variable.
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True)
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/summarize.html
a4db2cc4fcc2-8
chain({"input_documents": docs}, return_only_outputs=True) {'refine_steps': [" In response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains.", "\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. Putin's war on Ukraine has left Russia weaker and the rest of the world stronger, with the world uniting in support of democracy and peace.",
"\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. In addition, the U.S. has passed the American Rescue Plan to provide immediate economic relief for tens of millions of Americans, and the Bipartisan Infrastructure Law to rebuild America and create jobs. This includes investing"],
'output_text': "\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. In addition, the U.S. has passed the American Rescue Plan to provide immediate economic relief for tens of millions of Americans, and the Bipartisan Infrastructure Law to rebuild America and create jobs. This includes investing"} Custom Prompts You can also use your own prompts with this chain. In this example, we will respond in Italian. prompt_template = """Write a concise summary of the following: {text} CONCISE SUMMARY IN ITALIAN:""" PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"]) refine_template = ( "Your job is to produce a final summary\n" "We have provided an existing summary up to a certain point: {existing_answer}\n" "We have the opportunity to refine the existing summary" "(only if needed) with some more context below.\n" "------------\n"
"------------\n" "{text}\n" "------------\n" "Given the new context, refine the original summary in Italian" "If the context isn't useful, return the original summary." ) refine_prompt = PromptTemplate( input_variables=["existing_answer", "text"], template=refine_template, ) chain = load_summarize_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True, question_prompt=PROMPT, refine_prompt=refine_prompt) chain({"input_documents": docs}, return_only_outputs=True) {'intermediate_steps': ["\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia e bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi.",
"\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia, bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale e chiudendo lo spazio aereo americano a tutti i voli russi. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi. Stiamo fornendo più di un miliardo di dollari in assistenza diretta all'Ucraina e fornendo assistenza militare,",
"\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia, bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale e chiudendo lo spazio aereo americano a tutti i voli russi. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi. Stiamo fornendo più di un miliardo di dollari in assistenza diretta all'Ucraina e fornendo assistenza militare."],
'output_text': "\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia, bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale e chiudendo lo spazio aereo americano a tutti i voli russi. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi. Stiamo fornendo più di un miliardo di dollari in assistenza diretta all'Ucraina e fornendo assistenza militare."} previous Question Answering next Vector DB Question/Answering Contents Prepare Data Quickstart The stuff Chain The map_reduce Chain The refine Chain By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/summarize.html
3e9820eff9cd-0
.ipynb .pdf
Vector DB Question/Answering
Contents
Chain Type
Custom Prompts
Return Source Documents

Vector DB Question/Answering#
This example showcases question answering over a vector database.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain import OpenAI, VectorDBQA
from langchain.document_loaders import TextLoader

loader = TextLoader('../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)

Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient.

qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type="stuff", vectorstore=docsearch)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)

" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice and federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
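Under the hood, this is roughly a similarity search followed by a question answering chain over the retrieved documents. A minimal sketch of the equivalent manual steps (illustrative only, not the class internals):

from langchain.chains.question_answering import load_qa_chain

# Retrieve the most relevant chunks for the query, then answer over just those chunks.
relevant_docs = docsearch.similarity_search(query)
manual_chain = load_qa_chain(OpenAI(), chain_type="stuff")
manual_chain.run(input_documents=relevant_docs, question=query)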
Chain Type#
You can easily specify different chain types to load and use in the VectorDBQA chain. For a more detailed walkthrough of these types, please see this notebook.

There are two ways to load different chain types. First, you can specify the chain type argument in the from_chain_type method. This allows you to pass in the name of the chain type you want to use. For example, in the example below we change the chain type to map_reduce.

qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type="map_reduce", vectorstore=docsearch)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)

" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

The above way allows you to really simply change the chain_type, but it doesn't provide a ton of flexibility over parameters to that chain type. If you want to control those parameters, you can load the chain directly (as you did in this notebook) and then pass that directly to the VectorDBQA chain with the combine_documents_chain parameter. For example:

from langchain.chains.question_answering import load_qa_chain
qa_chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
qa = VectorDBQA(combine_documents_chain=qa_chain, vectorstore=docsearch)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
query = "What did the president say about Ketanji Brown Jackson" qa.run(query) " The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans." Custom Prompts# You can pass in custom prompts to do question answering. These prompts are the same prompts as you can pass into the base question answering chain from langchain.prompts import PromptTemplate prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. {context} Question: {question} Answer in Italian:""" PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"] ) chain_type_kwargs = {"prompt": PROMPT} qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type="stuff", vectorstore=docsearch, chain_type_kwargs=chain_type_kwargs) query = "What did the president say about Ketanji Brown Jackson" qa.run(query)
query = "What did the president say about Ketanji Brown Jackson" qa.run(query) " Il Presidente ha detto che Ketanji Brown Jackson è uno dei pensatori legali più importanti del nostro Paese, che continuerà l'eccellente eredità di giustizia Breyer. È un ex principale litigante in pratica privata, un ex difensore federale pubblico e appartiene a una famiglia di insegnanti e poliziotti delle scuole pubbliche. È un costruttore di consenso che ha ricevuto un ampio supporto da parte di Fraternal Order of Police e giudici designati da democratici e repubblicani." Return Source Documents# Additionally, we can return the source documents used to answer the question by specifying an optional parameter when constructing the chain. qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type="stuff", vectorstore=docsearch, return_source_documents=True) query = "What did the president say about Ketanji Brown Jackson" result = qa({"query": query}) result["result"] " The president said that Ketanji Brown Jackson is one of our nation's top legal minds and that she will continue Justice Breyer's legacy of excellence." result["source_documents"]
result["source_documents"] [Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \n\nWe cannot let this happen. \n\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='As I’ve told Xi Jinping, it is never a good bet to bet against the American people. \n\nWe’ll create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. \n\nAnd we’ll do it all to withstand the devastating effects of the climate crisis and promote environmental justice. \n\nWe’ll build a national network of 500,000 electric vehicle charging stations, begin to replace poisonous lead pipes—so every child—and every American—has clean water to drink at home and at school, provide affordable high-speed internet for every American—urban, suburban, rural, and tribal communities. \n\n4,000 projects have already been announced. \n\nAnd tonight, I’m announcing that this year we will start fixing over 65,000 miles of highway and 1,500 bridges in disrepair. \n\nWhen we use taxpayer dollars to rebuild America – we are going to Buy American: buy American products to support American jobs.', lookup_str='', metadata={}, lookup_index=0)]
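Since each returned source is a Document, a natural follow-up is to print the answer together with a short preview of each supporting passage. A small sketch (the 100-character preview length is arbitrary; metadata is empty here only because this loader did not attach any):

print(result["result"])
for doc in result["source_documents"]:
    # metadata would hold e.g. a file path or URL if the loader attached one
    print(doc.metadata, doc.page_content[:100])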
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/vector_db_qa.html
c04f843b7992-0
.ipynb .pdf
Vector DB Text Generation
Contents
Prepare Data
Set Up Vector DB
Set Up LLM Chain with Custom Prompt
Generate Text

Vector DB Text Generation#
This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation.

Prepare Data#
First, we prepare the data. For this example, we fetch a documentation site that consists of markdown files hosted on Github and split them into small enough Documents.

from langchain.llms import OpenAI
from langchain.docstore.document import Document
import requests
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.prompts import PromptTemplate
import pathlib
import subprocess
import tempfile

def get_github_docs(repo_owner, repo_name):
    with tempfile.TemporaryDirectory() as d:
        subprocess.check_call(
            f"git clone --depth 1 https://github.com/{repo_owner}/{repo_name}.git .",
            cwd=d,
            shell=True,
        )
        git_sha = (
            subprocess.check_output("git rev-parse HEAD", shell=True, cwd=d)
            .decode("utf-8")
            .strip()
        )
        repo_path = pathlib.Path(d)
        markdown_files = list(repo_path.glob("*/*.md")) + list(
            repo_path.glob("*/*.mdx")
        )
        for markdown_file in markdown_files:
            with open(markdown_file, "r") as f:
                relative_path = markdown_file.relative_to(repo_path)
                github_url = f"https://github.com/{repo_owner}/{repo_name}/blob/{git_sha}/{relative_path}"
                yield Document(page_content=f.read(), metadata={"source": github_url})

sources = get_github_docs("yirenlu92", "deno-manual-forked")

source_chunks = []
splitter = CharacterTextSplitter(separator=" ", chunk_size=1024, chunk_overlap=0)
for source in sources:
    for chunk in splitter.split_text(source.page_content):
        source_chunks.append(Document(page_content=chunk, metadata=source.metadata))

Cloning into '.'...

Set Up Vector DB#
Now that we have the documentation content in chunks, let's put all this information in a vector index for easy retrieval.

search_index = Chroma.from_documents(source_chunks, OpenAIEmbeddings())

Set Up LLM Chain with Custom Prompt#
Next, let's set up a simple LLM chain but give it a custom prompt for blog post generation. Note that the custom prompt is parameterized and takes two inputs: context, which will be the documents fetched from the vector search, and topic, which is given by the user.

from langchain.chains import LLMChain
prompt_template = """Use the context below to write a 400 word blog post about the topic below:
    Context: {context}
    Topic: {topic}
    Blog post:"""

PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "topic"]
)

llm = OpenAI(temperature=0)
chain = LLMChain(llm=llm, prompt=PROMPT)
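Before wiring the chain into retrieval, it can help to sanity-check it on a hand-written context. A quick sketch (the context string below is made up for illustration):

# predict() maps keyword arguments onto the prompt's input variables
chain.predict(
    context="Deno.env provides getter and setter methods for environment variables.",
    topic="environment variables",
)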
Generate Text#
Finally, we write a function to apply our inputs to the chain. The function takes an input parameter topic. We find the documents in the vector index that correspond to that topic, and use them as additional context in our simple LLM chain.

def generate_blog_post(topic):
    docs = search_index.similarity_search(topic, k=4)
    inputs = [{"context": doc.page_content, "topic": topic} for doc in docs]
    print(chain.apply(inputs))

generate_blog_post("environment variables")
[{'text': '\n\nEnvironment variables are a great way to store and access sensitive information in your Deno applications. Deno offers built-in support for environment variables with `Deno.env`, and you can also use a `.env` file to store and access environment variables.\n\nUsing `Deno.env` is simple. It has getter and setter methods, so you can easily set and retrieve environment variables. For example, you can set the `FIREBASE_API_KEY` and `FIREBASE_AUTH_DOMAIN` environment variables like this:\n\n```ts\nDeno.env.set("FIREBASE_API_KEY", "examplekey123");\nDeno.env.set("FIREBASE_AUTH_DOMAIN", "firebasedomain.com");\n\nconsole.log(Deno.env.get("FIREBASE_API_KEY")); // examplekey123\nconsole.log(Deno.env.get("FIREBASE_AUTH_DOMAIN")); // firebasedomain.com\n```\n\nYou can also store environment variables in a `.env` file. This is a great'},
{'text': '\n\nEnvironment variables are a powerful tool for managing configuration settings in a program. They allow us to set values that can be used by the program, without having to hard-code them into the code.
This makes it easier to change settings without having to modify the code.\n\nIn Deno, environment variables can be set in a few different ways. The most common way is to use the `VAR=value` syntax. This will set the environment variable `VAR` to the value `value`. This can be used to set any number of environment variables before running a command. For example, if we wanted to set the environment variable `VAR` to `hello` before running a Deno command, we could do so like this:\n\n```\nVAR=hello deno run main.ts\n```\n\nThis will set the environment variable `VAR` to `hello` before running the command. We can then access this variable in our code using the `Deno.env.get()` function. For example, if we ran the following command:\n\n```\nVAR=hello && deno eval "console.log(\'Deno: \' + Deno.env.get(\'VAR'},
{'text': '\n\nEnvironment variables are a powerful tool for developers, allowing them to store and access data without having to hard-code it into their applications.
In Deno, you can access environment variables using the `Deno.env.get()` function.\n\nFor example, if you wanted to access the `HOME` environment variable, you could do so like this:\n\n```js\n// env.js\nDeno.env.get("HOME");\n```\n\nWhen running this code, you\'ll need to grant the Deno process access to environment variables. This can be done by passing the `--allow-env` flag to the `deno run` command. You can also specify which environment variables you want to grant access to, like this:\n\n```shell\n# Allow access to only the HOME env var\ndeno run --allow-env=HOME env.js\n```\n\nIt\'s important to note that environment variables are case insensitive on Windows, so Deno also matches them case insensitively (on Windows only).\n\nAnother thing to be aware of when using environment variables is subprocess permissions. Subprocesses are powerful and can access system resources regardless of the permissions you granted to the Den'},
{'text': '\n\nEnvironment variables are an important part of any programming language,
and Deno is no exception. Deno is a secure JavaScript and TypeScript runtime built on the V8 JavaScript engine, and it recently added support for environment variables. This feature was added in Deno version 1.6.0, and it is now available for use in Deno applications.\n\nEnvironment variables are used to store information that can be used by programs. They are typically used to store configuration information, such as the location of a database or the name of a user. In Deno, environment variables are stored in the `Deno.env` object. This object is similar to the `process.env` object in Node.js, and it allows you to access and set environment variables.\n\nThe `Deno.env` object is a read-only object, meaning that you cannot directly modify the environment variables. Instead, you must use the `Deno.env.set()` function to set environment variables. This function takes two arguments: the name of the environment variable and the value to set it to. For example, if you wanted to set the `FOO`
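Note that generate_blog_post produces one candidate post per retrieved chunk. A common variation, sketched below under the same setup (this is not from the original notebook), is to join the top chunks into a single context and make one generation call:

def generate_blog_post_combined(topic):
    # Join the top matching chunks into one context string and
    # generate a single post instead of one per chunk.
    docs = search_index.similarity_search(topic, k=4)
    context = "\n\n".join(doc.page_content for doc in docs)
    print(chain.predict(context=context, topic=topic))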
https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/vector_db_text_generation.html