Arxiv#
-> **Question**: How does Compositional Reasoning with Large Language Models work?
**Answer**: Compositional reasoning with large language models refers to the ability of these models to correctly identify and represent complex concepts by breaking them down into smaller, more basic parts and combining them in a structured way. This involves understanding the syntax and semantics of language and using that understanding to build up more complex meanings from simpler ones.
In the context of the paper "Does CLIP Bind Concepts? Probing Compositionality in Large Image Models", the authors focus specifically on the ability of a large pretrained vision and language model (CLIP) to encode compositional concepts and to bind variables in a structure-sensitive way. They examine CLIP's ability to compose concepts in a single-object setting, as well as in situations where concept binding is needed.
The authors situate their work within the tradition of research on compositional distributional semantics models (CDSMs), which seek to bridge the gap between distributional models and formal semantics by building architectures which operate over vectors yet still obey traditional theories of linguistic composition. They compare the performance of CLIP with several architectures from research on CDSMs to evaluate its ability to encode and reason about compositional concepts.
questions = [
    "What are Heat-bath random walks with Markov base? Include references to answer.",
]
chat_history = []
for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result['answer']))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
-> **Question**: What are Heat-bath random walks with Markov base? Include references to answer.
**Answer**: Heat-bath random walks with Markov base (HB-MB) is a class of stochastic processes that have been studied in the field of statistical mechanics and condensed matter physics. In these processes, a particle moves in a lattice by making a transition to a neighboring site, which is chosen according to a probability distribution that depends on the energy of the particle and the energy of its surroundings.
The HB-MB process was introduced by Bortz, Kalos, and Lebowitz in 1975 as a way to simulate the dynamics of interacting particles in a lattice at thermal equilibrium. The method has been used to study a variety of physical phenomena, including phase transitions, critical behavior, and transport properties.
References:
Bortz, A. B., Kalos, M. H., & Lebowitz, J. L. (1975). A new algorithm for Monte Carlo simulation of Ising spin systems. Journal of Computational Physics, 17(1), 10-18.
Binder, K., & Heermann, D. W. (2010). Monte Carlo simulation in statistical physics: an introduction. Springer Science & Business Media.
Databerry#
The Databerry platform brings data from anywhere (Datasources: Text, PDF, Word, PowerPoint, Excel, Notion, Airtable, Google Sheets, etc.) into Datastores (containers of multiple Datasources).
Your Datastores can then be connected to ChatGPT via Plugins, or to any other Large Language Model (LLM) via the Databerry API.
This notebook shows how to use Databerry’s retriever.
First, you will need to sign up for Databerry, create a datastore, add some data, and get your datastore API endpoint URL. You will also need your API key.
Query#
Now that our index is set up, we can set up a retriever and start querying it.
from langchain.retrievers import DataberryRetriever
retriever = DataberryRetriever(
    datastore_url="https://clg1xg2h80000l708dymr0fxc.databerry.ai/query",
    # api_key="DATABERRY_API_KEY",  # optional if datastore is public
    # top_k=10  # optional
)
retriever.get_relevant_documents("What is Daftpage?")
[Document(page_content='✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramGetting StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!DaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord', metadata={'source': 'https:/daftpage.com/help/getting-started', 'score': 0.8697265}),
Document(page_content="✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramHelp CenterWelcome to Daftpage’s help center—the one-stop shop for learning everything about building websites with Daftpage.Daftpage is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at [email protected] the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.86570895}),
Document(page_content=" is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at [email protected] the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.8645384})]
Time Weighted VectorStore#
This retriever uses a combination of semantic similarity and a time decay.
The algorithm for scoring documents is:
semantic_similarity + (1.0 - decay_rate) ** hours_passed
Notably, hours_passed refers to the hours passed since the object in the retriever was last accessed, not since it was created. This means that frequently accessed objects remain “fresh.”
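To make the formula concrete, here is a minimal sketch of the score computation (illustrative only; the retriever's internal implementation may differ in its details):
# Minimal sketch of the scoring formula above (illustrative, not the library's actual code).
def combined_score(semantic_similarity: float, decay_rate: float, hours_passed: float) -> float:
    # The recency bonus decays multiplicatively for every hour since the document was last accessed.
    recency = (1.0 - decay_rate) ** hours_passed
    return semantic_similarity + recency

# With a near-zero decay rate, a day-old document keeps nearly all of its recency bonus:
print(combined_score(0.8, 1e-25, 24))   # ~1.8
# With decay_rate=0.999, the recency bonus vanishes within hours:
print(combined_score(0.8, 0.999, 24))   # ~0.8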
import faiss
from datetime import datetime, timedelta
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.schema import Document
from langchain.vectorstores import FAISS
Low Decay Rate#
A low decay rate (here, to be extreme, we set it close to 0) means memories will be “remembered” for longer. A decay rate of 0 means memories are never forgotten, making this retriever equivalent to a plain vector lookup.
# Define your embedding model
embeddings_model = OpenAIEmbeddings()
# Initialize the vectorstore as empty
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})
retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=.0000000000000000000000001, k=1)
yesterday = datetime.now() - timedelta(days=1)
retriever.add_documents([Document(page_content="hello world", metadata={"last_accessed_at": yesterday})])
retriever.add_documents([Document(page_content="hello foo")]) | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/time_weighted_vectorstore.html |
['d7f85756-2371-4bdf-9140-052780a0f9b3']
# "Hello World" is returned first because it is most salient, and the decay rate is close to 0., meaning it's still recent enough
retriever.get_relevant_documents("hello world")
[Document(page_content='hello world', metadata={'last_accessed_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 678341), 'created_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 279596), 'buffer_idx': 0})]
High Decay Rate#
With a high decay rate (e.g., several 9’s), the recency score quickly goes to 0! If you set this all the way to 1, recency is 0 for all objects, once again making this equivalent to a vector lookup.
# Define your embedding model
embeddings_model = OpenAIEmbeddings()
# Initialize the vectorstore as empty
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})
retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=.999, k=1)
yesterday = datetime.now() - timedelta(days=1)
retriever.add_documents([Document(page_content="hello world", metadata={"last_accessed_at": yesterday})])
retriever.add_documents([Document(page_content="hello foo")])
['40011466-5bbe-4101-bfd1-e22e7f505de2']
dab2d16b4c16-2 | # "Hello Foo" is returned first because "hello world" is mostly forgotten
retriever.get_relevant_documents("hello world")
[Document(page_content='hello foo', metadata={'last_accessed_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 494798), 'created_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 178722), 'buffer_idx': 1})]
Virtual Time#
Using some utils in LangChain, you can mock out the time component.
from langchain.utils import mock_now
import datetime
# Notice the last access time is that date time
with mock_now(datetime.datetime(2011, 2, 3, 10, 11)):
    print(retriever.get_relevant_documents("hello world"))
[Document(page_content='hello world', metadata={'last_accessed_at': MockDateTime(2011, 2, 3, 10, 11), 'created_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 279596), 'buffer_idx': 0})]
SVM#
Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection.
This notebook goes over how to use a retriever that, under the hood, uses an SVM from the scikit-learn package.
Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb
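The core idea, sketched below under stated assumptions (simplified from the linked notebook; not the exact SVMRetriever internals), is to treat the query embedding as the single positive example and every document embedding as a negative, then rank documents by the decision function of the resulting linear SVM:
import numpy as np
from sklearn import svm

def svm_rank(query_embedding: np.ndarray, doc_embeddings: np.ndarray) -> np.ndarray:
    # Stack the query on top of the documents; label only the query as positive.
    x = np.concatenate([query_embedding[None, :], doc_embeddings])
    y = np.zeros(len(x))
    y[0] = 1
    clf = svm.LinearSVC(class_weight="balanced", C=0.1, max_iter=10000)
    clf.fit(x, y)
    # Rank documents by their signed distance to the separating hyperplane.
    scores = clf.decision_function(x)[1:]  # drop the query itself
    return np.argsort(-scores)  # document indices, best match first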
#!pip install scikit-learn
#!pip install lark
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.retrievers import SVMRetriever
from langchain.embeddings import OpenAIEmbeddings
Create New Retriever with Texts#
retriever = SVMRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings())
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={}),
Document(page_content='hello', metadata={}),
Document(page_content='world', metadata={})]
Self-querying with Chroma#
Chroma is a database for building AI applications with embeddings.
In this notebook we’ll demo the SelfQueryRetriever wrapped around a Chroma vector store.
Creating a Chroma vectorstore#
First we’ll want to create a Chroma VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
NOTE: The self-query retriever requires you to have lark installed (pip install lark). We also need the chromadb package.
#!pip install lark
#!pip install chromadb
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.schema import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
embeddings = OpenAIEmbeddings()
docs = [
    Document(page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}),
    Document(page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}),
    Document(page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}),
    Document(page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}),
    Document(page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}),
    Document(page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={"year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction"}),
]
vectorstore = Chroma.from_documents(docs, embeddings)
Using embedded DuckDB without persistence: data will be transient
Creating our self-querying retriever#
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating",
        description="A 1-10 rating for the movie",
        type="float",
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)
Testing it out#
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaur' filter=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}),
Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2})]
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig')
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]
# This example specifies a composite filter
retriever.get_relevant_documents("What's a highly rated (above 8.5) science fiction film?")
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)])
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]
# This example specifies a query and composite filter
retriever.get_relevant_documents("What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated") | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html |
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')])
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
Filter k#
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
query='dinosaur' filter=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}),
Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2})]
Vespa#
Vespa is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query.
This notebook shows how to use Vespa.ai as a LangChain retriever.
In order to create a retriever, we use pyvespa to create a connection to a Vespa service.
#!pip install pyvespa
from vespa.application import Vespa
vespa_app = Vespa(url="https://doc-search.vespa.oath.cloud")
This creates a connection to a Vespa service, here the Vespa documentation search service.
Using the pyvespa package, you can also connect to a Vespa Cloud instance or a local Docker instance.
After connecting to the service, you can set up the retriever:
from langchain.retrievers.vespa_retriever import VespaRetriever
vespa_query_body = {
    "yql": "select content from paragraph where userQuery()",
    "hits": 5,
    "ranking": "documentation",
    "locale": "en-us",
}
vespa_content_field = "content"
retriever = VespaRetriever(vespa_app, vespa_query_body, vespa_content_field)
This sets up a LangChain retriever that fetches documents from the Vespa application.
Here, up to 5 results are retrieved from the content field in the paragraph document type, using documentation as the ranking method. The userQuery() is replaced with the actual query passed from LangChain.
Please refer to the pyvespa documentation for more information.
Now you can return the results and continue using the results in LangChain.
retriever.get_relevant_documents("what is vespa?") | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/vespa.html |
Cohere Reranker#
Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.
This notebook shows how to use Cohere’s rerank endpoint in a retriever. This builds on top of ideas in the ContextualCompressionRetriever.
#!pip install cohere
#!pip install faiss
# OR (depending on Python version)
#!pip install faiss-cpu
# get a new token: https://dashboard.cohere.ai/
import os
import getpass
os.environ['COHERE_API_KEY'] = getpass.getpass('Cohere API Key:')
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
# Helper function for printing docs
def pretty_print_docs(docs):
print(f"\n{'-' * 100}\n".join([f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)]))
Set up the base vector store retriever#
Let’s start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs.
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_loaders import TextLoader
from langchain.vectorstores import FAISS
documents = TextLoader('../../../state_of_the_union.txt').load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)
texts = text_splitter.split_documents(documents)
retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever(search_kwargs={"k": 20})
query = "What did the president say about Ketanji Brown Jackson"
docs = retriever.get_relevant_documents(query)
pretty_print_docs(docs)
Document 1:
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.
----------------------------------------------------------------------------------------------------
Document 3:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
----------------------------------------------------------------------------------------------------
Document 4:
He met the Ukrainian people.
From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.
In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.
----------------------------------------------------------------------------------------------------
Document 5:
I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
I’ve worked on these issues a long time.
I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
So let’s not abandon our streets. Or choose between safety and equal justice.
----------------------------------------------------------------------------------------------------
Document 6:
Vice President Harris and I ran for office with a new economic vision for America.
Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up
and the middle out, not from the top down.
Because we know that when the middle class grows, the poor have a ladder up and the wealthy do very well.
America used to have the best roads, bridges, and airports on Earth.
Now our infrastructure is ranked 13th in the world.
----------------------------------------------------------------------------------------------------
Document 7:
And tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud.
By the end of this year, the deficit will be down to less than half what it was before I took office.
The only president ever to cut the deficit by more than one trillion dollars in a single year.
Lowering your costs also means demanding more competition.
I’m a capitalist, but capitalism without competition isn’t capitalism.
It’s exploitation—and it drives up prices.
----------------------------------------------------------------------------------------------------
Document 8:
For the past 40 years we were told that if we gave tax breaks to those at the very top, the benefits would trickle down to everyone else.
But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century.
Vice President Harris and I ran for office with a new economic vision for America.
----------------------------------------------------------------------------------------------------
Document 9:
All told, we created 369,000 new manufacturing jobs in America just last year.
Powered by people I’ve met like JoJo Burgess, from generations of union steelworkers from Pittsburgh, who’s here with us tonight.
As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.”
It’s time.
But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills.
----------------------------------------------------------------------------------------------------
Document 10:
I’m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve.
And fourth, let’s end cancer as we know it.
This is personal to me and Jill, to Kamala, and to so many of you.
Cancer is the #2 cause of death in America–second only to heart disease.
----------------------------------------------------------------------------------------------------
Document 11:
He will never extinguish their love of freedom. He will never weaken the resolve of the free world.
We meet tonight in an America that has lived through two of the hardest years this nation has ever faced.
The pandemic has been punishing.
And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more.
I understand.
----------------------------------------------------------------------------------------------------
Document 12:
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.
Last year COVID-19 kept us apart. This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
And with an unwavering resolve that freedom will always triumph over tyranny.
----------------------------------------------------------------------------------------------------
Document 13:
I know.
One of those soldiers was my son Major Beau Biden.
We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops.
But I’m committed to finding out everything we can.
Committed to military families like Danielle Robinson from Ohio.
The widow of Sergeant First Class Heath Robinson.
He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq.
----------------------------------------------------------------------------------------------------
Document 14:
And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things.
So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together.
First, beat the opioid epidemic.
There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery.
----------------------------------------------------------------------------------------------------
Document 15:
Third, support our veterans.
Veterans are the best of us.
I’ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home.
My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free.
Our troops in Iraq and Afghanistan faced many dangers.
----------------------------------------------------------------------------------------------------
Document 16:
When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we haven’t done in a long time: build a better America.
For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation.
And I know you’re tired, frustrated, and exhausted.
But I also know this.
----------------------------------------------------------------------------------------------------
Document 17:
Now is the hour.
Our moment of responsibility.
Our test of resolve and conscience, of history itself.
It is in this moment that our character is formed. Our purpose is found. Our future is forged.
Well I know this nation.
We will meet the test.
To protect freedom and liberty, to expand fairness and opportunity.
We will save democracy.
As hard as these times have been, I am more optimistic about America today than I have been my whole life.
----------------------------------------------------------------------------------------------------
Document 18:
He didn’t know how to stop fighting, and neither did she.
Through her pain she found purpose to demand we do better.
Tonight, Danielle—we are.
The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits.
And tonight, I’m announcing we’re expanding eligibility to veterans suffering from nine respiratory cancers.
----------------------------------------------------------------------------------------------------
Document 19:
I understand.
I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it.
That’s why one of the first things I did as President was fight to pass the American Rescue Plan.
Because people were hurting. We needed to act, and we did.
Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.
----------------------------------------------------------------------------------------------------
Document 20:
So let’s not abandon our streets. Or choose between safety and equal justice.
Let’s come together to protect our communities, restore trust, and hold law enforcement accountable.
That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.
Doing reranking with CohereRerank#
Now let’s wrap our base retriever with a ContextualCompressionRetriever. We’ll add a CohereRerank compressor, which uses the Cohere rerank endpoint to rerank the returned results.
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank
llm = OpenAI(temperature=0)
compressor = CohereRerank()
compression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")
pretty_print_docs(compressed_docs)
Document 1:
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
I’ve worked on these issues a long time.
I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
So let’s not abandon our streets. Or choose between safety and equal justice.
----------------------------------------------------------------------------------------------------
Document 3:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
You can of course use this retriever within a QA pipeline.
from langchain.chains import RetrievalQA
chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), retriever=compression_retriever)
chain({"query": query})
{'query': 'What did the president say about Ketanji Brown Jackson',
'result': " The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she is a consensus builder who has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."}
Azure Cognitive Search#
Azure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you’ll work with the following capabilities:
A search engine for full text search over a search index containing user-owned content
Rich indexing, with lexical analysis and optional AI enrichment for content extraction and transformation
Rich query syntax for text search, fuzzy search, autocomplete, geo-search and more
Programmability through REST APIs and client libraries in Azure SDKs
Azure integration at the data layer, machine learning layer, and AI (Cognitive Services)
This notebook shows how to use Azure Cognitive Search (ACS) within LangChain.
Set up Azure Cognitive Search#
To set up ACS, please follow the instructions here.
Please note
the name of your ACS service,
the name of your ACS index,
your API key.
Your API key can be either an Admin or a Query key; since we only read data, it is recommended to use a Query key.
Using the Azure Cognitive Search Retriever#
import os
from langchain.retrievers import AzureCognitiveSearchRetriever
Set Service Name, Index Name and API key as environment variables (alternatively, you can pass them as arguments to AzureCognitiveSearchRetriever).
os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = "<YOUR_ACS_SERVICE_NAME>"
os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] ="<YOUR_ACS_INDEX_NAME>" | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/azure_cognitive_search.html |
os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = "<YOUR_API_KEY>"
Create the Retriever
retriever = AzureCognitiveSearchRetriever(content_key="content")
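Alternatively, per the note above, you can pass these values directly as constructor arguments instead of environment variables. The sketch below assumes the argument names mirror the environment variables (an assumption; check the retriever's signature for your version):
# Assumed argument names (service_name, index_name, api_key); verify against your version of LangChain.
retriever = AzureCognitiveSearchRetriever(
    service_name="<YOUR_ACS_SERVICE_NAME>",
    index_name="<YOUR_ACS_INDEX_NAME>",
    api_key="<YOUR_API_KEY>",
    content_key="content",
)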
Now you can retrieve documents from Azure Cognitive Search.
retriever.get_relevant_documents("what is langchain")
Zep#
Zep - A long-term memory store for LLM applications.
More on Zep:
Zep stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs.
Key Features:
Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.
Auto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.
Vector search over memories, with messages automatically embedded on creation.
Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.
Python and JavaScript SDKs.
Zep’s Go Extractor model is easily extensible, with a simple, clean interface available to build new enrichment functionality, such as summarizers, entity extractors, embedders, and more.
Zep project: getzep/zep
Retriever Example#
This notebook demonstrates how to search historical chat message histories using the Zep Long-term Memory Store.
We’ll demonstrate:
Adding conversation history to the Zep memory store.
Vector search over the conversation history.
from langchain.memory.chat_message_histories import ZepChatMessageHistory
from langchain.schema import HumanMessage, AIMessage
from uuid import uuid4
# Set this to your Zep server URL
ZEP_API_URL = "http://localhost:8000"
Initialize the Zep Chat Message History Class and add a chat message history to the memory store#
NOTE: Unlike other Retrievers, the content returned by the Zep Retriever is session/user specific. A session_id is required when instantiating the Retriever.
session_id = str(uuid4()) # This is a unique identifier for the user/session
# Set up Zep Chat History. We'll use this to add chat histories to the memory store
zep_chat_history = ZepChatMessageHistory(
    session_id=session_id,
    url=ZEP_API_URL,
)
# Preload some messages into the memory. The default message window is 12 messages. We want to push beyond this to demonstrate auto-summarization.
test_history = [
    {"role": "human", "content": "Who was Octavia Butler?"},
    {
        "role": "ai",
        "content": (
            "Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American"
            " science fiction author."
        ),
    },
    {"role": "human", "content": "Which books of hers were made into movies?"},
    {
        "role": "ai",
        "content": (
            "The most well-known adaptation of Octavia Butler's work is the FX series"
            " Kindred, based on her novel of the same name."
        ),
    },
    {"role": "human", "content": "Who were her contemporaries?"},
    {
        "role": "ai",
        "content": (
            "Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R."
            " Delany, and Joanna Russ."
        ),
    },
    {"role": "human", "content": "What awards did she win?"},
    {
        "role": "ai",
        "content": (
            "Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur"
            " Fellowship."
        ),
    },
    {
        "role": "human",
        "content": "Which other women sci-fi writers might I want to read?",
    },
    {
        "role": "ai",
        "content": "You might want to read Ursula K. Le Guin or Joanna Russ.",
    },
    {
        "role": "human",
        "content": (
            "Write a short synopsis of Butler's book, Parable of the Sower. What is it"
            " about?"
        ),
    },
    {
        "role": "ai",
        "content": (
            "Parable of the Sower is a science fiction novel by Octavia Butler,"
            " published in 1993. It follows the story of Lauren Olamina, a young woman"
            " living in a dystopian future where society has collapsed due to"
            " environmental disasters, poverty, and violence."
        ),
    },
]
for msg in test_history:
    zep_chat_history.append(
        HumanMessage(content=msg["content"])
        if msg["role"] == "human"
        else AIMessage(content=msg["content"])
    )
Use the Zep Retriever to vector search over the Zep memory#
Zep provides native vector search over historical conversation memory. Embedding happens automatically.
NOTE: Embedding of messages occurs asynchronously, so the first query may not return results. Subsequent queries will return results as the embeddings are generated.
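Because of this, a simple polling loop can be useful; the sketch below is an assumed, illustrative pattern rather than part of the Zep API:
import time

def search_with_retry(retriever, query, attempts=5, delay=1.0):
    # Retry until the asynchronous embedding has caught up and results appear.
    for _ in range(attempts):
        docs = retriever.get_relevant_documents(query)
        if docs:
            return docs
        time.sleep(delay)
    return []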
from langchain.retrievers import ZepRetriever
zep_retriever = ZepRetriever(
    session_id=session_id,  # Ensure that you provide the session_id when instantiating the Retriever
    url=ZEP_API_URL,
    top_k=5,
)
await zep_retriever.aget_relevant_documents("Who wrote Parable of the Sower?")
[Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759001673780126, 'uuid': '3a82a02f-056e-4c6a-b960-67ebdf3b2b93', 'created_at': '2023-05-25T15:03:30.2041Z', 'role': 'human', 'token_count': 8}),
Document(page_content="Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.", metadata={'score': 0.7602262941130749, 'uuid': 'a2fc9c21-0897-46c8-bef7-6f5c0f71b04a', 'created_at': '2023-05-25T15:03:30.248065Z', 'role': 'ai', 'token_count': 27}), | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/zep_memorystore.html |
Document(page_content='Who were her contemporaries?', metadata={'score': 0.757553366415519, 'uuid': '41f9c41a-a205-41e1-b48b-a0a4cd943fc8', 'created_at': '2023-05-25T15:03:30.243995Z', 'role': 'human', 'token_count': 8}),
Document(page_content='Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American science fiction author.', metadata={'score': 0.7546211059317948, 'uuid': '34678311-0098-4f1a-8fd4-5615ac692deb', 'created_at': '2023-05-25T15:03:30.231427Z', 'role': 'ai', 'token_count': 31}),
Document(page_content='Which books of hers were made into movies?', metadata={'score': 0.7496714959247069, 'uuid': '18046c3a-9666-4d3e-b4f0-43d1394732b7', 'created_at': '2023-05-25T15:03:30.236837Z', 'role': 'human', 'token_count': 11})]
We can also use the Zep sync API to retrieve results:
zep_retriever.get_relevant_documents("Who wrote Parable of the Sower?")
[Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', metadata={'score': 0.8897321402776546, 'uuid': '1c09603a-52c1-40d7-9d69-29f26256029c', 'created_at': '2023-05-25T15:03:30.268257Z', 'role': 'ai', 'token_count': 56}),
Document(page_content="Write a short synopsis of Butler's book, Parable of the Sower. What is it about?", metadata={'score': 0.8857628682610436, 'uuid': 'f6706e8c-6c91-452f-8c1b-9559fd924657', 'created_at': '2023-05-25T15:03:30.265302Z', 'role': 'human', 'token_count': 23}),
Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759670375149477, 'uuid': '3a82a02f-056e-4c6a-b960-67ebdf3b2b93', 'created_at': '2023-05-25T15:03:30.2041Z', 'role': 'human', 'token_count': 8}),
Document(page_content="Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.", metadata={'score': 0.7602854653476563, 'uuid': 'a2fc9c21-0897-46c8-bef7-6f5c0f71b04a', 'created_at': '2023-05-25T15:03:30.248065Z', 'role': 'ai', 'token_count': 27}),
Document(page_content='You might want to read Ursula K. Le Guin or Joanna Russ.', metadata={'score': 0.7595293992240313, 'uuid': 'f22f2498-6118-4c74-8718-aa89ccd7e3d6', 'created_at': '2023-05-25T15:03:30.261198Z', 'role': 'ai', 'token_count': 18})]
ChatGPT Plugin#
OpenAI plugins connect ChatGPT to third-party applications. These plugins enable ChatGPT to interact with APIs defined by developers, enhancing ChatGPT’s capabilities and allowing it to perform a wide range of actions.
Plugins can allow ChatGPT to do things like:
Retrieve real-time information; e.g., sports scores, stock prices, the latest news, etc.
Retrieve knowledge-base information; e.g., company docs, personal notes, etc.
Perform actions on behalf of the user; e.g., booking a flight, ordering food, etc.
This notebook shows how to use the ChatGPT Retriever Plugin within LangChain.
# STEP 1: Load
# Load documents using LangChain's DocumentLoaders
# This is from https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/csv.html
from langchain.document_loaders.csv_loader import CSVLoader
loader = CSVLoader(file_path='../../document_loaders/examples/example_data/mlb_teams_2012.csv')
data = loader.load()
# STEP 2: Convert
# Convert Document to format expected by https://github.com/openai/chatgpt-retrieval-plugin
from typing import List
from langchain.docstore.document import Document
import json
def write_json(path: str, documents: List[Document]) -> None:
    results = [{"text": doc.page_content} for doc in documents]
    with open(path, "w") as f:
        json.dump(results, f, indent=2)
write_json("foo.json", data)
# STEP 3: Use
# Ingest this as you would any other json file in https://github.com/openai/chatgpt-retrieval-plugin/tree/main/scripts/process_json
Using the ChatGPT Retriever Plugin#
Okay, so we’ve created the ChatGPT Retriever Plugin, but how do we actually use it?
The below code walks through how to do that.
We want to use ChatGPTPluginRetriever so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.retrievers import ChatGPTPluginRetriever
retriever = ChatGPTPluginRetriever(url="http://0.0.0.0:8000", bearer_token="foo")
retriever.get_relevant_documents("alice's phone number")
[Document(page_content="This is Alice's phone number: 123-456-7890", lookup_str='', metadata={'id': '456_0', 'metadata': {'source': 'email', 'source_id': '567', 'url': None, 'created_at': '1609592400.0', 'author': 'Alice', 'document_id': '456'}, 'embedding': None, 'score': 0.925571561}, lookup_index=0),
Document(page_content='This is a document about something', lookup_str='', metadata={'id': '123_0', 'metadata': {'source': 'file', 'source_id': 'https://example.com/doc1', 'url': 'https://example.com/doc1', 'created_at': '1609502400.0', 'author': 'Alice', 'document_id': '123'}, 'embedding': None, 'score': 0.6987589}, lookup_index=0),
Document(page_content='Team: Angels "Payroll (millions)": 154.49 "Wins": 89', lookup_str='', metadata={'id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631_0', 'metadata': {'source': None, 'source_id': None, 'url': None, 'created_at': None, 'author': None, 'document_id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631'}, 'embedding': None, 'score': 0.697888613}, lookup_index=0)]
Self-querying#
In this notebook we’ll demo the SelfQueryRetriever, which, as the name suggests, has the ability to query itself. Specifically, given any natural language query, the retriever uses a query-constructing LLM chain to write a structured query and then applies that structured query to its underlying VectorStore. This allows the retriever to not only use the user-input query for semantic similarity comparison with the contents of stored documents, but also to extract filters from the user query on the metadata of stored documents and to execute those filters.
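Conceptually, the LLM chain turns the natural-language request into a semantic query plus an optional metadata filter. The hypothetical sketch below shows the rough shape of that structured query (names are illustrative; LangChain's actual query-constructor types, visible in the query=... filter=... debug output further down, are richer):
from dataclasses import dataclass
from typing import Optional

# Illustrative only: a rough stand-in for LangChain's real query-constructor types.
@dataclass
class Comparison:
    comparator: str  # e.g. "gt", "eq", "lt"
    attribute: str   # a metadata field, e.g. "rating"
    value: object

@dataclass
class StructuredQuery:
    query: str                    # free text used for semantic similarity search
    filter: Optional[Comparison]  # metadata predicate, or None

# "I want to watch a movie rated higher than 8.5" might be translated into:
structured = StructuredQuery(query=" ", filter=Comparison("gt", "rating", 8.5))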
Creating a Pinecone index#
First we’ll want to create a Pinecone VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
To use Pinecone, you need to have the pinecone package installed, and you must have an API key and an environment. Here are the installation instructions.
NOTE: The self-query retriever requires you to have lark package installed.
# !pip install lark
#!pip install pinecone-client
import os
import pinecone
pinecone.init(api_key=os.environ["PINECONE_API_KEY"], environment=os.environ["PINECONE_ENV"])
/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pinecone/index.py:4: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
from tqdm.autonotebook import tqdm
from langchain.schema import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
embeddings = OpenAIEmbeddings()
# create new index
pinecone.create_index("langchain-self-retriever-demo", dimension=1536)
docs = [
Document(page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": ["action", "science fiction"]}),
Document(page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}),
Document(page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}),
Document(page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}),
Document(page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}),
Document(page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={"year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": ["science fiction", "thriller"], "rating": 9.9})
]
vectorstore = Pinecone.from_documents(
docs, embeddings, index_name="langchain-self-retriever-demo"
)
Creating our self-querying retriever#
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
metadata_field_info=[
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating",
description="A 1-10 rating for the movie",
type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)
Testing it out#
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaur' filter=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': ['action', 'science fiction'], 'rating': 7.7, 'year': 1993.0}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}),
Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'director': 'Christopher Nolan', 'rating': 8.2, 'year': 2010.0})]
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig')
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'director': 'Greta Gerwig', 'rating': 8.3, 'year': 2019.0})]
# This example specifies a composite filter
retriever.get_relevant_documents("What's a highly rated (above 8.5) science fiction film?")
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)])
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})]
# This example specifies a query and composite filter
retriever.get_relevant_documents("What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated")
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990.0), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005.0), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')])
[Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0})]
Filter k#
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True
)
# This example only specifies a relevant query
retriever.get_relevant_documents("What are two movies about dinosaurs")
Weaviate Hybrid Search#
Weaviate is an open source vector database.
Hybrid search is a technique that combines multiple search algorithms to improve the accuracy and relevance of search results. It combines the best features of keyword-based search algorithms with vector search techniques.
The Hybrid search in Weaviate uses sparse and dense vectors to represent the meaning and context of search queries and documents.
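To build intuition for how the two signals can be fused, here is a minimal, illustrative sketch (not Weaviate’s actual implementation) in which the final score blends a keyword-based (sparse) score with a vector-similarity (dense) score. The alpha weighting below is an assumption for illustration, analogous in spirit to the alpha parameter Weaviate exposes for hybrid queries.
def hybrid_score(sparse_score: float, dense_score: float, alpha: float = 0.5) -> float:
    # Illustrative fusion only; Weaviate's internal scoring may differ.
    # alpha=1.0 relies purely on the dense (vector) score,
    # alpha=0.0 purely on the sparse (keyword) score.
    return alpha * dense_score + (1 - alpha) * sparse_score
hybrid_score(0.8, 0.6, alpha=0.75)
0.65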
This notebook shows how to use Weaviate hybrid search as a LangChain retriever.
Set up the retriever:
#!pip install weaviate-client
import weaviate
import os
WEAVIATE_URL = os.getenv("WEAVIATE_URL")
client = weaviate.Client(
url=WEAVIATE_URL,
auth_client_secret=weaviate.AuthApiKey(api_key=os.getenv("WEAVIATE_API_KEY")),
additional_headers={
"X-Openai-Api-Key": os.getenv("OPENAI_API_KEY"),
},
)
# client.schema.delete_all()
from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
from langchain.schema import Document
/workspaces/langchain/langchain/vectorstores/analyticdb.py:20: MovedIn20Warning: The ``declarative_base()`` function is now available as sqlalchemy.orm.declarative_base(). (deprecated since: 2.0) (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
Base = declarative_base() # type: Any
retriever = WeaviateHybridSearchRetriever(
client, index_name="LangChain", text_key="text"
)
Add some data:
docs = [
Document(
metadata={
"title": "Embracing The Future: AI Unveiled",
"author": "Dr. Rebecca Simmons",
},
page_content="A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.",
),
Document(
metadata={
"title": "Symbiosis: Harmonizing Humans and AI",
"author": "Prof. Jonathan K. Sterling",
},
page_content="Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.",
),
Document(
metadata={"title": "AI: The Ethical Quandary", "author": "Dr. Rebecca Simmons"},
page_content="In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.",
),
Document(
metadata={
"title": "Conscious Constructs: The Search for AI Sentience",
"author": "Dr. Samuel Cortez",
},
page_content="Dr. Cortez takes readers on a journey exploring the controversial topic of AI consciousness. The book provides compelling arguments for and against the possibility of true AI sentience.",
),
Document(
metadata={
"title": "Invisible Routines: Hidden AI in Everyday Life",
"author": "Prof. Jonathan K. Sterling",
},
page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.",
),
]
retriever.add_documents(docs)
['eda16d7d-437d-4613-84ae-c2e38705ec7a',
'04b501bf-192b-4e72-be77-2fbbe7e67ebf',
'18a1acdb-23b7-4482-ab04-a6c2ed51de77',
'88e82cc3-c020-4b5a-b3c6-ca7cf3fc6a04',
'f6abd9d5-32ed-46c4-bd08-f8d0f7c9fc95']
Do a hybrid search:
retriever.get_relevant_documents("the ethical implications of AI")
[Document(page_content='In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.', metadata={}),
Document(page_content='A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.', metadata={}),
Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={}), | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate-hybrid.html |
1e28517620d1-3 | Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={})]
Do a hybrid search with where filter:
retriever.get_relevant_documents(
"AI integration in society",
where_filter={
"path": ["author"],
"operator": "Equal",
"valueString": "Prof. Jonathan K. Sterling",
},
)
[Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={}),
Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={})]
TF-IDF#
TF-IDF means term-frequency times inverse document-frequency.
This notebook goes over how to use a retriever that, under the hood, uses TF-IDF via the scikit-learn package.
For more information on the details of TF-IDF see this blog post.
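As a rough sketch of the idea under the hood (assuming only that texts are ranked by cosine similarity of their TF-IDF vectors; the retriever’s internals may differ in detail), a similar ranking can be reproduced directly with scikit-learn:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
texts = ["foo", "bar", "world", "hello", "foo bar"]
vectorizer = TfidfVectorizer()
tfidf_matrix = vectorizer.fit_transform(texts)   # one TF-IDF vector per text
query_vector = vectorizer.transform(["foo"])     # embed the query in the same vocabulary
scores = cosine_similarity(query_vector, tfidf_matrix).flatten()
sorted(zip(scores, texts), reverse=True)         # most similar texts first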
# !pip install scikit-learn
from langchain.retrievers import TFIDFRetriever
Create New Retriever with Texts#
retriever = TFIDFRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"])
Create a New Retriever with Documents#
You can now create a new retriever with the documents you created.
from langchain.schema import Document
retriever = TFIDFRetriever.from_documents([Document(page_content="foo"), Document(page_content="bar"), Document(page_content="world"), Document(page_content="hello"), Document(page_content="foo bar")])
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={}),
Document(page_content='hello', metadata={}),
Document(page_content='world', metadata={})]
previous
SVM
next
Time Weighted VectorStore
Contents
Create New Retriever with Texts
Create a New Retriever with Documents
Use Retriever
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 07, 2023. | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/tf_idf.html |
Metal#
Metal is a managed service for ML Embeddings.
This notebook shows how to use Metal’s retriever.
First, you will need to sign up for Metal and get an API key. You can do so here.
# !pip install metal_sdk
from metal_sdk.metal import Metal
API_KEY = ""
CLIENT_ID = ""
INDEX_ID = ""
metal = Metal(API_KEY, CLIENT_ID, INDEX_ID)
Ingest Documents#
You only need to do this if you haven’t already set up an index
metal.index( {"text": "foo1"})
metal.index( {"text": "foo"})
{'data': {'id': '642739aa7559b026b4430e42',
'text': 'foo',
'createdAt': '2023-03-31T19:51:06.748Z'}}
Query#
Now that our index is set up, we can set up a retriever and start querying it.
from langchain.retrievers import MetalRetriever
retriever = MetalRetriever(metal, params={"limit": 2})
retriever.get_relevant_documents("foo1")
[Document(page_content='foo1', metadata={'dist': '1.19209289551e-07', 'id': '642739a17559b026b4430e40', 'createdAt': '2023-03-31T19:50:57.853Z'}),
Document(page_content='foo1', metadata={'dist': '4.05311584473e-06', 'id': '642738f67559b026b4430e3c', 'createdAt': '2023-03-31T19:48:06.769Z'})]
Self-querying with Weaviate#
Creating a Weaviate vectorstore#
First we’ll want to create a Weaviate VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
NOTE: The self-query retriever requires you to have lark installed (pip install lark). We also need the weaviate-client package.
#!pip install lark weaviate-client
from langchain.schema import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Weaviate
import os
embeddings = OpenAIEmbeddings()
docs = [
Document(page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}),
Document(page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}),
Document(page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}),
Document(page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}), | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate_self_query.html |
03d5e8796414-1 | Document(page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}),
Document(page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={"year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", "rating": 9.9})
]
vectorstore = Weaviate.from_documents(
docs, embeddings, weaviate_url="http://127.0.0.1:8080"
)
Creating our self-querying retriever#
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
metadata_field_info=[
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating",
description="A 1-10 rating for the movie",
type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)
Testing it out#
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaur' filter=None limit=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'rating': None, 'year': 1995}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'genre': 'science fiction', 'rating': 9.9, 'year': 1979}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'genre': None, 'rating': 8.6, 'year': 2006})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'genre': None, 'rating': 8.3, 'year': 2019})]
Filter k#
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True
)
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
query='dinosaur' filter=None limit=2
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'rating': None, 'year': 1995})]
Self-querying with Qdrant#
Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support, which makes it useful for all kinds of neural-network or semantic-based matching, faceted search, and other applications.
In the notebook we’ll demo the SelfQueryRetriever wrapped around a Qdrant vector store.
Creating a Qdrant vectorstore#
First we’ll want to create a Qdrant VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
NOTE: The self-query retriever requires you to have lark installed (pip install lark). We also need the qdrant-client package.
#!pip install lark qdrant-client
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
# import os
# import getpass
# os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.schema import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Qdrant
embeddings = OpenAIEmbeddings()
docs = [
Document(page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}),
Document(page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}), | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/qdrant_self_query.html |
84a8d8c78443-1 | Document(page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}),
Document(page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}),
Document(page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}),
Document(page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={"year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", "rating": 9.9})
]
vectorstore = Qdrant.from_documents(
docs,
embeddings,
location=":memory:", # Local mode with in-memory storage only
collection_name="my_documents",
)
Creating our self-querying retriever#
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
metadata_field_info=[
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating",
description="A 1-10 rating for the movie",
type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)
Testing it out#
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaur' filter=None limit=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5") | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/qdrant_self_query.html |
84a8d8c78443-3 | query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]
# This example specifies a composite filter
retriever.get_relevant_documents("What's a highly rated (above 8.5) science fiction film?")
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction')]) limit=None
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]
# This example specifies a query and composite filter
retriever.get_relevant_documents("What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated")
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) limit=None
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
Filter k#
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True
)
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
query='dinosaur' filter=None limit=2
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
PubMed Retriever#
This notebook goes over how to use PubMed as a retriever
PubMed® comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.
from langchain.retrievers import PubMedRetriever
retriever = PubMedRetriever()
retriever.get_relevant_documents("chatgpt")
[Document(page_content='', metadata={'uid': '37268021', 'title': 'Dermatology in the wake of an AI revolution: who gets a say?', 'pub_date': '<Year>2023</Year><Month>May</Month><Day>31</Day>'}),
Document(page_content='', metadata={'uid': '37267643', 'title': 'What is ChatGPT and what do we do with it? Implications of the age of AI for nursing and midwifery practice and education: An editorial.', 'pub_date': '<Year>2023</Year><Month>May</Month><Day>30</Day>'}),
Document(page_content='The nursing field has undergone notable changes over time and is projected to undergo further modifications in the future, owing to the advent of sophisticated technologies and growing healthcare needs. The advent of ChatGPT, an AI-powered language model, is expected to exert a significant influence on the nursing profession, specifically in the domains of patient care and instruction. The present article delves into the ramifications of ChatGPT within the nursing domain and accentuates its capacity and constraints to transform the discipline.', metadata={'uid': '37266721', 'title': 'The Impact of ChatGPT on the Nursing Profession: Revolutionizing Patient Care and Education.', 'pub_date': '<Year>2023</Year><Month>Jun</Month><Day>02</Day>'})]
ElasticSearch BM25#
Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
In information retrieval, Okapi BM25 (BM is an abbreviation of best matching) is a ranking function used by search engines to estimate the relevance of documents to a given search query. It is based on the probabilistic retrieval framework developed in the 1970s and 1980s by Stephen E. Robertson, Karen Spärck Jones, and others.
The name of the actual ranking function is BM25. The fuller name, Okapi BM25, includes the name of the first system to use it, which was the Okapi information retrieval system, implemented at London’s City University in the 1980s and 1990s. BM25 and its newer variants, e.g. BM25F (a version of BM25 that can take document structure and anchor text into account), represent TF-IDF-like retrieval functions used in document retrieval.
This notebook shows how to use a retriever that uses ElasticSearch and BM25.
For more information on the details of BM25 see this blog post.
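As a back-of-the-envelope illustration of the ranking function, here is a textbook Okapi BM25 scorer (not necessarily the exact variant Elasticsearch ships; k1=1.5 and b=0.75 are common defaults assumed here):
import math
def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    # corpus: a list of tokenized documents; doc_terms: the tokenized document to score.
    avgdl = sum(len(d) for d in corpus) / len(corpus)
    score = 0.0
    for term in query_terms:
        f = doc_terms.count(term)                # term frequency in this document
        n = sum(1 for d in corpus if term in d)  # number of documents containing the term
        idf = math.log((len(corpus) - n + 0.5) / (n + 0.5) + 1)
        score += idf * (f * (k1 + 1)) / (f + k1 * (1 - b + b * len(doc_terms) / avgdl))
    return score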
#!pip install elasticsearch
from langchain.retrievers import ElasticSearchBM25Retriever
Create New Retriever#
elasticsearch_url="http://localhost:9200"
retriever = ElasticSearchBM25Retriever.create(elasticsearch_url, "langchain-index-4")
# Alternatively, you can load an existing index
# import elasticsearch
# elasticsearch_url="http://localhost:9200"
# retriever = ElasticSearchBM25Retriever(elasticsearch.Elasticsearch(elasticsearch_url), "langchain-index")
Add texts (if necessary)#
We can optionally add texts to the retriever (if they aren’t already in there)
retriever.add_texts(["foo", "bar", "world", "hello", "foo bar"])
['cbd4cb47-8d9f-4f34-b80e-ea871bc49856',
'f3bd2e24-76d1-4f9b-826b-ec4c0e8c7365',
'8631bfc8-7c12-48ee-ab56-8ad5f373676e',
'8be8374c-3253-4d87-928d-d73550a2ecf0',
'd79f457b-2842-4eab-ae10-77aa420b53d7']
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={})]
Getting Started#
The default recommended text splitter is the RecursiveCharacterTextSplitter. This text splitter takes a list of characters. It tries to create chunks by splitting on the first character, but if any chunks are too large it then moves on to the next character, and so forth. By default the characters it tries to split on are ["\n\n", "\n", " ", ""]
In addition to controlling which characters you can split on, you can also control a few other things:
length_function: how the length of chunks is calculated. Defaults to just counting the number of characters, but it’s pretty common to pass a token counter here (see the sketch after the example below).
chunk_size: the maximum size of your chunks (as measured by the length function).
chunk_overlap: the maximum overlap between chunks. It can be nice to have some overlap to maintain continuity between chunks (e.g., with a sliding window).
# This is a long document we can split up.
with open('../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size = 100,
chunk_overlap = 20,
length_function = len,
)
texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
print(texts[1])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' lookup_str='' metadata={} lookup_index=0
page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' lookup_str='' metadata={} lookup_index=0
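As mentioned above, a token counter is commonly passed as length_function. Here is a minimal sketch (assuming the tiktoken package is installed; cl100k_base is an arbitrary choice of encoding) that measures chunks in tokens instead of characters:
import tiktoken
from langchain.text_splitter import RecursiveCharacterTextSplitter
enc = tiktoken.get_encoding("cl100k_base")
def tiktoken_len(text: str) -> int:
    # Length in tokens rather than characters.
    return len(enc.encode(text))
token_splitter = RecursiveCharacterTextSplitter(
    chunk_size = 100,   # now interpreted as 100 tokens
    chunk_overlap = 20,
    length_function = tiktoken_len,
)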
NLTK#
The Natural Language Toolkit, or more commonly NLTK, is a suite of libraries and programs for symbolic and statistical natural language processing (NLP) for English written in the Python programming language.
Rather than just splitting on “\n\n”, we can use NLTK to split based on NLTK tokenizers.
How the text is split: by NLTK tokenizer.
How the chunk size is measured: by number of characters
#pip install nltk
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import NLTKTextSplitter
text_splitter = NLTKTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.
Members of Congress and the Cabinet.
Justices of the Supreme Court.
My fellow Americans.
Last year COVID-19 kept us apart.
This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents.
But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
And with an unwavering resolve that freedom will always triumph over tyranny.
Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways.
But he badly miscalculated.
He thought he could roll into Ukraine and the world would roll over.
Instead he met a wall of strength he never imagined.
He met the Ukrainian people.
From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
Groups of citizens blocking tanks with their bodies.
CodeTextSplitter#
CodeTextSplitter allows you to split your code, with multiple languages supported. Import the Language enum and specify the language.
from langchain.text_splitter import (
RecursiveCharacterTextSplitter,
Language,
)
# Full list of supported languages
[e.value for e in Language]
['cpp',
'go',
'java',
'js',
'php',
'proto',
'python',
'rst',
'ruby',
'rust',
'scala',
'swift',
'markdown',
'latex',
'html']
# You can also see the separators used for a given language
RecursiveCharacterTextSplitter.get_separators_for_language(Language.PYTHON)
['\nclass ', '\ndef ', '\n\tdef ', '\n\n', '\n', ' ', '']
Python#
Here’s an example of splitting Python code
PYTHON_CODE = """
def hello_world():
print("Hello, World!")
# Call the function
hello_world()
"""
python_splitter = RecursiveCharacterTextSplitter.from_language(
language=Language.PYTHON, chunk_size=50, chunk_overlap=0
)
python_docs = python_splitter.create_documents([PYTHON_CODE])
python_docs
[Document(page_content='def hello_world():\n print("Hello, World!")', metadata={}),
Document(page_content='# Call the function\nhello_world()', metadata={})]
JS#
Here’s an example using the JS text splitter
JS_CODE = """
function helloWorld() {
console.log("Hello, World!");
}
// Call the function
helloWorld();
"""
js_splitter = RecursiveCharacterTextSplitter.from_language(
language=Language.JS, chunk_size=60, chunk_overlap=0
)
js_docs = js_splitter.create_documents([JS_CODE])
js_docs
[Document(page_content='function helloWorld() {\n console.log("Hello, World!");\n}', metadata={}),
Document(page_content='// Call the function\nhelloWorld();', metadata={})]
Markdown#
Here’s an example using the Markdown text splitter.
markdown_text = """
# 🦜️🔗 LangChain
⚡ Building applications with LLMs through composability ⚡
## Quick Install
```bash
# Hopefully this code block isn't split
pip install langchain
```
As an open source project in a rapidly developing field, we are extremely open to contributions.
"""
md_splitter = RecursiveCharacterTextSplitter.from_language(
language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0
)
md_docs = md_splitter.create_documents([markdown_text])
md_docs
[Document(page_content='# 🦜️🔗 LangChain', metadata={}),
Document(page_content='⚡ Building applications with LLMs through composability ⚡', metadata={}),
Document(page_content='## Quick Install', metadata={}),
Document(page_content="```bash\n# Hopefully this code block isn't split", metadata={}),
Document(page_content='pip install langchain', metadata={}),
Document(page_content='```', metadata={}),
Document(page_content='As an open source project in a rapidly developing field, we', metadata={}),
Document(page_content='are extremely open to contributions.', metadata={})]
Latex#
Here’s an example using LaTeX text
latex_text = """
\documentclass{article}
\begin{document}
\maketitle
\section{Introduction}
Large language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.
\subsection{History of LLMs}
The earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.
\subsection{Applications of LLMs}
LLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.
\end{document}
"""
latex_splitter = RecursiveCharacterTextSplitter.from_language(
language=Language.LATEX, chunk_size=60, chunk_overlap=0
)
latex_docs = latex_splitter.create_documents([latex_text])
latex_docs
[Document(page_content='\\documentclass{article}\n\n\x08egin{document}\n\n\\maketitle', metadata={}),
Document(page_content='\\section{Introduction}', metadata={}),
Document(page_content='Large language models (LLMs) are a type of machine learning', metadata={}),
Document(page_content='model that can be trained on vast amounts of text data to', metadata={}),
Document(page_content='generate human-like language. In recent years, LLMs have', metadata={}),
Document(page_content='made significant advances in a variety of natural language', metadata={}),
Document(page_content='processing tasks, including language translation, text', metadata={}),
Document(page_content='generation, and sentiment analysis.', metadata={}),
Document(page_content='\\subsection{History of LLMs}', metadata={}),
Document(page_content='The earliest LLMs were developed in the 1980s and 1990s,', metadata={}),
Document(page_content='but they were limited by the amount of data that could be', metadata={}),
Document(page_content='processed and the computational power available at the', metadata={}),
Document(page_content='time. In the past decade, however, advances in hardware and', metadata={}),
Document(page_content='software have made it possible to train LLMs on massive', metadata={}),
Document(page_content='datasets, leading to significant improvements in', metadata={}),
Document(page_content='performance.', metadata={}),
Document(page_content='\\subsection{Applications of LLMs}', metadata={}),
Document(page_content='LLMs have many applications in industry, including', metadata={}),
Document(page_content='chatbots, content creation, and virtual assistants. They', metadata={}),
Document(page_content='can also be used in academia for research in linguistics,', metadata={}),
Document(page_content='psychology, and computational linguistics.', metadata={}),
Document(page_content='\\end{document}', metadata={})]
HTML#
Here’s an example using an HTML text splitter
html_text = """
<!DOCTYPE html>
<html>
<head>
<title>🦜️🔗 LangChain</title>
<style>
body {
font-family: Arial, sans-serif;
}
h1 {
color: darkblue;
}
</style>
</head>
<body>
<div>
<h1>🦜️🔗 LangChain</h1>
<p>⚡ Building applications with LLMs through composability ⚡</p>
</div>
<div>
As an open source project in a rapidly developing field, we are extremely open to contributions.
</div>
</body>
</html>
"""
html_splitter = RecursiveCharacterTextSplitter.from_language(
language=Language.HTML, chunk_size=60, chunk_overlap=0
)
html_docs = html_splitter.create_documents([html_text])
html_docs
[Document(page_content='<!DOCTYPE html>\n<html>\n <head>', metadata={}),
Document(page_content='<title>🦜️🔗 LangChain</title>\n <style>', metadata={}),
Document(page_content='body {', metadata={}),
Document(page_content='font-family: Arial, sans-serif;', metadata={}),
Document(page_content='}\n h1 {', metadata={}),
Document(page_content='color: darkblue;\n }', metadata={}),
Document(page_content='</style>\n </head>\n <body>\n <div>', metadata={}),
Document(page_content='<h1>🦜️🔗 LangChain</h1>', metadata={}),
Document(page_content='<p>⚡ Building applications with LLMs through', metadata={}),
Document(page_content='composability ⚡</p>', metadata={}),
Document(page_content='</div>\n <div>', metadata={}),
Document(page_content='As an open source project in a rapidly', metadata={}),
Document(page_content='developing field, we are extremely open to contributions.', metadata={}),
Document(page_content='</div>\n </body>\n</html>', metadata={})]
Tiktoken#
tiktoken is a fast BPE tokeniser created by OpenAI.
How the text is split: by tiktoken tokens
How the chunk size is measured: by tiktoken tokens
#!pip install tiktoken
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import TokenTextSplitter
text_splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our
Hugging Face tokenizer#
Hugging Face has many tokenizers.
We use a Hugging Face tokenizer, GPT2TokenizerFast, to count the text length in tokens.
How the text is split: by character passed in
How the chunk size is measured: by number of tokens calculated by the Hugging Face tokenizer
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(tokenizer, chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.
Last year COVID-19 kept us apart. This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
Character#
This is the simplest method. It splits based on characters (by default “\n\n”) and measures chunk length by number of characters.
How the text is split: by single character
How the chunk size is measured: by number of characters
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(
separator = "\n\n",
chunk_size = 1000,
chunk_overlap = 200,
length_function = len,
)
texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={} lookup_index=0
Here’s an example of passing metadata along with the documents; notice that it is split along with the documents.
metadatas = [{"document": 1}, {"document": 2}]
documents = text_splitter.create_documents([state_of_the_union, state_of_the_union], metadatas=metadatas)
print(documents[0])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={'document': 1} lookup_index=0
text_splitter.split_text(state_of_the_union)[0]
'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.'
tiktoken (OpenAI) tokenizer#
tiktoken is a fast BPE tokenizer created by OpenAI.
We can use it to estimate tokens used. It will probably be more accurate for the OpenAI models.
How the text is split: by character passed in
How the chunk size is measured: by tiktoken tokenizer
#!pip install tiktoken
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.
Last year COVID-19 kept us apart. This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
Recursive Character#
This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.
How the text is split: by list of characters
How the chunk size is measured: by number of characters
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size = 100,
chunk_overlap = 20,
length_function = len,
)
texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
print(texts[1])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' lookup_str='' metadata={} lookup_index=0
page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' lookup_str='' metadata={} lookup_index=0
text_splitter.split_text(state_of_the_union)[:2]
['Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and',
'of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.']
previous
NLTK
next
spaCy
By Harrison Chase | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/recursive_text_splitter.html |
47602df3af0b-1 | previous
NLTK
next
spaCy
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 07, 2023. | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/recursive_text_splitter.html |
spaCy#
spaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.
Another alternative to NLTK is the spaCy tokenizer.
How the text is split: by spaCy tokenizer
How the chunk size is measured: by number of characters
#!pip install spacy
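SpacyTextSplitter uses the en_core_web_sm pipeline by default, so depending on your environment you may also need to download it first:
#!python -m spacy download en_core_web_sm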
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import SpacyTextSplitter
text_splitter = SpacyTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.
Members of Congress and the Cabinet.
Justices of the Supreme Court.
My fellow Americans.
Last year COVID-19 kept us apart.
This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents.
But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
And with an unwavering resolve that freedom will always triumph over tyranny.
Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways.
But he badly miscalculated.
He thought he could roll into Ukraine and the world would roll over.
Instead he met a wall of strength he never imagined.
He met the Ukrainian people.
From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
Output Parsers#
Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.
Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:
get_format_instructions() -> str: A method which returns a string containing instructions for how the output of a language model should be formatted.
parse(str) -> Any: A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.
And then one optional one:
parse_with_prompt(str, PromptValue) -> Any: A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
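As an illustration, a minimal custom parser implementing the two required methods might look like the following sketch (a hypothetical parser, not one of the built-in classes):
from langchain.schema import BaseOutputParser

class CommaListParser(BaseOutputParser):
    """Hypothetical parser that reads a comma-separated list."""

    def get_format_instructions(self) -> str:
        return "Your response should be a comma-separated list, e.g. `foo, bar, baz`."

    def parse(self, text: str) -> list:
        # Split the raw model output on commas and trim whitespace.
        return [item.strip() for item in text.split(",")]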
To start, we recommend familiarizing yourself with the Getting Started section
Output Parsers
After that, we provide deep dives on all the different types of output parsers.
CommaSeparatedListOutputParser
Datetime
Enum Output Parser
OutputFixingParser
PydanticOutputParser
RetryOutputParser
Structured Output Parser
Prompt Templates#
Language models take text as input - that text is commonly referred to as a prompt.
Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input.
LangChain provides several classes and functions to make constructing and working with prompts easy.
The following sections of documentation are provided:
Getting Started: An overview of all the functionality LangChain provides for working with and constructing prompts.
How-To Guides: A collection of how-to guides. These highlight how to accomplish various objectives with our prompt class.
Reference: API reference documentation for all prompt classes.
Example Selectors#
If you have a large number of examples, you may need to select which ones to include in the prompt. The ExampleSelector is the class responsible for doing so.
The base interface is defined as below:
from abc import ABC, abstractmethod
from typing import Dict, List

class BaseExampleSelector(ABC):
    """Interface for selecting examples to include in prompts."""

    @abstractmethod
    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        """Select which examples to use based on the inputs."""
The only method it needs to expose is select_examples, which takes in the input variables and returns a list of examples. It is up to each specific implementation how those examples are selected; a minimal custom selector is sketched below.
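For instance, a hypothetical selector that ignores the inputs and picks two examples at random could be written like this:
import random
from typing import Any, Dict, List

class RandomExampleSelector(BaseExampleSelector):
    """Hypothetical selector: samples two examples uniformly at random."""

    def __init__(self, examples: List[dict]):
        self.examples = examples

    def add_example(self, example: Dict[str, str]) -> Any:
        # Some versions of the interface also require add_example.
        self.examples.append(example)

    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        # The inputs are ignored entirely here; real selectors use them.
        return random.sample(self.examples, min(2, len(self.examples)))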
See below for a list of example selectors.
How to create a custom example selector
LengthBased ExampleSelector
Maximal Marginal Relevance ExampleSelector
NGram Overlap ExampleSelector
Similarity ExampleSelector
Chat Prompt Templates#
Chat Models take a list of chat messages as input - this list is commonly referred to as a prompt.
These chat messages differ from a raw string (which you would pass into an LLM) in that every message is associated with a role.
For example, in the OpenAI Chat Completions API, a chat message can be associated with the AI, human, or system role. The model is expected to follow instructions from the system message more closely.
LangChain provides several prompt templates to make constructing and working with prompts easy. You are encouraged to use these chat-related prompt templates instead of PromptTemplate when querying chat models, to fully exploit the potential of the underlying chat model.
from langchain.prompts import (
ChatPromptTemplate,
PromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
To create a message template associated with a role, you use MessagePromptTemplate.
For convenience, there is a from_template method exposed on these classes. Using it looks like this:
template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate outside and then pass it in, e.g.:
prompt=PromptTemplate(
template="You are a helpful assistant that translates {input_language} to {output_language}.",
input_variables=["input_language", "output_language"],
)
system_message_prompt_2 = SystemMessagePromptTemplate(prompt=prompt)
assert system_message_prompt == system_message_prompt_2
After that, you can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – this returns a PromptValue, which you can convert to a string or a list of Message objects, depending on whether you want to use the formatted value as input to an LLM or a chat model.
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted messages
chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages()
[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}),
HumanMessage(content='I love programming.', additional_kwargs={})]
Format output#
The output of the format method is available as a string, a list of messages, and a ChatPromptValue.
As string:
output = chat_prompt.format(input_language="English", output_language="French", text="I love programming.")
output
'System: You are a helpful assistant that translates English to French.\nHuman: I love programming.'
# or alternatively
output_2 = chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_string()
assert output == output_2
As ChatPromptValue
chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.")
ChatPromptValue(messages=[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}), HumanMessage(content='I love programming.', additional_kwargs={})])
As list of Message objects
chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages()
[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}),
HumanMessage(content='I love programming.', additional_kwargs={})]
Different types of MessagePromptTemplate#
LangChain provides different types of MessagePromptTemplate. The most commonly used are AIMessagePromptTemplate, SystemMessagePromptTemplate and HumanMessagePromptTemplate, which create an AI message, system message and human message respectively.
However, in cases where the chat model supports taking chat messages with an arbitrary role, you can use ChatMessagePromptTemplate, which allows the user to specify the role name.
from langchain.prompts import ChatMessagePromptTemplate
prompt = "May the {subject} be with you"
chat_message_prompt = ChatMessagePromptTemplate.from_template(role="Jedi", template=prompt)
chat_message_prompt.format(subject="force")
ChatMessage(content='May the force be with you', additional_kwargs={}, role='Jedi')
LangChain also provides MessagesPlaceholder, which gives you full control over which messages are rendered during formatting. This can be useful when you are uncertain which role you should use for your message prompt templates, or when you wish to insert a list of messages during formatting.
from langchain.prompts import MessagesPlaceholder
human_prompt = "Summarize our conversation so far in {word_count} words."
human_message_template = HumanMessagePromptTemplate.from_template(human_prompt)
chat_prompt = ChatPromptTemplate.from_messages([MessagesPlaceholder(variable_name="conversation"), human_message_template])
human_message = HumanMessage(content="What is the best way to learn programming?")
ai_message = AIMessage(content="""\
1. Choose a programming language: Decide on a programming language that you want to learn.
2. Start with the basics: Familiarize yourself with the basic programming concepts such as variables, data types and control structures.
3. Practice, practice, practice: The best way to learn programming is through hands-on experience\
""")
chat_prompt.format_prompt(conversation=[human_message, ai_message], word_count="10").to_messages()
[HumanMessage(content='What is the best way to learn programming?', additional_kwargs={}),
AIMessage(content='1. Choose a programming language: Decide on a programming language that you want to learn. \n\n2. Start with the basics: Familiarize yourself with the basic programming concepts such as variables, data types and control structures.\n\n3. Practice, practice, practice: The best way to learn programming is through hands-on experience', additional_kwargs={}),
HumanMessage(content='Summarize our conversation so far in 10 words.', additional_kwargs={})]
Getting Started#
This section contains everything related to prompts. A prompt is the value passed into the Language Model. This value can either be a string (for LLMs) or a list of messages (for Chat Models).
The data types of these prompts are rather simple, but their construction is anything but. LangChain's value propositions here include:
A standard interface for string prompts and message prompts
A standard (to get started) interface for string prompt templates and message prompt templates
Example Selectors: methods for inserting examples into the prompt for the language model to follow
OutputParsers: methods for inserting into the prompt instructions for the format in which the language model should output information, as well as methods for then parsing that string output into a structured format.
We have in depth documentation for specific types of string prompts, specific types of chat prompts, example selectors, and output parsers.
Here, we cover a quick-start for a standard interface for getting started with simple prompts.
PromptTemplates#
PromptTemplates are responsible for constructing a prompt value. These PromptTemplates can do things like formatting, example selection, and more. At a high level, these are basically objects that expose a format_prompt method for constructing a prompt. Under the hood, ANYTHING can happen.
from langchain.prompts import PromptTemplate, ChatPromptTemplate
string_prompt = PromptTemplate.from_template("tell me a joke about {subject}")
chat_prompt = ChatPromptTemplate.from_template("tell me a joke about {subject}")
string_prompt_value = string_prompt.format_prompt(subject="soccer")
chat_prompt_value = chat_prompt.format_prompt(subject="soccer")
to_string#
This is what is called when passing to an LLM (which expects raw text)
string_prompt_value.to_string()
'tell me a joke about soccer'
chat_prompt_value.to_string()
'Human: tell me a joke about soccer'
to_messages#
This is what is called when passing to a Chat Model (which expects a list of messages)
string_prompt_value.to_messages()
[HumanMessage(content='tell me a joke about soccer', additional_kwargs={}, example=False)]
chat_prompt_value.to_messages()
[HumanMessage(content='tell me a joke about soccer', additional_kwargs={}, example=False)]
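Putting it together, these prompt values are what you pass to the models. A sketch, assuming an OpenAI API key is configured in your environment:
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI

llm = OpenAI()
chat_model = ChatOpenAI()

llm(string_prompt_value.to_string())         # the LLM takes the raw string
chat_model(chat_prompt_value.to_messages())  # the Chat Model takes the message list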
Output Parsers#
Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.
Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:
get_format_instructions() -> str: A method which returns a string containing instructions for how the output of a language model should be formatted.
parse(str) -> Any: A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.
And then one optional one:
parse_with_prompt(str, PromptValue) -> Any: A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
Below we go over the main type of output parser, the PydanticOutputParser. See the examples folder for other options.
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List
model_name = 'text-davinci-003'
temperature = 0.0
model = OpenAI(model_name=model_name, temperature=temperature)
# Define your desired data structure.
class Joke(BaseModel):
setup: str = Field(description="question to set up a joke")
punchline: str = Field(description="answer to resolve the joke")
# You can add custom validation logic easily with Pydantic.
@validator('setup')
def question_ends_with_question_mark(cls, field):
if field[-1] != '?':
raise ValueError("Badly formed question!")
return field
# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."
_input = prompt.format_prompt(query=joke_query)
output = model(_input.to_string())
parser.parse(output)
Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')
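If the model's output does not match the schema, parse raises an OutputParserException. A sketch of handling that case (the exact error message will vary):
from langchain.schema import OutputParserException

try:
    parser.parse("this is not valid json")
except OutputParserException as e:
    print(e)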
Enum Output Parser#
This notebook shows how to use an Enum output parser.
from langchain.output_parsers.enum import EnumOutputParser
from enum import Enum
class Colors(Enum):
RED = "red"
GREEN = "green"
BLUE = "blue"
parser = EnumOutputParser(enum=Colors)
parser.parse("red")
<Colors.RED: 'red'>
# Can handle spaces
parser.parse(" green")
<Colors.GREEN: 'green'>
# And new lines
parser.parse("blue\n")
<Colors.BLUE: 'blue'>
# And raises errors when appropriate
parser.parse("yellow")
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File ~/workplace/langchain/langchain/output_parsers/enum.py:25, in EnumOutputParser.parse(self, response)
24 try:
---> 25 return self.enum(response.strip())
26 except ValueError:
File ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:315, in EnumMeta.__call__(cls, value, names, module, qualname, type, start)
314 if names is None: # simple value lookup
--> 315 return cls.__new__(cls, value)
316 # otherwise, functional API: we're creating a new Enum type
File ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:611, in Enum.__new__(cls, value)
610 if result is None and exc is None:
--> 611 raise ve_exc
612 elif exc is None:
ValueError: 'yellow' is not a valid Colors
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
Cell In[8], line 2
1 # And raises errors when appropriate
----> 2 parser.parse("yellow")
File ~/workplace/langchain/langchain/output_parsers/enum.py:27, in EnumOutputParser.parse(self, response)
25 return self.enum(response.strip())
26 except ValueError:
---> 27 raise OutputParserException(
28 f"Response '{response}' is not one of the "
29 f"expected values: {self._valid_values}"
30 )
OutputParserException: Response 'yellow' is not one of the expected values: ['red', 'green', 'blue']
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 07, 2023.