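SimpleSequentialChain#
In the simplest form of a sequential chain, each step has a single input and a single output, and the output of one step is passed directly as the input to the next. The first step is an LLMChain that writes a play synopsis given a title. A minimal sketch of that chain, assuming the same prompt pattern as the multi-input example later on this page:
from langchain import OpenAI, PromptTemplate, LLMChain
# This is an LLMChain to write a synopsis given the title of a play.
llm = OpenAI(temperature=.7)
template = """You are a playwright. Given the title of a play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)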
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)
# This is an LLMChain to write a review of a play given a synopsis.
llm = OpenAI(temperature=.7)
template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.
Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template)
# This is the overall chain where we run these two chains in sequence.
from langchain.chains import SimpleSequentialChain
overall_chain = SimpleSequentialChain(chains=[synopsis_chain, review_chain], verbose=True)
review = overall_chain.run("Tragedy at sunset on the beach")
> Entering new SimpleSequentialChain chain...
Tragedy at Sunset on the Beach is a story of a young couple, Jack and Sarah, who are in love and looking forward to their future together. On the night of their anniversary, they decide to take a walk on the beach at sunset. As they are walking, they come across a mysterious figure, who tells them that their love will be tested in the near future.
The figure then tells the couple that the sun will soon set, and with it, a tragedy will strike. If Jack and Sarah can stay together and pass the test, they will be granted everlasting love. However, if they fail, their love will be lost forever.
The play follows the couple as they struggle to stay together and battle the forces that threaten to tear them apart. Despite the tragedy that awaits them, they remain devoted to one another and fight to keep their love alive. In the end, the couple must decide whether to take a chance on their future together or succumb to the tragedy of the sunset.
Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles.
The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats.
The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful.
> Finished chain.
print(review)
Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles.
The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats.
The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful.
Sequential Chain#
Of course, not all sequential chains will be as simple as passing a single string as an argument and getting a single string as output for all steps in the chain. In this next example, we will experiment with more complex chains that involve multiple inputs, and where there are also multiple final outputs.
Of particular importance is how we name the input/output variables. In the above example we didn’t have to think about that because we were just passing the output of one chain directly as input to the next, but here we do have to worry about it because we have multiple inputs.
# This is an LLMChain to write a synopsis given a title of a play and the era it is set in.
llm = OpenAI(temperature=.7)
template = """You are a playwright. Given the title of play and the era it is set in, it is your job to write a synopsis for that title.
Title: {title}
Era: {era}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title", "era"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="synopsis")
# This is an LLMChain to write a review of a play given a synopsis.
llm = OpenAI(temperature=.7)
template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.
Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="review")
# This is the overall chain where we run these two chains in sequence.
from langchain.chains import SequentialChain
overall_chain = SequentialChain(
chains=[synopsis_chain, review_chain],
input_variables=["era", "title"],
# Here we return multiple variables
output_variables=["synopsis", "review"],
verbose=True)
overall_chain({"title":"Tragedy at sunset on the beach", "era": "Victorian England"})
> Entering new SequentialChain chain...
> Finished chain.
{'title': 'Tragedy at sunset on the beach',
'era': 'Victorian England',
'synopsis': "\n\nThe play follows the story of John, a young man from a wealthy Victorian family, who dreams of a better life for himself. He soon meets a beautiful young woman named Mary, who shares his dream. The two fall in love and decide to elope and start a new life together.\n\nOn their journey, they make their way to a beach at sunset, where they plan to exchange their vows of love. Unbeknownst to them, their plans are overheard by John's father, who has been tracking them. He follows them to the beach and, in a fit of rage, confronts them. \n\nA physical altercation ensues, and in the struggle, John's father accidentally stabs Mary in the chest with his sword. The two are left in shock and disbelief as Mary dies in John's arms, her last words being a declaration of her love for him.\n\nThe tragedy of the play comes to a head when John, broken and with no hope of a future, chooses to take his own life by jumping off the cliffs into the sea below. \n\nThe play is a powerful story of love, hope, and loss set against the backdrop of 19th century England.",
'review': "\n\nThe latest production from playwright X is a powerful and heartbreaking story of love and loss set against the backdrop of 19th century England. The play follows John, a young man from a wealthy Victorian family, and Mary, a beautiful young woman with whom he falls in love. The two decide to elope and start a new life together, and the audience is taken on a journey of hope and optimism for the future.\n\nUnfortunately, their dreams are cut short when John's father discovers them and in a fit of rage, fatally stabs Mary. The tragedy of the play is further compounded when John, broken and without hope, takes his own life. The storyline is not only realistic, but also emotionally compelling, drawing the audience in from start to finish.\n\nThe acting was also commendable, with the actors delivering believable and nuanced performances. The playwright and director have successfully crafted a timeless tale of love and loss that will resonate with audiences for years to come. Highly recommended."}
Memory in Sequential Chains#
Sometimes you may want to pass along some context to use in each step of the chain or in a later part of the chain, but maintaining and chaining together the input/output variables can quickly get messy. Using SimpleMemory is a convenient way to manage this and clean up your chains.
For example, using the previous playwright SequentialChain, let’s say you wanted to include some context about the date, time and location of the play, and, using the generated synopsis and review, create some social media post text. You could add these new context variables as input_variables, or you can add a SimpleMemory to the chain to manage this context:
from langchain.chains import SequentialChain
from langchain.memory import SimpleMemory
llm = OpenAI(temperature=.7)
template = """You are a social media manager for a theater company. Given the title of play, the era it is set in, the date,time and location, the synopsis of the play, and the review of the play, it is your job to write a social media post for that play.
Here is some context about the time and location of the play:
Date and Time: {time}
Location: {location}
Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:
{review}
Social Media Post:
"""
prompt_template = PromptTemplate(input_variables=["synopsis", "review", "time", "location"], template=template)
social_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="social_post_text")
overall_chain = SequentialChain(
memory=SimpleMemory(memories={"time": "December 25th, 8pm PST", "location": "Theater in the Park"}),
chains=[synopsis_chain, review_chain, social_chain],
input_variables=["era", "title"],
# Here we return multiple variables
output_variables=["social_post_text"],
verbose=True)
overall_chain({"title":"Tragedy at sunset on the beach", "era": "Victorian England"})
> Entering new SequentialChain chain...
> Finished chain.
{'title': 'Tragedy at sunset on the beach',
'era': 'Victorian England',
'time': 'December 25th, 8pm PST',
'location': 'Theater in the Park',
'social_post_text': "\nSpend your Christmas night with us at Theater in the Park and experience the heartbreaking story of love and loss that is 'A Walk on the Beach'. Set in Victorian England, this romantic tragedy follows the story of Frances and Edward, a young couple whose love is tragically cut short. Don't miss this emotional and thought-provoking production that is sure to leave you in tears. #AWalkOnTheBeach #LoveAndLoss #TheaterInThePark #VictorianEngland"}
LLM Chain#
LLMChain is perhaps one of the most popular ways of querying an LLM object. It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to the LLM and returns the LLM output. Below we show additional functionality of the LLMChain class.
from langchain import PromptTemplate, OpenAI, LLMChain
prompt_template = "What is a good name for a company that makes {product}?"
llm = OpenAI(temperature=0)
llm_chain = LLMChain(
llm=llm,
prompt=PromptTemplate.from_template(prompt_template)
)
llm_chain("colorful socks")
{'product': 'colorful socks', 'text': '\n\nSocktastic!'}
Additional ways of running LLM Chain#
Aside from the __call__ and run methods shared by all Chain objects (see Getting Started to learn more), LLMChain offers a few more ways of calling the chain logic:
apply allows you to run the chain against a list of inputs:
input_list = [
{"product": "socks"},
{"product": "computer"},
{"product": "shoes"}
]
llm_chain.apply(input_list)
[{'text': '\n\nSocktastic!'},
{'text': '\n\nTechCore Solutions.'},
{'text': '\n\nFootwear Factory.'}]
generate is similar to apply, except it returns an LLMResult instead of a string. An LLMResult often contains useful generation information such as token usage and finish reason.
llm_chain.generate(input_list)
LLMResult(generations=[[Generation(text='\n\nSocktastic!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'prompt_tokens': 36, 'total_tokens': 55, 'completion_tokens': 19}, 'model_name': 'text-davinci-003'})
predict is similar to the run method, except that the input keys are specified as keyword arguments instead of a Python dict.
# Single input example
llm_chain.predict(product="colorful socks")
'\n\nSocktastic!'
# Multiple inputs example
template = """Tell me a {adjective} joke about {subject}."""
prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))
llm_chain.predict(adjective="sad", subject="ducks")
'\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'
Parsing the outputs#
By default, LLMChain does not parse the output even if the underlying prompt object has an output parser. If you would like to apply that output parser on the LLM output, use predict_and_parse instead of predict and apply_and_parse instead of apply.
With predict:
from langchain.output_parsers import CommaSeparatedListOutputParser
output_parser = CommaSeparatedListOutputParser()
template = """List all the colors in a rainbow"""
prompt = PromptTemplate(template=template, input_variables=[], output_parser=output_parser)
llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_chain.predict()
'\n\nRed, orange, yellow, green, blue, indigo, violet'
With predict_and_parse:
llm_chain.predict_and_parse()
['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']
Initialize from string#
You can also construct an LLMChain from a string template directly.
template = """Tell me a {adjective} joke about {subject}."""
llm_chain = LLMChain.from_string(llm=llm, template=template)
llm_chain.predict(adjective="sad", subject="ducks")
'\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'
Vectorstores#
Note: see the Conceptual Guide.
Vectorstores are one of the most important components of building indexes.
For an introduction to vectorstores and generic functionality see:
Getting Started
We also have documentation for all the types of vectorstores that are supported. Please see below for that list; a minimal usage sketch follows it.
AnalyticDB
Annoy
Atlas
Chroma
Deep Lake
DocArrayHnswSearch
DocArrayInMemorySearch
ElasticSearch
FAISS
LanceDB
Milvus
MyScale
OpenSearch
PGVector
Pinecone
Qdrant
Redis
SKLearnVectorStore
Supabase (Postgres)
Tair
Typesense
Vectara
Weaviate
Persistence
Retriever options
Zilliz
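All of the vectorstores above share a common interface: build the store from texts or documents plus an embedding model, then query it by similarity. A minimal sketch, assuming an OpenAI API key is set in the environment (FAISS is used here, but any of the stores listed could be substituted):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
# Embed a handful of texts and index them in an in-memory FAISS store
db = FAISS.from_texts(["foo", "bar", "hello world"], OpenAIEmbeddings())
# Return the stored texts most similar to the query
docs = db.similarity_search("hello", k=2)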
Retrievers#
Note: see the Conceptual Guide.
The retriever interface is a generic interface that makes it easy to combine documents with language models. This interface exposes a get_relevant_documents method which takes in a query (a string) and returns a list of documents.
Please see below for a list of all the retrievers supported.
Arxiv
Azure Cognitive Search Retriever
ChatGPT Plugin
Self-querying with Chroma
Cohere Reranker
Contextual Compression
Stringing compressors and document transformers together
Databerry
ElasticSearch BM25
kNN
Metal
Pinecone Hybrid Search
Self-querying
SVM
TF-IDF
Time Weighted VectorStore
VectorStore
Vespa
Weaviate Hybrid Search
Self-querying with Weaviate
Wikipedia
Zep Memory
Getting Started#
LangChain primarily focuses on constructing indexes with the goal of using them as a Retriever. In order to best understand what this means, it’s worth highlighting what the base Retriever interface is. The BaseRetriever class in LangChain is as follows:
from abc import ABC, abstractmethod
from typing import List
from langchain.schema import Document
class BaseRetriever(ABC):
@abstractmethod
def get_relevant_documents(self, query: str) -> List[Document]:
"""Get texts relevant for a query.
Args:
query: string to find relevant texts for
Returns:
List of relevant documents
"""
It’s that simple! The get_relevant_documents method can be implemented however you see fit.
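For example, a toy retriever that ignores the query and always returns a fixed set of documents satisfies the interface. This is only an illustration; the class name and documents here are our own, not part of LangChain:
from typing import List
from langchain.schema import BaseRetriever, Document
class StaticRetriever(BaseRetriever):
    """Toy retriever that returns the same documents for every query."""
    def __init__(self, docs: List[Document]):
        self.docs = docs
    def get_relevant_documents(self, query: str) -> List[Document]:
        # A real implementation would rank documents against the query
        return self.docs
retriever = StaticRetriever([Document(page_content="foo")])
retriever.get_relevant_documents("any query")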
Of course, we also help construct what we think are useful Retrievers. The main type of Retriever that we focus on is a Vectorstore retriever. We will focus on that for the rest of this guide.
In order to understand what a vectorstore retriever is, it’s important to understand what a Vectorstore is. So let’s look at that.
By default, LangChain uses Chroma as the vectorstore to index and search embeddings. To walk through this tutorial, we’ll first need to install chromadb.
pip install chromadb
This example showcases question answering over documents.
We have chosen this as the example for getting started because it nicely combines a lot of different elements (Text splitters, embeddings, vectorstores) and then also shows how to use them in a chain.
Question answering over documents consists of four steps:
Create an index
Create a Retriever from that index
Create a question answering chain
Ask questions!
Each of the steps has multiple sub-steps and potential configurations. In this notebook we will primarily focus on (1). We will start by showing the one-liner for doing so, but then break down what is actually going on.
First, let’s import some common classes we’ll use no matter what.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
Next in the generic setup, let’s specify the document loader we want to use. You can download the state_of_the_union.txt file here.
from langchain.document_loaders import TextLoader
loader = TextLoader('../state_of_the_union.txt', encoding='utf8')
One Line Index Creation#
To get started as quickly as possible, we can use the VectorstoreIndexCreator.
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
Now that the index is created, we can use it to ask questions of the data! Note that under the hood this is actually doing a few steps as well, which we will cover later in this guide.
query = "What did the president say about Ketanji Brown Jackson"
index.query(query)
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query)
{'question': 'What did the president say about Ketanji Brown Jackson',
'answer': " The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, one of the nation's top legal minds, to continue Justice Breyer's legacy of excellence, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\n",
'sources': '../state_of_the_union.txt'}
What is returned from the VectorstoreIndexCreator is a VectorStoreIndexWrapper, which provides the nice query and query_with_sources functionality. If we just wanted to access the vectorstore directly, we can also do that.
index.vectorstore
<langchain.vectorstores.chroma.Chroma at 0x119aa5940>
If we then want to access the VectorstoreRetriever, we can do that with:
index.vectorstore.as_retriever()
VectorStoreRetriever(vectorstore=<langchain.vectorstores.chroma.Chroma object at 0x119aa5940>, search_kwargs={})
Walkthrough#
Okay, so what’s actually going on? How is this index getting created?
A lot of the magic is being hidden in this VectorstoreIndexCreator. What is it doing?
There are three main steps going on after the documents are loaded:
Splitting documents into chunks
Creating embeddings for each document
Storing documents and embeddings in a vectorstore
Let’s walk through this in code
documents = loader.load()
Next, we will split the documents into chunks.
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
We will then select which embeddings we want to use.
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
We now create the vectorstore to use as the index.
from langchain.vectorstores import Chroma
db = Chroma.from_documents(texts, embeddings)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
So that’s creating the index. Then, we expose this index in a retriever interface.
retriever = db.as_retriever()
Then, as before, we create a chain and use it to answer questions!
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
" The President said that Judge Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He said she is a consensus builder and has received a broad range of support from organizations such as the Fraternal Order of Police and former judges appointed by Democrats and Republicans."
VectorstoreIndexCreator is just a wrapper around all this logic. It is configurable in the text splitter it uses, the embeddings it uses, and the vectorstore it uses. For example, you can configure it as below:
index_creator = VectorstoreIndexCreator(
vectorstore_cls=Chroma,
embedding=OpenAIEmbeddings(),
text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
)
Hopefully this highlights what is going on under the hood of VectorstoreIndexCreator. While we think it’s important to have a simple way to create indexes, we also think it’s important to understand what’s going on under the hood.
Text Splitters#
Note: see the Conceptual Guide.
When you want to deal with long pieces of text, it is necessary to split up that text into chunks.
As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What “semantically related” means could depend on the type of text.
This notebook showcases several ways to do that.
At a high level, text splitters work as follows:
Split the text up into small, semantically meaningful chunks (often sentences).
Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).
Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).
That means there are two different axes along which you can customize your text splitter (see the sketch below):
How the text is split
How the chunk size is measured
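For example, with CharacterTextSplitter both axes are exposed directly as parameters: the separator controls how the text is split, and the length_function controls how chunk size is measured. A minimal sketch (the input string is a placeholder):
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(
    separator="\n\n",     # axis 1: how the text is split
    chunk_size=1000,      # target chunk size...
    chunk_overlap=200,    # ...with this much overlap between chunks
    length_function=len,  # axis 2: how the chunk size is measured
)
texts = text_splitter.split_text(some_long_document)  # some_long_document is a placeholder string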
For an introduction to the default text splitter and generic functionality see:
Getting Started
Usage examples for the text splitters:
Character
LaTeX
Markdown
NLTK
Python code
Recursive Character
spaCy
tiktoken (OpenAI)
Most LLMs are constrained by the number of tokens that you can pass in, which is not the same as the number of characters. In order to get a more accurate estimate, we can use tokenizers to count the number of tokens in the text. We use this number inside the TextSplitter classes. This is implemented as the from_<tokenizer> methods of the TextSplitter classes (a sketch follows the list below):
Hugging Face tokenizer
tiktoken (OpenAI) tokenizer
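For example, the tiktoken tokenizer can be used so that chunk size is measured in OpenAI tokens rather than characters. A minimal sketch (the chunk sizes are arbitrary and the input string is a placeholder):
from langchain.text_splitter import CharacterTextSplitter
# Chunk size is now measured in tiktoken tokens, not characters
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text(some_long_document)  # placeholder input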
Document Loaders#
Note: see the Conceptual Guide.
Combining language models with your own text data is a powerful way to differentiate them. The first step in doing this is to load the data into “Documents” - a fancy way of saying some pieces of text. The document loader is aimed at making this easy.
The following document loaders are provided:
Transform loaders#
These transform loaders transform data from a specific format into the Document format. For example, there are transformers for CSV and SQL. Mostly, these loaders input data from files but sometimes from URLs.
A primary driver of a lot of these transformers is the Unstructured python package. This package transforms many types of files - text, powerpoint, images, html, pdf, etc - into text data. For detailed instructions on how to get set up with Unstructured, see the installation guidelines here.
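For example, the CSV loader turns each row of a file into a Document. A minimal sketch (the file path is a placeholder):
from langchain.document_loaders import CSVLoader
loader = CSVLoader(file_path="./example_data/data.csv")  # placeholder path
docs = loader.load()  # one Document per row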
CoNLL-U
Copy Paste
CSV
Email
EPub
EverNote
Facebook Chat
File Directory
HTML
Images
Jupyter Notebook
JSON
Markdown
Microsoft PowerPoint
Microsoft Word
Open Document Format (ODT)
Pandas DataFrame
PDF
Sitemap
Subtitle
Telegram
TOML
Unstructured File
URL
Selenium URL Loader
Playwright URL Loader
WebBaseLoader
Weather
WhatsApp Chat
Public dataset or service loaders#
These datasets and sources are in the public domain; we use queries to search them and download the necessary documents. For example, the Hacker News service. We don’t need any access permissions to these datasets and services.
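For example, the Hacker News loader fetches a story and its comments given only the item URL. A minimal sketch (the URL is a placeholder):
from langchain.document_loaders import HNLoader
loader = HNLoader("https://news.ycombinator.com/item?id=34817881")  # placeholder item URL
docs = loader.load()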
Arxiv
AZLyrics
BiliBili
College Confidential
Gutenberg
Hacker News
HuggingFace dataset
iFixit
IMSDb
MediaWikiDump
Wikipedia
YouTube transcripts
Proprietary dataset or service loaders#
These datasets and services are not in the public domain. These loaders mostly transform data from specific formats of applications or cloud services, for example Google Drive. We need access tokens and sometimes other parameters to get access to these datasets and services.
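For example, the Google Drive loader needs OAuth credentials to be set up and the ID of the folder to load. A minimal sketch (the folder ID is a placeholder):
from langchain.document_loaders import GoogleDriveLoader
loader = GoogleDriveLoader(
    folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5",  # placeholder folder ID
    recursive=False,
)
docs = loader.load()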
Airbyte JSON
Apify Dataset
AWS S3 Directory
AWS S3 File
Azure Blob Storage Container
Azure Blob Storage File
Blackboard
Blockchain
ChatGPT Data
Confluence
Diffbot
Discord
Docugami
DuckDB
Figma
GitBook
Git
Google BigQuery
Google Cloud Storage Directory
Google Cloud Storage File
Google Drive
Image captions
Iugu
Joplin
Microsoft OneDrive
Modern Treasury
Notion DB 2/2
Notion DB 1/2
Obsidian
Psychic
ReadTheDocs Documentation
Reddit
Roam
Slack
Spreedly
Stripe
2Markdown
Twitter
Self-querying#
In this notebook we’ll demo the SelfQueryRetriever, which, as the name suggests, has the ability to query itself. Specifically, given any natural language query, the retriever uses a query-constructing LLM chain to write a structured query and then applies that structured query to its underlying VectorStore. This allows the retriever to not only use the user-input query for semantic similarity comparison with the contents of stored documents, but also to extract filters on the metadata of stored documents from the user query and to execute those filters.
Creating a Pinecone index#
First we’ll want to create a Pinecone VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
To use Pinecone, you need to have the pinecone package installed, and you must have an API key and an Environment. Here are the installation instructions.
NOTE: The self-query retriever requires you to have the lark package installed.
# !pip install lark
#!pip install pinecone-client
import os
import pinecone
pinecone.init(api_key=os.environ["PINECONE_API_KEY"], environment=os.environ["PINECONE_ENV"])
/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pinecone/index.py:4: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
from tqdm.autonotebook import tqdm
from langchain.schema import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
embeddings = OpenAIEmbeddings()
# create new index
pinecone.create_index("langchain-self-retriever-demo", dimension=1536)
docs = [
Document(page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": ["action", "science fiction"]}),
Document(page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}),
Document(page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}),
Document(page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}),
Document(page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}),
Document(page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={"year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": ["science fiction", "thriller"], "rating": 9.9})
]
vectorstore = Pinecone.from_documents(
docs, embeddings, index_name="langchain-self-retriever-demo"
)
Creating our self-querying retriever#
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
metadata_field_info=[
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating",
description="A 1-10 rating for the movie",
type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)
Testing it out#
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaur' filter=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': ['action', 'science fiction'], 'rating': 7.7, 'year': 1993.0}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}),
Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'director': 'Christopher Nolan', 'rating': 8.2, 'year': 2010.0})]
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig')
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'director': 'Greta Gerwig', 'rating': 8.3, 'year': 2019.0})]
# This example specifies a composite filter
retriever.get_relevant_documents("What's a highly rated (above 8.5) science fiction film?")
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)])
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})]
# This example specifies a query and composite filter
retriever.get_relevant_documents("What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated")
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990.0), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005.0), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')])
[Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0})]
Filter k#
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True
)
# This example only specifies a relevant query
retriever.get_relevant_documents("What are two movies about dinosaurs")
VectorStore#
The index - and therefore the retriever - that LangChain has the most support for is the VectorStoreRetriever. As the name suggests, this retriever is backed heavily by a VectorStore.
Once you construct a VectorStore, it’s very easy to construct a retriever. Let’s walk through an example.
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(texts, embeddings)
Exiting: Cleaning up .chroma directory
retriever = db.as_retriever()
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
Maximum Marginal Relevance Retrieval#
By default, the vectorstore retriever uses similarity search. If the underlying vectorstore supports maximum marginal relevance search, you can specify that as the search type.
retriever = db.as_retriever(search_type="mmr")
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
Similarity Score Threshold Retrieval#
You can also use a retrieval method that sets a similarity score threshold and only returns documents with a score above that threshold.
retriever = db.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": .5})
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
Specifying top k#
You can also specify search kwargs like k to use when doing retrieval.
retriever = db.as_retriever(search_kwargs={"k": 1})
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
len(docs)
1
Pinecone Hybrid Search#
Pinecone is a vector database with broad functionality.
This notebook goes over how to use a retriever that under the hood uses Pinecone and Hybrid Search.
The logic of this retriever is taken from this documentation.
To use Pinecone, you must have an API key and an Environment. Here are the installation instructions.
#!pip install pinecone-client pinecone-text
import os
import getpass
os.environ['PINECONE_API_KEY'] = getpass.getpass('Pinecone API Key:')
from langchain.retrievers import PineconeHybridSearchRetriever
os.environ['PINECONE_ENVIRONMENT'] = getpass.getpass('Pinecone Environment:')
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
Setup Pinecone#
You should only have to do this part once.
Note: it’s important to make sure that the “context” field that holds the document text in the metadata is not indexed. Currently you need to specify explicitly the fields you do want to index. For more information check out Pinecone’s docs.
import os
import pinecone
api_key = os.getenv("PINECONE_API_KEY") or "PINECONE_API_KEY"
# find environment next to your API key in the Pinecone console
env = os.getenv("PINECONE_ENVIRONMENT") or "PINECONE_ENVIRONMENT"
index_name = "langchain-pinecone-hybrid-search"
pinecone.init(api_key=api_key, environment=env)
pinecone.whoami()
WhoAmIResponse(username='load', user_label='label', projectname='load-test')
# create the index
pinecone.create_index(
name = index_name,
dimension = 1536, # dimensionality of dense model
metric = "dotproduct", # sparse values supported only for dotproduct
pod_type = "s1",
metadata_config={"indexed": []} # see explanation above
)
Now that it’s created, we can use it:
index = pinecone.Index(index_name)
Get embeddings and sparse encoders#
Embeddings are used for the dense vectors; a tokenizer is used for the sparse vector.
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
To encode the text to sparse values you can either choose SPLADE or BM25. For out-of-domain tasks we recommend using BM25.
For more information about the sparse encoders you can check out the pinecone-text library docs.
from pinecone_text.sparse import BM25Encoder
# or from pinecone_text.sparse import SpladeEncoder if you wish to work with SPLADE
# use default tf-idf values
bm25_encoder = BM25Encoder().default()
The above code is using default tf-idf values. It’s highly recommended to fit the tf-idf values to your own corpus. You can do it as follows:
corpus = ["foo", "bar", "world", "hello"]
# fit tf-idf values on your corpus
bm25_encoder.fit(corpus)
# store the values to a json file
bm25_encoder.dump("bm25_values.json")
# load to your BM25Encoder object
bm25_encoder = BM25Encoder().load("bm25_values.json")
Load Retriever#
We can now construct the retriever!
retriever = PineconeHybridSearchRetriever(embeddings=embeddings, sparse_encoder=bm25_encoder, index=index)
Add texts (if necessary)#
We can optionally add texts to the retriever (if they aren’t already in there)
retriever.add_texts(["foo", "bar", "world", "hello"])
100%|██████████| 1/1 [00:02<00:00, 2.27s/it]
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result[0]
Document(page_content='foo', metadata={})
Vespa#
Vespa is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query.
This notebook shows how to use Vespa.ai as a LangChain retriever.
In order to create a retriever, we use pyvespa to create a connection to a Vespa service.
#!pip install pyvespa
from vespa.application import Vespa
vespa_app = Vespa(url="https://doc-search.vespa.oath.cloud")
This creates a connection to a Vespa service, here the Vespa documentation search service. Using the pyvespa package, you can also connect to a Vespa Cloud instance or a local Docker instance.
After connecting to the service, you can set up the retriever:
from langchain.retrievers.vespa_retriever import VespaRetriever
vespa_query_body = {
"yql": "select content from paragraph where userQuery()",
"hits": 5,
"ranking": "documentation",
"locale": "en-us"
}
vespa_content_field = "content"
retriever = VespaRetriever(vespa_app, vespa_query_body, vespa_content_field)
This sets up a LangChain retriever that fetches documents from the Vespa application. Here, up to 5 results are retrieved from the content field in the paragraph document type, using documentation as the ranking method. The userQuery() is replaced with the actual query passed from LangChain.
Please refer to the pyvespa documentation for more information.
Now you can return the results and continue using them in LangChain.
retriever.get_relevant_documents("what is vespa?")
SVM#
Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection.
This notebook goes over how to use a retriever that under the hood uses an SVM, using the scikit-learn package.
Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb
#!pip install scikit-learn
#!pip install lark
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.retrievers import SVMRetriever
from langchain.embeddings import OpenAIEmbeddings
Create New Retriever with Texts#
retriever = SVMRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings())
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={}),
Document(page_content='hello', metadata={}),
Document(page_content='world', metadata={})]
Zep Memory#
Retriever Example#
This notebook demonstrates how to search historical chat message histories using the Zep Long-term Memory Store.
We’ll demonstrate:
Adding conversation history to the Zep memory store.
Vector search over the conversation history.
More on Zep:
Zep stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs.
Key Features:
Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.
Auto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.
Vector search over memories, with messages automatically embedded on creation.
Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.
Python and JavaScript SDKs.
Zep’s Go Extractor model is easily extensible, with a simple, clean interface available to build new enrichment functionality, such as summarizers, entity extractors, embedders, and more.
Zep project: getzep/zep
from langchain.memory.chat_message_histories import ZepChatMessageHistory
from langchain.schema import HumanMessage, AIMessage
from uuid import uuid4
# Set this to your Zep server URL
ZEP_API_URL = "http://localhost:8000"
Initialize the Zep Chat Message History Class and add a chat message history to the memory store#
NOTE: Unlike other Retrievers, the content returned by the Zep Retriever is session/user specific. A session_id is required when instantiating the Retriever.
session_id = str(uuid4()) # This is a unique identifier for the user/session
# Set up Zep Chat History. We'll use this to add chat histories to the memory store
zep_chat_history = ZepChatMessageHistory(
session_id=session_id,
url=ZEP_API_URL,
)
# Preload some messages into the memory. The default message window is 12 messages. We want to push beyond this to demonstrate auto-summarization.
test_history = [
{"role": "human", "content": "Who was Octavia Butler?"},
{
"role": "ai",
"content": (
"Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American"
" science fiction author."
),
},
{"role": "human", "content": "Which books of hers were made into movies?"},
{
"role": "ai",
"content": (
"The most well-known adaptation of Octavia Butler's work is the FX series"
" Kindred, based on her novel of the same name."
),
},
{"role": "human", "content": "Who were her contemporaries?"},
{
"role": "ai",
"content": (
"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R."
" Delany, and Joanna Russ."
),
},
{"role": "human", "content": "What awards did she win?"},
{
"role": "ai",
"content": (
"Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur"
" Fellowship."
),
},
{
"role": "human",
"content": "Which other women sci-fi writers might I want to read?",
},
{
"role": "ai",
"content": "You might want to read Ursula K. Le Guin or Joanna Russ.",
},
{
"role": "human",
"content": (
"Write a short synopsis of Butler's book, Parable of the Sower. What is it"
" about?"
),
},
{
"role": "ai",
"content": (
"Parable of the Sower is a science fiction novel by Octavia Butler,"
" published in 1993. It follows the story of Lauren Olamina, a young woman"
" living in a dystopian future where society has collapsed due to"
" environmental disasters, poverty, and violence."
),
},
]
for msg in test_history:
zep_chat_history.append(
HumanMessage(content=msg["content"])
if msg["role"] == "human"
else AIMessage(content=msg["content"])
)
Use the Zep Retriever to vector search over the Zep memory#
Zep provides native vector search over historical conversation memory. Embedding happens automatically.
NOTE: Embedding of messages occurs asynchronously, so the first query may not return results. Subsequent queries will return results as the embeddings are generated.
from langchain.retrievers import ZepRetriever
zep_retriever = ZepRetriever(
session_id=session_id, # Ensure that you provide the session_id when instantiating the Retriever
url=ZEP_API_URL,
top_k=5,
)
await zep_retriever.aget_relevant_documents("Who wrote Parable of the Sower?")
[Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759001673780126, 'uuid': '3a82a02f-056e-4c6a-b960-67ebdf3b2b93', 'created_at': '2023-05-25T15:03:30.2041Z', 'role': 'human', 'token_count': 8}),
Document(page_content="Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.", metadata={'score': 0.7602262941130749, 'uuid': 'a2fc9c21-0897-46c8-bef7-6f5c0f71b04a', 'created_at': '2023-05-25T15:03:30.248065Z', 'role': 'ai', 'token_count': 27}),
Document(page_content='Who were her contemporaries?', metadata={'score': 0.757553366415519, 'uuid': '41f9c41a-a205-41e1-b48b-a0a4cd943fc8', 'created_at': '2023-05-25T15:03:30.243995Z', 'role': 'human', 'token_count': 8}),
Document(page_content='Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American science fiction author.', metadata={'score': 0.7546211059317948, 'uuid': '34678311-0098-4f1a-8fd4-5615ac692deb', 'created_at': '2023-05-25T15:03:30.231427Z', 'role': 'ai', 'token_count': 31}),
Document(page_content='Which books of hers were made into movies?', metadata={'score': 0.7496714959247069, 'uuid': '18046c3a-9666-4d3e-b4f0-43d1394732b7', 'created_at': '2023-05-25T15:03:30.236837Z', 'role': 'human', 'token_count': 11})]
We can also use the Zep sync API to retrieve results:
zep_retriever.get_relevant_documents("Who wrote Parable of the Sower?")
[Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', metadata={'score': 0.8897321402776546, 'uuid': '1c09603a-52c1-40d7-9d69-29f26256029c', 'created_at': '2023-05-25T15:03:30.268257Z', 'role': 'ai', 'token_count': 56}),
Document(page_content="Write a short synopsis of Butler's book, Parable of the Sower. What is it about?", metadata={'score': 0.8857628682610436, 'uuid': 'f6706e8c-6c91-452f-8c1b-9559fd924657', 'created_at': '2023-05-25T15:03:30.265302Z', 'role': 'human', 'token_count': 23}),
Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759670375149477, 'uuid': '3a82a02f-056e-4c6a-b960-67ebdf3b2b93', 'created_at': '2023-05-25T15:03:30.2041Z', 'role': 'human', 'token_count': 8}),
Document(page_content="Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.", metadata={'score': 0.7602854653476563, 'uuid': 'a2fc9c21-0897-46c8-bef7-6f5c0f71b04a', 'created_at': '2023-05-25T15:03:30.248065Z', 'role': 'ai', 'token_count': 27}),
Document(page_content='You might want to read Ursula K. Le Guin or Joanna Russ.', metadata={'score': 0.7595293992240313, 'uuid': 'f22f2498-6118-4c74-8718-aa89ccd7e3d6', 'created_at': '2023-05-25T15:03:30.261198Z', 'role': 'ai', 'token_count': 18})]
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/zep_memorystore.html
|
7e24fdfd7994-0
|
Wikipedia#
Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.
This notebook shows how to retrieve wiki pages from wikipedia.org into the Document format that is used downstream.
Installation#
First, you need to install the wikipedia Python package.
#!pip install wikipedia
WikipediaRetriever has these arguments:
optional lang: default="en". Use it to search in a specific language edition of Wikipedia.
optional load_max_docs: default=100. Use it to limit the number of downloaded documents. Downloading all 100 documents takes time, so use a small number for experiments. There is currently a hard limit of 300.
optional load_all_available_meta: default=False. By default, only the most important fields are downloaded: Published (the date the document was published or last updated), title, and Summary. If True, the remaining fields are downloaded as well.
get_relevant_documents() has one argument, query: the free-text query used to find documents in Wikipedia.
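For example, a retriever limited to two documents from the German-language Wikipedia can be constructed as follows (a minimal sketch using the arguments described above; the values are purely illustrative):
from langchain.retrievers import WikipediaRetriever
# Illustrative configuration: search de.wikipedia.org and download at most two documents
retriever_de = WikipediaRetriever(lang="de", load_max_docs=2, load_all_available_meta=False)
docs_de = retriever_de.get_relevant_documents(query='HUNTER X HUNTER')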
Examples#
Running retriever#
from langchain.retrievers import WikipediaRetriever
retriever = WikipediaRetriever()
docs = retriever.get_relevant_documents(query='HUNTER X HUNTER')
docs[0].metadata # meta-information of the Document
{'title': 'Hunter × Hunter',
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/wikipedia.html
|
7e24fdfd7994-1
|
'summary': 'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced "hunter hunter") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The story focuses on a young boy named Gon Freecss who discovers that his father, who left him at a young age, is actually a world-renowned Hunter, a licensed professional who specializes in fantastical pursuits such as locating rare or unidentified animal species, treasure hunting, surveying unexplored enclaves, or hunting down lawless individuals. Gon departs on a journey to become a Hunter and eventually find his father. Along the way, Gon meets various other Hunters and encounters the paranormal.\nHunter × Hunter was adapted into a 62-episode anime television
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/wikipedia.html
|
7e24fdfd7994-2
|
Hunter was adapted into a 62-episode anime television series produced by Nippon Animation and directed by Kazuhiro Furuhashi, which ran on Fuji Television from October 1999 to March 2001. Three separate original video animations (OVAs) totaling 30 episodes were subsequently produced by Nippon Animation and released in Japan from 2002 to 2004. A second anime television series by Madhouse aired on Nippon Television from October 2011 to September 2014, totaling 148 episodes, with two animated theatrical films released in 2013. There are also numerous audio albums, video games, musicals, and other media based on Hunter × Hunter.\nThe manga has been translated into English and released in North America by Viz Media since April 2005. Both television series have been also licensed by Viz Media, with the first series having aired on the Funimation Channel in 2009 and the second series broadcast on Adult Swim\'s Toonami programming block from April 2016 to June 2019.\nHunter × Hunter has been a huge critical and financial success and
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/wikipedia.html
|
7e24fdfd7994-3
|
Hunter has been a huge critical and financial success and has become one of the best-selling manga series of all time, having over 84 million copies in circulation by July 2022.\n\n'}
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/wikipedia.html
|
7e24fdfd7994-4
|
docs[0].page_content[:400] # the content of the Document
'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced "hunter hunter") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The sto'
Question Answering on facts#
# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass
OPENAI_API_KEY = getpass()
········
import os
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
model = ChatOpenAI(model_name='gpt-3.5-turbo') # switch to 'gpt-4'
qa = ConversationalRetrievalChain.from_llm(model,retriever=retriever)
questions = [
"What is Apify?",
"When the Monument to the Martyrs of the 1830 Revolution was created?",
"What is the Abhayagiri Vihāra?",
# "How big is Wikipédia en français?",
]
chat_history = []
for question in questions:
result = qa({"question": question, "chat_history": chat_history})
chat_history.append((question, result['answer']))
print(f"-> **Question**: {question} \n")
print(f"**Answer**: {result['answer']} \n")
-> **Question**: What is Apify?
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/wikipedia.html
|
7e24fdfd7994-5
|
**Answer**: Apify is a platform that allows you to easily automate web scraping, data extraction and web automation. It provides a cloud-based infrastructure for running web crawlers and other automation tasks, as well as a web-based tool for building and managing your crawlers. Additionally, Apify offers a marketplace for buying and selling pre-built crawlers and related services.
-> **Question**: When the Monument to the Martyrs of the 1830 Revolution was created?
**Answer**: Apify is a web scraping and automation platform that enables you to extract data from websites, turn unstructured data into structured data, and automate repetitive tasks. It provides a user-friendly interface for creating web scraping scripts without any coding knowledge. Apify can be used for various web scraping tasks such as data extraction, web monitoring, content aggregation, and much more. Additionally, it offers various features such as proxy support, scheduling, and integration with other tools to make web scraping and automation tasks easier and more efficient.
-> **Question**: What is the Abhayagiri Vihāra?
**Answer**: Abhayagiri Vihāra was a major monastery site of Theravada Buddhism that was located in Anuradhapura, Sri Lanka. It was founded in the 2nd century BCE and is considered to be one of the most important monastic complexes in Sri Lanka.
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/wikipedia.html
|
fe884c51d3dd-0
|
Self-querying with Chroma#
Chroma is a database for building AI applications with embeddings.
In the notebook we’ll demo the SelfQueryRetriever wrapped around a Chroma vector store.
Creating a Chroma vectorstore#
First we’ll want to create a Chroma VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
NOTE: The self-query retriever requires you to have lark installed (pip install lark). We also need the chromadb package.
#!pip install lark
#!pip install chromadb
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.schema import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
embeddings = OpenAIEmbeddings()
docs = [
Document(page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}),
Document(page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}),
Document(page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}),
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html
|
fe884c51d3dd-1
|
Document(page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}),
Document(page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}),
Document(page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={"year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", "rating": 9.9})
]
vectorstore = Chroma.from_documents(
docs, embeddings
)
Using embedded DuckDB without persistence: data will be transient
Creating our self-querying retriever#
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
metadata_field_info=[
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating",
description="A 1-10 rating for the movie",
type="float"
),
]
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html
|
fe884c51d3dd-2
|
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)
Testing it out#
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaur' filter=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}),
Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2})]
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}),
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html
|
fe884c51d3dd-3
|
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig')
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]
# This example specifies a composite filter
retriever.get_relevant_documents("What's a highly rated (above 8.5) science fiction film?")
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)])
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]
# This example specifies a query and composite filter
retriever.get_relevant_documents("What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated")
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html
|
fe884c51d3dd-4
|
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')])
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
Filter k#
We can also use the self-query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True
)
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
query='dinosaur' filter=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}),
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html
|
fe884c51d3dd-5
|
Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2})]
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html
|
c9595e89e64f-0
|
Azure Cognitive Search Retriever#
This notebook shows how to use Azure Cognitive Search (ACS) within LangChain.
Set up Azure Cognitive Search#
To set up ACS, please follow the instructions here.
Please note
the name of your ACS service,
the name of your ACS index,
your API key.
Your API key can be either an Admin or a Query key; since we only read data, it is recommended to use a Query key.
Using the Azure Cognitive Search Retriever#
import os
from langchain.retrievers import AzureCognitiveSearchRetriever
Set the Service Name, Index Name, and API key as environment variables (alternatively, you can pass them as arguments to AzureCognitiveSearchRetriever, as sketched below).
os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = "<YOUR_ACS_SERVICE_NAME>"
os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] ="<YOUR_ACS_INDEX_NAME>"
os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = "<YOUR_API_KEY>"
Create the Retriever
retriever = AzureCognitiveSearchRetriever(content_key="content")
Now you can retrieve documents from Azure Cognitive Search
retriever.get_relevant_documents("what is langchain")
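Alternatively, the settings can be passed directly to the constructor instead of via environment variables. A minimal sketch, assuming the constructor accepts keyword arguments matching the settings above (check the class reference for the exact parameter names):
retriever = AzureCognitiveSearchRetriever(
    service_name="<YOUR_ACS_SERVICE_NAME>",  # assumed parameter name
    index_name="<YOUR_ACS_INDEX_NAME>",  # assumed parameter name
    api_key="<YOUR_API_KEY>",  # assumed parameter name
    content_key="content",
)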
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/azure-cognitive-search-retriever.html
|
dc504e5569ec-0
|
Self-querying with Weaviate#
Creating a Weaviate vectorstore#
First we’ll want to create a Weaviate VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
NOTE: The self-query retriever requires you to have lark installed (pip install lark). We also need the weaviate-client package.
#!pip install lark weaviate-client
from langchain.schema import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Weaviate
import os
embeddings = OpenAIEmbeddings()
docs = [
Document(page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}),
Document(page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}),
Document(page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}),
Document(page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}),
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate_self_query.html
|
dc504e5569ec-1
|
Document(page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}),
Document(page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={"year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", "rating": 9.9})
]
vectorstore = Weaviate.from_documents(
docs, embeddings, weaviate_url="http://127.0.0.1:8080"
)
Creating our self-querying retriever#
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
metadata_field_info=[
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating",
description="A 1-10 rating for the movie",
type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate_self_query.html
|
dc504e5569ec-2
|
retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)
Testing it out#
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaur' filter=None limit=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'rating': None, 'year': 1995}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'genre': 'science fiction', 'rating': 9.9, 'year': 1979}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'genre': None, 'rating': 8.6, 'year': 2006})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'genre': None, 'rating': 8.3, 'year': 2019})]
Filter k#
We can also use the self-query retriever to specify k: the number of documents to fetch.
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate_self_query.html
|
dc504e5569ec-3
|
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True
)
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
query='dinosaur' filter=None limit=2
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'rating': None, 'year': 1995})]
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate_self_query.html
|
507ff6ce963e-0
|
ElasticSearch BM25#
Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
In information retrieval, Okapi BM25 (BM is an abbreviation of best matching) is a ranking function used by search engines to estimate the relevance of documents to a given search query. It is based on the probabilistic retrieval framework developed in the 1970s and 1980s by Stephen E. Robertson, Karen Spärck Jones, and others.
The name of the actual ranking function is BM25. The fuller name, Okapi BM25, includes the name of the first system to use it, which was the Okapi information retrieval system, implemented at London’s City University in the 1980s and 1990s. BM25 and its newer variants, e.g. BM25F (a version of BM25 that can take document structure and anchor text into account), represent TF-IDF-like retrieval functions used in document retrieval.
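For a query Q with terms q_1, …, q_n, the standard BM25 score of a document D is:
\mathrm{score}(D, Q) = \sum_{i=1}^{n} \mathrm{IDF}(q_i) \cdot \frac{f(q_i, D)\,(k_1 + 1)}{f(q_i, D) + k_1 \left(1 - b + b \cdot \frac{|D|}{\mathrm{avgdl}}\right)}
where f(q_i, D) is the frequency of q_i in D, |D| is the length of D in words, avgdl is the average document length in the collection, and k_1 and b are free parameters, typically k_1 ∈ [1.2, 2.0] and b = 0.75.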
This notebook shows how to use a retriever that uses ElasticSearch and BM25.
For more information on the details of BM25 see this blog post.
#!pip install elasticsearch
from langchain.retrievers import ElasticSearchBM25Retriever
Create New Retriever#
elasticsearch_url="http://localhost:9200"
retriever = ElasticSearchBM25Retriever.create(elasticsearch_url, "langchain-index-4")
# Alternatively, you can load an existing index
# import elasticsearch
# elasticsearch_url="http://localhost:9200"
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/elastic_search_bm25.html
|
507ff6ce963e-1
|
# retriever = ElasticSearchBM25Retriever(elasticsearch.Elasticsearch(elasticsearch_url), "langchain-index")
Add texts (if necessary)#
We can optionally add texts to the retriever (if they aren’t already in there)
retriever.add_texts(["foo", "bar", "world", "hello", "foo bar"])
['cbd4cb47-8d9f-4f34-b80e-ea871bc49856',
'f3bd2e24-76d1-4f9b-826b-ec4c0e8c7365',
'8631bfc8-7c12-48ee-ab56-8ad5f373676e',
'8be8374c-3253-4d87-928d-d73550a2ecf0',
'd79f457b-2842-4eab-ae10-77aa420b53d7']
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={})]
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/elastic_search_bm25.html
|
c006ad9fb6d5-0
|
kNN#
In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression.
This notebook goes over how to use a retriever that uses k-NN under the hood.
Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb
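Under the hood, this amounts to embedding the query, scoring it against the stored text embeddings, and returning the k nearest matches. A minimal numpy sketch of the idea (illustrative only, not the library's implementation):
import numpy as np
def knn_top_k(query_vec, doc_vecs, k=4):
    # Cosine similarity between the query vector and every document embedding
    sims = doc_vecs @ query_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
    # Indices of the k most similar documents, best first
    return np.argsort(-sims)[:k]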
from langchain.retrievers import KNNRetriever
from langchain.embeddings import OpenAIEmbeddings
Create New Retriever with Texts#
retriever = KNNRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings())
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={}),
Document(page_content='hello', metadata={}),
Document(page_content='bar', metadata={})]
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/knn.html
|
36eb9ff449a9-0
|
TF-IDF#
TF-IDF means term-frequency times inverse document-frequency.
This notebook goes over how to use a retriever that uses TF-IDF under the hood, via the scikit-learn package.
For more information on the details of TF-IDF see this blog post.
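Concretely, for a term t in a document d drawn from a corpus of N documents, the basic formulation is:
\text{tf-idf}(t, d) = \mathrm{tf}(t, d) \cdot \log\frac{N}{\mathrm{df}(t)}
where tf(t, d) is the number of occurrences of t in d and df(t) is the number of documents containing t. (scikit-learn's default implementation uses a smoothed variant, \log\frac{1 + N}{1 + \mathrm{df}(t)} + 1, and L2-normalizes the resulting vectors.)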
# !pip install scikit-learn
from langchain.retrievers import TFIDFRetriever
Create New Retriever with Texts#
retriever = TFIDFRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"])
Create a New Retriever with Documents#
You can now create a new retriever with the documents you created.
from langchain.schema import Document
retriever = TFIDFRetriever.from_documents([Document(page_content="foo"), Document(page_content="bar"), Document(page_content="world"), Document(page_content="hello"), Document(page_content="foo bar")])
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={}),
Document(page_content='hello', metadata={}),
Document(page_content='world', metadata={})]
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/tf_idf.html
|
4edd9767f8de-0
|
Databerry#
The Databerry platform brings data from anywhere (Datasources: Text, PDF, Word, PowerPoint, Excel, Notion, Airtable, Google Sheets, etc.) into Datastores (containers of multiple Datasources).
Your Datastores can then be connected to ChatGPT via Plugins or to any other Large Language Model (LLM) via the Databerry API.
This notebook shows how to use Databerry’s retriever.
First, you will need to sign up for Databerry, create a datastore, add some data, and get your datastore API endpoint URL. You will also need your API key.
Query#
Now that our index is set up, we can set up a retriever and start querying it.
from langchain.retrievers import DataberryRetriever
retriever = DataberryRetriever(
datastore_url="https://clg1xg2h80000l708dymr0fxc.databerry.ai/query",
# api_key="DATABERRY_API_KEY", # optional if datastore is public
# top_k=10 # optional
)
retriever.get_relevant_documents("What is Daftpage?")
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/databerry.html
|
4edd9767f8de-1
|
[Document(page_content='✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramGetting StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!DaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord', metadata={'source': 'https:/daftpage.com/help/getting-started', 'score': 0.8697265}),
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/databerry.html
|
4edd9767f8de-2
|
Document(page_content="✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramHelp CenterWelcome to Daftpage’s help center—the one-stop shop for learning everything about building websites with Daftpage.Daftpage is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at [email protected] the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.86570895}),
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/databerry.html
|
4edd9767f8de-3
|
Document(page_content=" is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at [email protected] the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.8645384})]
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/databerry.html
|
55c9f9175b11-0
|
ChatGPT Plugin#
OpenAI plugins connect ChatGPT to third-party applications. These plugins enable ChatGPT to interact with APIs defined by developers, enhancing ChatGPT’s capabilities and allowing it to perform a wide range of actions.
Plugins can allow ChatGPT to do things like:
Retrieve real-time information; e.g., sports scores, stock prices, the latest news, etc.
Retrieve knowledge-base information; e.g., company docs, personal notes, etc.
Perform actions on behalf of the user; e.g., booking a flight, ordering food, etc.
This notebook shows how to use the ChatGPT Retriever Plugin within LangChain.
# STEP 1: Load
# Load documents using LangChain's DocumentLoaders
# This is from https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/csv.html
from langchain.document_loaders.csv_loader import CSVLoader
loader = CSVLoader(file_path='../../document_loaders/examples/example_data/mlb_teams_2012.csv')
data = loader.load()
# STEP 2: Convert
# Convert Document to format expected by https://github.com/openai/chatgpt-retrieval-plugin
from typing import List
from langchain.docstore.document import Document
import json
def write_json(path: str, documents: List[Document]) -> None:
results = [{"text": doc.page_content} for doc in documents]
with open(path, "w") as f:
json.dump(results, f, indent=2)
write_json("foo.json", data)
# STEP 3: Use
# Ingest this as you would any other json file in https://github.com/openai/chatgpt-retrieval-plugin/tree/main/scripts/process_json
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chatgpt-plugin.html
|
55c9f9175b11-1
|
Using the ChatGPT Retriever Plugin#
Okay, so we’ve created the ChatGPT Retriever Plugin, but how do we actually use it?
The below code walks through how to do that.
We want to use ChatGPTPluginRetriever so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.retrievers import ChatGPTPluginRetriever
retriever = ChatGPTPluginRetriever(url="http://0.0.0.0:8000", bearer_token="foo")
retriever.get_relevant_documents("alice's phone number")
[Document(page_content="This is Alice's phone number: 123-456-7890", lookup_str='', metadata={'id': '456_0', 'metadata': {'source': 'email', 'source_id': '567', 'url': None, 'created_at': '1609592400.0', 'author': 'Alice', 'document_id': '456'}, 'embedding': None, 'score': 0.925571561}, lookup_index=0),
Document(page_content='This is a document about something', lookup_str='', metadata={'id': '123_0', 'metadata': {'source': 'file', 'source_id': 'https://example.com/doc1', 'url': 'https://example.com/doc1', 'created_at': '1609502400.0', 'author': 'Alice', 'document_id': '123'}, 'embedding': None, 'score': 0.6987589}, lookup_index=0),
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chatgpt-plugin.html
|
55c9f9175b11-2
|
Document(page_content='Team: Angels "Payroll (millions)": 154.49 "Wins": 89', lookup_str='', metadata={'id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631_0', 'metadata': {'source': None, 'source_id': None, 'url': None, 'created_at': None, 'author': None, 'document_id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631'}, 'embedding': None, 'score': 0.697888613}, lookup_index=0)]
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chatgpt-plugin.html
|
91d7c16812d8-0
|
Weaviate Hybrid Search#
Weaviate is an open source vector database.
Hybrid search is a technique that combines multiple search algorithms to improve the accuracy and relevance of search results. It brings together the strengths of keyword-based search algorithms and vector search techniques.
The Hybrid search in Weaviate uses sparse and dense vectors to represent the meaning and context of search queries and documents.
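At a high level, the two scores are fused with a weighting parameter alpha, roughly:
\text{score}_{\text{hybrid}} = \alpha \cdot \text{score}_{\text{vector}} + (1 - \alpha) \cdot \text{score}_{\text{keyword}}
so that alpha = 1 corresponds to pure vector search and alpha = 0 to pure keyword (BM25) search. (This is a simplified description; see the Weaviate documentation for the exact fusion algorithm.)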
This notebook shows how to use Weaviate hybrid search as a LangChain retriever.
Set up the retriever:
#!pip install weaviate-client
import weaviate
import os
WEAVIATE_URL = os.getenv("WEAVIATE_URL")
client = weaviate.Client(
url=WEAVIATE_URL,
auth_client_secret=weaviate.AuthApiKey(api_key=os.getenv("WEAVIATE_API_KEY")),
additional_headers={
"X-Openai-Api-Key": os.getenv("OPENAI_API_KEY"),
},
)
# client.schema.delete_all()
from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
from langchain.schema import Document
retriever = WeaviateHybridSearchRetriever(
client, index_name="LangChain", text_key="text"
)
Add some data:
docs = [
Document(
metadata={
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate-hybrid.html
|
91d7c16812d8-1
|
"title": "Embracing The Future: AI Unveiled",
"author": "Dr. Rebecca Simmons",
},
page_content="A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.",
),
Document(
metadata={
"title": "Symbiosis: Harmonizing Humans and AI",
"author": "Prof. Jonathan K. Sterling",
},
page_content="Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.",
),
Document(
metadata={"title": "AI: The Ethical Quandary", "author": "Dr. Rebecca Simmons"},
page_content="In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.",
),
Document(
metadata={
"title": "Conscious Constructs: The Search for AI Sentience",
"author": "Dr. Samuel Cortez",
},
page_content="Dr. Cortez takes readers on a journey exploring the controversial topic of AI consciousness. The book provides compelling arguments for and against the possibility of true AI sentience.",
),
Document(
metadata={
"title": "Invisible Routines: Hidden AI in Everyday Life",
"author": "Prof. Jonathan K. Sterling",
},
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate-hybrid.html
|
91d7c16812d8-2
|
"author": "Prof. Jonathan K. Sterling",
},
page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.",
),
]
retriever.add_documents(docs)
['eda16d7d-437d-4613-84ae-c2e38705ec7a',
'04b501bf-192b-4e72-be77-2fbbe7e67ebf',
'18a1acdb-23b7-4482-ab04-a6c2ed51de77',
'88e82cc3-c020-4b5a-b3c6-ca7cf3fc6a04',
'f6abd9d5-32ed-46c4-bd08-f8d0f7c9fc95']
Do a hybrid search:
retriever.get_relevant_documents("the ethical implications of AI")
[Document(page_content='In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.', metadata={}),
Document(page_content='A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.', metadata={}),
Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={}),
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate-hybrid.html
|
91d7c16812d8-3
|
Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={})]
Do a hybrid search with a where filter:
retriever.get_relevant_documents(
"AI integration in society",
where_filter={
"path": ["author"],
"operator": "Equal",
"valueString": "Prof. Jonathan K. Sterling",
},
)
[Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={}),
Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={})]
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate-hybrid.html
|
f4d5f871bde4-0
|
Arxiv#
arXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.
This notebook shows how to retrieve scientific articles from Arxiv.org into the Document format that is used downstream.
Installation#
First, you need to install the arxiv Python package.
#!pip install arxiv
ArxivRetriever has these arguments:
optional load_max_docs: default=100. Use it to limit the number of downloaded documents. Downloading all 100 documents takes time, so use a small number for experiments. There is currently a hard limit of 300.
optional load_all_available_meta: default=False. By default, only the most important fields are downloaded: Published (the date the document was published or last updated), Title, Authors, and Summary. If True, the remaining fields are downloaded as well.
get_relevant_documents() has one argument, query: the free-text query used to find documents on Arxiv.org.
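For example, a retriever that downloads at most two documents along with all available metadata can be constructed as follows (a minimal sketch using the arguments described above):
from langchain.retrievers import ArxivRetriever
# Illustrative configuration: fetch at most two documents with full metadata
retriever_full_meta = ArxivRetriever(load_max_docs=2, load_all_available_meta=True)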
Examples#
Running retriever#
from langchain.retrievers import ArxivRetriever
retriever = ArxivRetriever(load_max_docs=2)
docs = retriever.get_relevant_documents(query='1605.08386')
docs[0].metadata # meta-information of the Document
{'Published': '2016-05-26',
'Title': 'Heat-bath random walks with Markov bases',
'Authors': 'Caprice Stanley, Tobias Windisch',
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/arxiv.html
|
f4d5f871bde4-1
|
'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}
docs[0].page_content[:400] # the content of the Document
'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on fibers of a\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\nbehaviour of heat-b'
Question Answering on facts#
# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass
OPENAI_API_KEY = getpass()
import os
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
model = ChatOpenAI(model_name='gpt-3.5-turbo') # switch to 'gpt-4'
qa = ConversationalRetrievalChain.from_llm(model,retriever=retriever)
questions = [
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/arxiv.html
|
f4d5f871bde4-2
|
"What are Heat-bath random walks with Markov base?",
"What is the ImageBind model?",
"How does Compositional Reasoning with Large Language Models works?",
]
chat_history = []
for question in questions:
result = qa({"question": question, "chat_history": chat_history})
chat_history.append((question, result['answer']))
print(f"-> **Question**: {question} \n")
print(f"**Answer**: {result['answer']} \n")
-> **Question**: What are Heat-bath random walks with Markov base?
**Answer**: I'm not sure, as I don't have enough context to provide a definitive answer. The term "Heat-bath random walks with Markov base" is not mentioned in the given text. Could you provide more information or context about where you encountered this term?
-> **Question**: What is the ImageBind model?
**Answer**: ImageBind is an approach developed by Facebook AI Research to learn a joint embedding across six different modalities, including images, text, audio, depth, thermal, and IMU data. The approach uses the binding property of images to align each modality's embedding to image embeddings and achieve an emergent alignment across all modalities. This enables novel multimodal capabilities, including cross-modal retrieval, embedding-space arithmetic, and audio-to-image generation, among others. The approach sets a new state-of-the-art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Additionally, it shows strong few-shot recognition results and serves as a new way to evaluate vision models for visual and non-visual tasks.
-> **Question**: How does Compositional Reasoning with Large Language Models works?
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/arxiv.html
|
f4d5f871bde4-3
|
**Answer**: Compositional reasoning with large language models refers to the ability of these models to correctly identify and represent complex concepts by breaking them down into smaller, more basic parts and combining them in a structured way. This involves understanding the syntax and semantics of language and using that understanding to build up more complex meanings from simpler ones.
In the context of the paper "Does CLIP Bind Concepts? Probing Compositionality in Large Image Models", the authors focus specifically on the ability of a large pretrained vision and language model (CLIP) to encode compositional concepts and to bind variables in a structure-sensitive way. They examine CLIP's ability to compose concepts in a single-object setting, as well as in situations where concept binding is needed.
The authors situate their work within the tradition of research on compositional distributional semantics models (CDSMs), which seek to bridge the gap between distributional models and formal semantics by building architectures which operate over vectors yet still obey traditional theories of linguistic composition. They compare the performance of CLIP with several architectures from research on CDSMs to evaluate its ability to encode and reason about compositional concepts.
questions = [
"What are Heat-bath random walks with Markov base? Include references to answer.",
]
chat_history = []
for question in questions:
result = qa({"question": question, "chat_history": chat_history})
chat_history.append((question, result['answer']))
print(f"-> **Question**: {question} \n")
print(f"**Answer**: {result['answer']} \n")
-> **Question**: What are Heat-bath random walks with Markov base? Include references to answer.
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/arxiv.html
|
f4d5f871bde4-4
|
**Answer**: Heat-bath random walks with Markov base (HB-MB) is a class of stochastic processes that have been studied in the field of statistical mechanics and condensed matter physics. In these processes, a particle moves in a lattice by making a transition to a neighboring site, which is chosen according to a probability distribution that depends on the energy of the particle and the energy of its surroundings.
The HB-MB process was introduced by Bortz, Kalos, and Lebowitz in 1975 as a way to simulate the dynamics of interacting particles in a lattice at thermal equilibrium. The method has been used to study a variety of physical phenomena, including phase transitions, critical behavior, and transport properties.
References:
Bortz, A. B., Kalos, M. H., & Lebowitz, J. L. (1975). A new algorithm for Monte Carlo simulation of Ising spin systems. Journal of Computational Physics, 17(1), 10-18.
Binder, K., & Heermann, D. W. (2010). Monte Carlo simulation in statistical physics: an introduction. Springer Science & Business Media.
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/arxiv.html
|
240db575d60d-0
|
Contextual Compression#
This notebook introduces the concept of DocumentCompressors and the ContextualCompressionRetriever. The core idea is simple: given a specific query, we should be able to return only the documents relevant to that query, and only the parts of those documents that are relevant. The ContextualCompressionRetriever is a wrapper around another retriever that iterates over the initial output of the base retriever, filtering and compressing those documents so that only the most relevant information is returned.
# Helper function for printing docs
def pretty_print_docs(docs):
print(f"\n{'-' * 100}\n".join([f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)]))
Using a vanilla vector store retriever#
Let’s start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can see that given an example question our retriever returns one or two relevant docs and a few irrelevant docs. And even the relevant docs have a lot of irrelevant information in them.
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_loaders import TextLoader
from langchain.vectorstores import FAISS
documents = TextLoader('../../../state_of_the_union.txt').load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
|
240db575d60d-1
|
retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever()
docs = retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
pretty_print_docs(docs)
Document 1:
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
|
240db575d60d-2
|
We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
----------------------------------------------------------------------------------------------------
Document 3:
And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong.
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.
And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things.
So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together.
First, beat the opioid epidemic.
----------------------------------------------------------------------------------------------------
Document 4:
Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers.
And as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up.
That ends on my watch.
Medicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect.
We’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees.
Let’s pass the Paycheck Fairness Act and paid leave.
Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty.
|
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
|
240db575d60d-3
|
Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.
Adding contextual compression with an LLMChainExtractor#
Now let’s wrap our base retriever with a ContextualCompressionRetriever. We’ll add an LLMChainExtractor, which will iterate over the initially returned documents and extract from each only the content that is relevant to the query.
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
llm = OpenAI(temperature=0)
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
pretty_print_docs(compressed_docs)
Document 1:
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence."
----------------------------------------------------------------------------------------------------
Document 2:
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
240db575d60d-4
More built-in compressors: filters#
LLMChainFilter#
The LLMChainFilter is a slightly simpler but more robust compressor that uses an LLM chain to decide which of the initially retrieved documents to filter out and which ones to return, without manipulating the document contents.
from langchain.retrievers.document_compressors import LLMChainFilter
_filter = LLMChainFilter.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(base_compressor=_filter, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
pretty_print_docs(compressed_docs)
Document 1:
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
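Conceptually, this reduces to a yes/no LLM call per document: anything judged irrelevant is dropped, and what remains passes through unchanged. The prompt and naive_filter helper below are a sketch under that assumption, not the real LLMChainFilter internals.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
filter_prompt = PromptTemplate(
    input_variables=["question", "context"],
    template=(
        "Is the context below relevant to the question? Answer YES or NO.\n\n"
        "Question: {question}\n\nContext:\n{context}\n\nAnswer:"
    ),
)
filter_chain = LLMChain(llm=OpenAI(temperature=0), prompt=filter_prompt)
def naive_filter(docs, question):
    # Documents are kept or dropped whole; their contents are never edited.
    return [
        d for d in docs
        if "YES" in filter_chain.run(question=question, context=d.page_content).upper()
    ]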
EmbeddingsFilter#
Making an extra LLM call over each retrieved document is expensive and slow. The EmbeddingsFilter provides a cheaper and faster option by embedding the documents and query and only returning those documents which have sufficiently similar embeddings to the query.
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.document_compressors import EmbeddingsFilter
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
240db575d60d-5
embeddings = OpenAIEmbeddings()
embeddings_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)
compression_retriever = ContextualCompressionRetriever(base_compressor=embeddings_filter, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
pretty_print_docs(compressed_docs)
Document 1:
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
240db575d60d-6
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
----------------------------------------------------------------------------------------------------
Document 3:
And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong.
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.
And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things.
So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together.
First, beat the opioid epidemic.
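The similarity_threshold above is a cosine-similarity cutoff between the query embedding and each document embedding. A rough sketch of that logic follows (the real EmbeddingsFilter embeds documents in a batch and is vectorized; keep_if_similar is illustrative only):
import numpy as np
def keep_if_similar(docs, query, embeddings, threshold=0.76):
    # Embed the query once, embed the documents, and keep those whose
    # cosine similarity to the query clears the threshold.
    q = np.array(embeddings.embed_query(query))
    doc_vecs = embeddings.embed_documents([d.page_content for d in docs])
    kept = []
    for d, v in zip(docs, doc_vecs):
        v = np.array(v)
        sim = float(q @ v) / (np.linalg.norm(q) * np.linalg.norm(v))
        if sim >= threshold:
            kept.append(d)
    return kept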
Stringing compressors and document transformers together#
Using the DocumentCompressorPipeline we can also easily combine multiple compressors in sequence. Along with compressors, we can add BaseDocumentTransformers to our pipeline, which don’t perform any contextual compression but simply apply some transformation to a set of documents. For example, TextSplitters can be used as document transformers to split documents into smaller pieces, and the EmbeddingsRedundantFilter can be used to filter out redundant documents based on embedding similarity between documents.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
240db575d60d-7
Below we create a compressor pipeline by first splitting our docs into smaller chunks, then removing redundant documents, and then filtering based on relevance to the query.
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.retrievers.document_compressors import DocumentCompressorPipeline
from langchain.text_splitter import CharacterTextSplitter
splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=". ")
redundant_filter = EmbeddingsRedundantFilter(embeddings=embeddings)
relevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)
pipeline_compressor = DocumentCompressorPipeline(
transformers=[splitter, redundant_filter, relevant_filter]
)
compression_retriever = ContextualCompressionRetriever(base_compressor=pipeline_compressor, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
pretty_print_docs(compressed_docs)
Document 1:
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson
----------------------------------------------------------------------------------------------------
Document 2:
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year
----------------------------------------------------------------------------------------------------
Document 3:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder
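One natural variation, not shown in the original notebook, is to append the LLM-based extractor from earlier as the last pipeline stage, so that the expensive per-document LLM calls only run on the chunks that survive the cheap embeddings-based stages:
# Hypothetical pipeline: cheap filters first, LLM extraction last.
# `compressor` is the LLMChainExtractor created earlier.
pipeline_with_extractor = DocumentCompressorPipeline(
    transformers=[splitter, redundant_filter, relevant_filter, compressor]
)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=pipeline_with_extractor, base_retriever=retriever
)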
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
dd64104fb6e4-0
Cohere Reranker#
Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.
This notebook shows how to use Cohere’s rerank endpoint in a retriever. This builds on top of ideas in the ContextualCompressionRetriever.
#!pip install cohere
#!pip install faiss
# OR (depending on Python version)
#!pip install faiss-cpu
# get a new token: https://dashboard.cohere.ai/
import os
import getpass
os.environ['COHERE_API_KEY'] = getpass.getpass('Cohere API Key:')
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
# Helper function for printing docs
def pretty_print_docs(docs):
print(f"\n{'-' * 100}\n".join([f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)]))
Set up the base vector store retriever#
Let’s start by initializing a simple vector store retriever and storing the 2022 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs.
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_loaders import TextLoader
from langchain.vectorstores import FAISS
documents = TextLoader('../../../state_of_the_union.txt').load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)
texts = text_splitter.split_documents(documents)
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/cohere-reranker.html
dd64104fb6e4-1
retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever(search_kwargs={"k": 20})
query = "What did the president say about Ketanji Brown Jackson"
docs = retriever.get_relevant_documents(query)
pretty_print_docs(docs)
Document 1:
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.
----------------------------------------------------------------------------------------------------
Document 3:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
----------------------------------------------------------------------------------------------------
Document 4:
He met the Ukrainian people.
From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/cohere-reranker.html
dd64104fb6e4-2
Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.
In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.
----------------------------------------------------------------------------------------------------
Document 5:
I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
I’ve worked on these issues a long time.
I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
So let’s not abandon our streets. Or choose between safety and equal justice.
----------------------------------------------------------------------------------------------------
Document 6:
Vice President Harris and I ran for office with a new economic vision for America.
Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up
and the middle out, not from the top down.
Because we know that when the middle class grows, the poor have a ladder up and the wealthy do very well.
America used to have the best roads, bridges, and airports on Earth.
Now our infrastructure is ranked 13th in the world.
----------------------------------------------------------------------------------------------------
Document 7:
And tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud.
By the end of this year, the deficit will be down to less than half what it was before I took office.
The only president ever to cut the deficit by more than one trillion dollars in a single year.
Lowering your costs also means demanding more competition.
I’m a capitalist, but capitalism without competition isn’t capitalism.
It’s exploitation—and it drives up prices.
----------------------------------------------------------------------------------------------------
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/cohere-reranker.html
dd64104fb6e4-3
Document 8:
For the past 40 years we were told that if we gave tax breaks to those at the very top, the benefits would trickle down to everyone else.
But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century.
Vice President Harris and I ran for office with a new economic vision for America.
----------------------------------------------------------------------------------------------------
Document 9:
All told, we created 369,000 new manufacturing jobs in America just last year.
Powered by people I’ve met like JoJo Burgess, from generations of union steelworkers from Pittsburgh, who’s here with us tonight.
As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.”
It’s time.
But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills.
----------------------------------------------------------------------------------------------------
Document 10:
I’m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve.
And fourth, let’s end cancer as we know it.
This is personal to me and Jill, to Kamala, and to so many of you.
Cancer is the #2 cause of death in America–second only to heart disease.
----------------------------------------------------------------------------------------------------
Document 11:
He will never extinguish their love of freedom. He will never weaken the resolve of the free world.
We meet tonight in an America that has lived through two of the hardest years this nation has ever faced.
The pandemic has been punishing.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/cohere-reranker.html
dd64104fb6e4-4
And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more.
I understand.
----------------------------------------------------------------------------------------------------
Document 12:
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.
Last year COVID-19 kept us apart. This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
And with an unwavering resolve that freedom will always triumph over tyranny.
----------------------------------------------------------------------------------------------------
Document 13:
I know.
One of those soldiers was my son Major Beau Biden.
We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops.
But I’m committed to finding out everything we can.
Committed to military families like Danielle Robinson from Ohio.
The widow of Sergeant First Class Heath Robinson.
He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq.
----------------------------------------------------------------------------------------------------
Document 14:
And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things.
So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together.
First, beat the opioid epidemic.
There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery.
----------------------------------------------------------------------------------------------------
Document 15:
Third, support our veterans.
Veterans are the best of us.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/cohere-reranker.html
dd64104fb6e4-5
I’ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home.
My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free.
Our troops in Iraq and Afghanistan faced many dangers.
----------------------------------------------------------------------------------------------------
Document 16:
When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we haven’t done in a long time: build a better America.
For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation.
And I know you’re tired, frustrated, and exhausted.
But I also know this.
----------------------------------------------------------------------------------------------------
Document 17:
Now is the hour.
Our moment of responsibility.
Our test of resolve and conscience, of history itself.
It is in this moment that our character is formed. Our purpose is found. Our future is forged.
Well I know this nation.
We will meet the test.
To protect freedom and liberty, to expand fairness and opportunity.
We will save democracy.
As hard as these times have been, I am more optimistic about America today than I have been my whole life.
----------------------------------------------------------------------------------------------------
Document 18:
He didn’t know how to stop fighting, and neither did she.
Through her pain she found purpose to demand we do better.
Tonight, Danielle—we are.
The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits.
And tonight, I’m announcing we’re expanding eligibility to veterans suffering from nine respiratory cancers.
----------------------------------------------------------------------------------------------------
Document 19:
I understand.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/cohere-reranker.html
dd64104fb6e4-6
I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it.
That’s why one of the first things I did as President was fight to pass the American Rescue Plan.
Because people were hurting. We needed to act, and we did.
Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.
----------------------------------------------------------------------------------------------------
Document 20:
So let’s not abandon our streets. Or choose between safety and equal justice.
Let’s come together to protect our communities, restore trust, and hold law enforcement accountable.
That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.
Doing reranking with CohereRerank#
Now let’s wrap our base retriever with a ContextualCompressionRetriever. We’ll add a CohereRerank compressor, which uses the Cohere rerank endpoint to rerank the returned results.
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank
llm = OpenAI(temperature=0)
compressor = CohereRerank()
compression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
pretty_print_docs(compressed_docs)
Document 1:
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/cohere-reranker.html
dd64104fb6e4-7
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
I’ve worked on these issues a long time.
I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
So let’s not abandon our streets. Or choose between safety and equal justice.
----------------------------------------------------------------------------------------------------
Document 3:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
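Notice how sharply the reranker trims the candidate set: we retrieved 20 documents and got back 3. If memory serves, the number of documents kept is controlled by a top_n parameter on CohereRerank; treat the exact parameter name and its default as an assumption.
# Assumed parameter: top_n is the number of reranked documents to return.
compressor = CohereRerank(top_n=3)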
You can, of course, use this retriever within a QA pipeline:
from langchain.chains import RetrievalQA
chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), retriever=compression_retriever)
chain({"query": query})
{'query': 'What did the president say about Ketanji Brown Jackson',
'result': " The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she is a consensus builder who has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."}
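Since the chain has a single output key, you can also call run() to get just the answer string:
answer = chain.run(query)  # equivalent to chain({"query": query})["result"]
print(answer)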
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/cohere-reranker.html
393100a837cf-0
Metal#
Metal is a managed service for ML Embeddings.
This notebook shows how to use Metal’s retriever.
First, you will need to sign up for Metal and get an API key. You can do so here.
# !pip install metal_sdk
from metal_sdk.metal import Metal
API_KEY = ""
CLIENT_ID = ""
INDEX_ID = ""
metal = Metal(API_KEY, CLIENT_ID, INDEX_ID);
Ingest Documents#
You only need to do this if you haven’t already set up an index
metal.index( {"text": "foo1"})
metal.index( {"text": "foo"})
{'data': {'id': '642739aa7559b026b4430e42',
'text': 'foo',
'createdAt': '2023-03-31T19:51:06.748Z'}}
Query#
Now that our index is set up, we can set up a retriever and start querying it.
from langchain.retrievers import MetalRetriever
retriever = MetalRetriever(metal, params={"limit": 2})
retriever.get_relevant_documents("foo1")
[Document(page_content='foo1', metadata={'dist': '1.19209289551e-07', 'id': '642739a17559b026b4430e40', 'createdAt': '2023-03-31T19:50:57.853Z'}),
Document(page_content='foo1', metadata={'dist': '4.05311584473e-06', 'id': '642738f67559b026b4430e3c', 'createdAt': '2023-03-31T19:48:06.769Z'})]
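As with any retriever, the MetalRetriever can be dropped into a chain. Below is a minimal sketch reusing the RetrievalQA pattern from the Cohere example above; the question string is just a placeholder for this toy index.
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
qa = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), retriever=retriever)
qa.run("What is foo1?")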
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/metal.html