    collection_name="my_documents",
)
Qdrant Cloud#
If you prefer not to manage the infrastructure yourself, you can set up a fully managed Qdrant cluster on Qdrant Cloud, which includes a free-forever 1GB cluster for trying it out. The main difference when using a managed version of Qdrant is that you’ll need to provide an API key to prevent your deployment from being accessed publicly.
url = "<---qdrant cloud cluster url here --->"
api_key = "<---api key here--->"
qdrant = Qdrant.from_documents(
    docs, embeddings,
    url=url, prefer_grpc=True, api_key=api_key,
    collection_name="my_documents",
)
Reusing the same collection#
Both the Qdrant.from_texts and Qdrant.from_documents methods are a great way to start using Qdrant with LangChain, but they destroy the collection and recreate it from scratch! If you want to reuse an existing collection, you can always create an instance of Qdrant yourself and pass a QdrantClient instance with the connection details.
del qdrant
import qdrant_client
client = qdrant_client.QdrantClient(
    path="/tmp/local_qdrant", prefer_grpc=True
)
qdrant = Qdrant(
    client=client, collection_name="my_documents",
    embeddings=embeddings
)
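Once the store is connected to an existing collection this way, new documents can be appended without recreating anything. A minimal sketch, assuming the qdrant instance from above (the example text and metadata are purely illustrative); add_texts is the generic VectorStore method:
# Append more texts to the already existing collection (illustrative values).
qdrant.add_texts(
    ["Qdrant also supports filtering by payload fields."],
    metadatas=[{"source": "manual-note"}],
)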
Similarity search#
The simplest scenario for using the Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded with the embedding_function and used to find similar documents in the Qdrant collection.
query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search(query)
print(found_docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Similarity search with score#
Sometimes we might want to perform the search, but also obtain a relevancy score to know how good a particular result is.
The returned distance score is cosine distance. Therefore, a lower score is better.
query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search_with_score(query)
document, score = found_docs[0]
print(document.page_content)
print(f"\nScore: {score}")
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Score: 0.8153784913324512
Metadata filtering#
Qdrant has an extensive filtering system with rich type support. It is also possible to use the filters in LangChain by passing an additional parameter to both the similarity_search_with_score and similarity_search methods.
from qdrant_client.http import models as rest
query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search_with_score(query, filter=rest.Filter(...))
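For illustration, a fully specified filter might look like the sketch below; it assumes the payload carries a metadata.source field, so adjust the key to match your own documents:
found_docs = qdrant.similarity_search_with_score(
    query,
    filter=rest.Filter(
        must=[
            rest.FieldCondition(
                key="metadata.source",
                match=rest.MatchValue(value="../../../state_of_the_union.txt"),
            )
        ]
    ),
)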
Maximum marginal relevance search (MMR)#
If you’d like to look up some similar documents, but you’d also like to receive diverse results, MMR is the method you should consider. Maximal marginal relevance optimizes for similarity to the query AND diversity among the selected documents.
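Roughly speaking, fetch_k candidates are first retrieved by plain similarity, and then k of them are picked one by one with the standard maximal marginal relevance criterion (λ is an internal trade-off weight, not a parameter of the call shown below):
MMR(d) = λ · sim(d, query) − (1 − λ) · max_{d′ ∈ selected} sim(d, d′)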
query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.max_marginal_relevance_search(query, k=2, fetch_k=10)
for i, doc in enumerate(found_docs):
    print(f"{i + 1}.", doc.page_content, "\n")
1. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
2. We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together.
I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera.
They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun.
Officer Mora was 27 years old.
Officer Rivera was 22.
Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers.
I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
I’ve worked on these issues a long time.
I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
Qdrant as a Retriever#
Qdrant, like all the other vector stores, can be used as a LangChain Retriever, using cosine similarity.
retriever = qdrant.as_retriever()
retriever
VectorStoreRetriever(vectorstore=<langchain.vectorstores.qdrant.Qdrant object at 0x7fc4e5720a00>, search_type='similarity', search_kwargs={})
It is also possible to specify MMR as the search strategy instead of plain similarity.
retriever = qdrant.as_retriever(search_type="mmr")
retriever
VectorStoreRetriever(vectorstore=<langchain.vectorstores.qdrant.Qdrant object at 0x7fc4e5720a00>, search_type='mmr', search_kwargs={})
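The retriever also accepts search_kwargs, which are forwarded to the underlying search call — for example to control how many documents come back. A small sketch:
retriever = qdrant.as_retriever(search_type="mmr", search_kwargs={"k": 2})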
query = "What did the president say about Ketanji Brown Jackson"
retriever.get_relevant_documents(query)[0]
Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})
Customizing Qdrant#
Qdrant stores your vector embeddings along with the optional JSON-like payload. Payloads are optional, but since LangChain assumes the embeddings are generated from the documents, we keep the context data, so you can extract the original texts as well.
By default, your document is going to be stored in the following payload structure:
{
    "page_content": "Lorem ipsum dolor sit amet",
    "metadata": {
        "foo": "bar"
    }
}
You can, however, decide to use different keys for the page content and metadata. That’s useful if you already have a collection that you’d like to reuse. You can always change the content_payload_key and metadata_payload_key parameters to point at your existing fields, as in the example below.
Qdrant.from_documents(
    docs, embeddings,
    location=":memory:",
    collection_name="my_documents_2",
    content_payload_key="my_page_content_key",
    metadata_payload_key="my_meta",
)
<langchain.vectorstores.qdrant.Qdrant at 0x7fc4e2baa230>
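With those parameters, the payload stored in Qdrant uses the custom keys instead of the defaults, roughly as sketched below (the field values are illustrative):
{
    "my_page_content_key": "Lorem ipsum dolor sit amet",
    "my_meta": {
        "foo": "bar"
    }
}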
DocArrayHnswSearch#
DocArrayHnswSearch is a lightweight Document Index implementation provided by Docarray that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite.
This notebook shows how to use functionality related to the DocArrayHnswSearch.
Setup#
Uncomment the cells below to install docarray and get/set your OpenAI API key if you haven’t already done so.
# !pip install "docarray[hnswlib]"
# Get an OpenAI token: https://platform.openai.com/account/api-keys
# import os
# from getpass import getpass
# OPENAI_API_KEY = getpass()
# os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
Using DocArrayHnswSearch#
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import DocArrayHnswSearch
from langchain.document_loaders import TextLoader
documents = TextLoader('../../../state_of_the_union.txt').load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = DocArrayHnswSearch.from_documents(docs, embeddings, work_dir='hnswlib_store/', n_dim=1536)
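Note that n_dim has to match the dimensionality of the embedding model (1536 for OpenAI’s text-embedding-ada-002). If you are unsure, a small sketch that derives it from the embeddings object instead of hard-coding it:
# Derive the embedding dimension from the embeddings object (sketch).
n_dim = len(embeddings.embed_query("test"))
db = DocArrayHnswSearch.from_documents(docs, embeddings, work_dir='hnswlib_store/', n_dim=n_dim)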
Similarity search#
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Similarity search with score#
The returned distance score is cosine distance. Therefore, a lower score is better.
docs = db.similarity_search_with_score(query)
docs[0]
(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={}),
0.36962226)
import shutil
# delete the dir
shutil.rmtree('hnswlib_store')
Atlas#
Atlas is a platform for interacting with both small and internet scale unstructured datasets by Nomic.
This notebook shows you how to use functionality related to the AtlasDB vectorstore.
!pip install spacy
!python3 -m spacy download en_core_web_sm
!pip install nomic
import time
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import SpacyTextSplitter
from langchain.vectorstores import AtlasDB
from langchain.document_loaders import TextLoader
ATLAS_TEST_API_KEY = '7xDPkYXSYDc1_ErdTPIcoAR9RNd8YDlkS3nVNXcVoIMZ6'
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = SpacyTextSplitter(separator='|')
texts = []
for doc in text_splitter.split_documents(documents):
    texts.extend(doc.page_content.split('|'))
texts = [e.strip() for e in texts]
db = AtlasDB.from_texts(
    texts=texts,
    name='test_index_' + str(time.time()),  # unique name for your vector store
    description='test_index',  # a description for your vector store
    api_key=ATLAS_TEST_API_KEY,
    index_kwargs={'build_topic_model': True},
)
db.project.wait_for_project_lock()
db.project
test_index_1677255228.136989
A description for your project 508 datums inserted.
1 index built.
Projections
test_index_1677255228.136989_index. Status Completed. view online
Projection ID: db996d77-8981-48a0-897a-ff2c22bbf541
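Once the index has been built, the store can be queried like any other LangChain vector store. A minimal sketch using the generic similarity_search method:
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query, k=4)
print(docs[0].page_content)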
MyScale#
MyScale is a cloud-based database optimized for AI applications and solutions, built on the open-source ClickHouse.
This notebook shows how to use functionality related to the MyScale vector database.
Setting up environments#
!pip install clickhouse-connect
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
There are two ways to set up parameters for the MyScale index.
Environment Variables
Before you run the app, please set the environment variable with export:
export MYSCALE_URL='<your-endpoints-url>' MYSCALE_PORT=<your-endpoints-port> MYSCALE_USERNAME=<your-username> MYSCALE_PASSWORD=<your-password> ...
You can easily find your account, password, and other info on our SaaS. For details, please refer to this document.
Every attribute under MyScaleSettings can be set with the prefix MYSCALE_ and is case insensitive.
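If exporting variables in the shell is inconvenient, the same parameters can be set from Python before the store is created — a sketch reusing the variable names from the export example above (values are placeholders):
import os
os.environ["MYSCALE_URL"] = "<your-endpoints-url>"
os.environ["MYSCALE_PORT"] = "<your-endpoints-port>"
os.environ["MYSCALE_USERNAME"] = "<your-username>"
os.environ["MYSCALE_PASSWORD"] = "<your-password>"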
Create MyScaleSettings object with parameters
from langchain.vectorstores import MyScale, MyScaleSettings
config = MyScaleSettings(host="<your-backend-url>", port=8443, ...)
index = MyScale(embedding_function, config)
index.add_documents(...)
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import MyScale
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
for d in docs:
    d.metadata = {'some': 'metadata'}
docsearch = MyScale.from_documents(docs, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
Inserting data...: 100%|██████████| 42/42 [00:18<00:00, 2.21it/s]
print(docs[0].page_content)
As Frances Haugen, who is here with us tonight, has shown, we must hold social media platforms accountable for the national experiment they’re conducting on our children for profit.
It’s time to strengthen privacy protections, ban targeted advertising to children, demand tech companies stop collecting personal data on our children.
And let’s get all Americans the mental health services they need. More people they can turn to for help, and full parity between physical and mental health care.
Third, support our veterans.
Veterans are the best of us.
I’ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home.
My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free.
Our troops in Iraq and Afghanistan faced many dangers.
Get connection info and data schema#
print(str(docsearch))
Filtering#
You have direct access to the MyScale SQL WHERE statement. You can write a WHERE clause following standard SQL.
NOTE: Please be aware of SQL injection; this interface must not be called directly by the end user.
If you customized your column_map under your setting, you can search with a filter like this:
from langchain.vectorstores import MyScale, MyScaleSettings
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
for i, d in enumerate(docs):
    d.metadata = {'doc_id': i}
docsearch = MyScale.from_documents(docs, embeddings)
Inserting data...: 100%|██████████| 42/42 [00:15<00:00, 2.69it/s]
Similarity search with score#
The returned distance score is cosine distance. Therefore, a lower score is better.
meta = docsearch.metadata_column
output = docsearch.similarity_search_with_relevance_scores(
    'What did the president say about Ketanji Brown Jackson?',
    k=4, where_str=f"{meta}.doc_id<10")
for d, dist in output:
    print(dist, d.metadata, d.page_content[:20] + '...')
0.252379834651947 {'doc_id': 6, 'some': ''} And I’m taking robus...
0.25022566318511963 {'doc_id': 1, 'some': ''} Groups of citizens b...
0.2469480037689209 {'doc_id': 8, 'some': ''} And so many families...
0.2428302764892578 {'doc_id': 0, 'some': 'metadata'} As Frances Haugen, w...
Deleting your data#
docsearch.drop()
Milvus#
Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.
This notebook shows how to use functionality related to the Milvus vector database.
To run, you should have a Milvus instance up and running.
!pip install pymilvus
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
OpenAI API Key:········
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Milvus
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vector_db = Milvus.from_documents(
    docs,
    embeddings,
    connection_args={"host": "127.0.0.1", "port": "19530"},
)
query = "What did the president say about Ketanji Brown Jackson"
docs = vector_db.similarity_search(query)
docs[0].page_content
'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'
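If you also want the raw distance for each hit, the generic similarity_search_with_score method works here as well — a short sketch (k controls how many hits come back):
docs_and_scores = vector_db.similarity_search_with_score(query, k=4)
for doc, score in docs_and_scores:
    print(score, doc.page_content[:80])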
Weaviate#
Weaviate is an open-source vector database. It allows you to store data objects and vector embeddings from your favorite ML models, and scale seamlessly into billions of data objects.
This notebook shows how to use functionality related to the Weaviate vector database.
See the Weaviate installation instructions.
!pip install weaviate-client
Requirement already satisfied: weaviate-client in /workspaces/langchain/.venv/lib/python3.9/site-packages (3.19.1)
Requirement already satisfied: requests<2.29.0,>=2.28.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (2.28.2)
Requirement already satisfied: validators<=0.21.0,>=0.18.2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (0.20.0)
Requirement already satisfied: tqdm<5.0.0,>=4.59.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (4.65.0)
Requirement already satisfied: authlib>=1.1.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (1.2.0)
Requirement already satisfied: cryptography>=3.2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from authlib>=1.1.0->weaviate-client) (40.0.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (1.26.15)
Requirement already satisfied: certifi>=2017.4.17 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (2023.5.7)
Requirement already satisfied: decorator>=3.4.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from validators<=0.21.0,>=0.18.2->weaviate-client) (5.1.1)
Requirement already satisfied: cffi>=1.12 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from cryptography>=3.2->authlib>=1.1.0->weaviate-client) (1.15.1)
Requirement already satisfied: pycparser in /workspaces/langchain/.venv/lib/python3.9/site-packages (from cffi>=1.12->cryptography>=3.2->authlib>=1.1.0->weaviate-client) (2.21)
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
WEAVIATE_URL = getpass.getpass("WEAVIATE_URL:")
os.environ["WEAVIATE_API_KEY"] = getpass.getpass("WEAVIATE_API_KEY:")
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Weaviate
from langchain.document_loaders import TextLoader
loader = TextLoader("../../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = Weaviate.from_documents(docs, embeddings, weaviate_url=WEAVIATE_URL, by_text=False)
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Similarity search with score#
Sometimes we might want to perform the search, but also obtain a relevancy score to know how good a particular result is.
The returned distance score is cosine distance. Therefore, a lower score is better.
docs = db.similarity_search_with_score(query, by_text=False)
docs[0]
(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'_additional': {'vector': [-0.015289668, -0.011418287, -0.018540842, 0.00274522, 0.008310737, ...]}, 'source': '../../../state_of_the_union.txt'}),
 0.8154189703772676)
Persistence#
Anything uploaded to Weaviate is automatically persisted in the database. You do not need to call any specific method or pass any parameters for this to happen.
Retriever options#
This section goes over different options for how to use Weaviate as a retriever.
MMR#
In addition to using similarity search in the retriever object, you can also use MMR.
retriever = db.as_retriever(search_type="mmr")
retriever.get_relevant_documents(query)[0]
Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})
Question Answering with Sources#
This section goes over how to do question-answering with sources over an Index. It does this by using the RetrievalQAWithSourcesChain, which does the lookup of the documents from an Index.
from langchain.chains import RetrievalQAWithSourcesChain
from langchain import OpenAI
with open("../../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
docsearch = Weaviate.from_texts(
    texts,
    embeddings,
    weaviate_url=WEAVIATE_URL,
    by_text=False,
    metadatas=[{"source": f"{i}-pl"} for i in range(len(texts))],
)
chain = RetrievalQAWithSourcesChain.from_chain_type(
OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever()
)
chain(
{"question": "What did the president say about Justice Breyer"},
return_only_outputs=True,
)
{'answer': " The president honored Justice Breyer for his service and mentioned his legacy of excellence. He also nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to continue Justice Breyer's legacy.\n",
'sources': '31-pl, 34-pl'}
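If you only want a couple of chunks to feed into the chain, you can tune the retriever; a minimal sketch, assuming the docsearch store built above (the value 2 is an arbitrary choice for illustration):
retriever = docsearch.as_retriever(search_kwargs={"k": 2})  # return only the 2 closest chunks
chain = RetrievalQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0), chain_type="stuff", retriever=retriever
)
chain(
    {"question": "What did the president say about Justice Breyer"},
    return_only_outputs=True,
)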
Zilliz
Zilliz#
Zilliz Cloud is a fully managed cloud service for LF AI Milvus®.
This notebook shows how to use functionality related to the Zilliz Cloud managed vector database.
To run, you should have a Zilliz Cloud instance up and running. Here are the installation instructions
!pip install pymilvus
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
OpenAI API Key:········
# Replace the placeholders below with your Zilliz Cloud connection details
ZILLIZ_CLOUD_URI = "" # example: "https://in01-17f69c292d4a5sa.aws-us-west-2.vectordb.zillizcloud.com:19536"
ZILLIZ_CLOUD_USERNAME = "" # example: "username"
ZILLIZ_CLOUD_PASSWORD = "" # example: "*********"
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Milvus
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vector_db = Milvus.from_documents(
docs,
embeddings,
connection_args={
"uri": ZILLIZ_CLOUD_URI,
"user": ZILLIZ_CLOUD_USERNAME,
"password": ZILLIZ_CLOUD_PASSWORD,
"secure": True | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/zilliz.html |
}
)
query = "What did the president say about Ketanji Brown Jackson"
docs = vector_db.similarity_search(query)
docs[0].page_content
'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'
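If you also want the relevance scores returned by Zilliz Cloud, the store exposes similarity_search_with_score as well; a minimal sketch, assuming the vector_db instance created above (the score scale depends on the metric type of the underlying index):
docs_and_scores = vector_db.similarity_search_with_score(query)
for doc, score in docs_and_scores:
    print(f"score={score}")
    print(doc.page_content[:100])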
Tigris
Contents
Initialize Tigris vector store
Similarity Search
Similarity Search with score (vector distance)
Tigris#
Tigris is an open source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications.
Tigris eliminates the infrastructure complexity of managing, operating, and synchronizing multiple tools, allowing you to focus on building great applications instead.
This notebook guides you through using Tigris as your VectorStore.
Prerequisites
An OpenAI account. You can sign up for an account here
Sign up for a free Tigris account. Once you have signed up for the Tigris account, create a new project called vectordemo. Next, make a note of the Uri for the region you’ve created your project in, the clientId and clientSecret. You can get all this information from the Application Keys section of the project.
Let’s first install our dependencies:
!pip install tigrisdb openapi-schema-pydantic openai tiktoken
We will load the OpenAI api key and Tigris credentials in our environment
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
os.environ['TIGRIS_PROJECT'] = getpass.getpass('Tigris Project Name:')
os.environ['TIGRIS_CLIENT_ID'] = getpass.getpass('Tigris Client Id:')
os.environ['TIGRIS_CLIENT_SECRET'] = getpass.getpass('Tigris Client Secret:')
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Tigris
from langchain.document_loaders import TextLoader
Initialize Tigris vector store#
Let’s import our test dataset: | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/tigris.html |
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vector_store = Tigris.from_documents(docs, embeddings, index_name="my_embeddings")
Similarity Search#
query = "What did the president say about Ketanji Brown Jackson"
found_docs = vector_store.similarity_search(query)
print(found_docs)
Similarity Search with score (vector distance)#
query = "What did the president say about Ketanji Brown Jackson"
result = vector_store.similarity_search_with_score(query)
for (doc, score) in result:
print(f"document={doc}, score={score}")
Chroma
Contents
Similarity search with score
Persistence
Initialize PersistedChromaDB
Persist the Database
Load the Database from disk, and create the chain
Retriever options
MMR
Updating a Document
Chroma#
Chroma is a database for building AI applications with embeddings.
This notebook shows how to use functionality related to the Chroma vector database.
!pip install chromadb
# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass
OPENAI_API_KEY = getpass()
········
import os
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = Chroma.from_documents(docs, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
Using embedded DuckDB without persistence: data will be transient
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/chroma.html |
b308e69412e9-1 | Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Similarity search with score#
The returned distance score is cosine distance. Therefore, a lower score is better.
docs = db.similarity_search_with_score(query)
docs[0]
(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}),
0.3949805498123169)
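Chroma can also restrict a similarity search to documents whose metadata matches a filter; a hedged sketch, assuming the filter keyword of similarity_search and the source key that was stored with each chunk:
# Only consider chunks whose metadata matches the filter
docs = db.similarity_search(
    query,
    k=2,
    filter={"source": "../../../state_of_the_union.txt"},
)
print(docs[0].metadata)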
Persistence#
The steps below cover how to persist a ChromaDB instance.
Initialize PersistedChromaDB#
Create embeddings for each chunk and insert into the Chroma vector database. The persist_directory argument tells ChromaDB where to store the database when it’s persisted.
# Embed and store the texts
# Supplying a persist_directory will store the embeddings on disk
persist_directory = 'db'
embedding = OpenAIEmbeddings()
vectordb = Chroma.from_documents(documents=docs, embedding=embedding, persist_directory=persist_directory)
Running Chroma using direct local API.
No existing DB found in db, skipping load
No existing DB found in db, skipping load
Persist the Database#
We should call persist() to ensure the embeddings are written to disk.
vectordb.persist()
vectordb = None
Persisting DB to disk, putting it in the save folder db
PersistentDuckDB del, about to run persist
Persisting DB to disk, putting it in the save folder db
Load the Database from disk, and create the chain#
Be sure to pass the same persist_directory and embedding_function as you did when you instantiated the database. Initialize the chain we will use for question answering.
# Now we can load the persisted database from disk, and use it as normal.
vectordb = Chroma(persist_directory=persist_directory, embedding_function=embedding)
Running Chroma using direct local API.
loaded in 4 embeddings
loaded in 1 collections
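The question-answering chain itself is not shown in the cells above; a minimal sketch of what it could look like, assuming an OpenAI LLM and the standard RetrievalQA chain (this is generic LangChain usage, not something Chroma-specific):
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# Build a "stuff" question-answering chain on top of the persisted store
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=vectordb.as_retriever(),
)
qa.run("What did the president say about Ketanji Brown Jackson")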
Retriever options#
This section goes over different options for how to use Chroma as a retriever.
MMR#
In addition to using similarity search in the retriever object, you can also use mmr.
retriever = db.as_retriever(search_type="mmr")
retriever.get_relevant_documents(query)[0]
Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})
Updating a Document#
The update_document function allows you to modify the content of a document in the Chroma instance after it has been added. Let’s see an example of how to use this function.
# Import Document class
from langchain.docstore.document import Document
# Initial document content and id
initial_content = "This is an initial document content"
document_id = "doc1"
# Create an instance of Document with initial content and metadata
original_doc = Document(page_content=initial_content, metadata={"page": "0"})
# Initialize a Chroma instance with the original document
new_db = Chroma.from_documents(
collection_name="test_collection",
documents=[original_doc],
embedding=OpenAIEmbeddings(), # using the same embeddings as before
ids=[document_id],
) | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/chroma.html |
At this point, we have a new Chroma instance with a single document “This is an initial document content” with id “doc1”. Now, let’s update the content of the document.
# Updated document content
updated_content = "This is the updated document content"
# Create a new Document instance with the updated content
updated_doc = Document(page_content=updated_content, metadata={"page": "1"})
# Update the document in the Chroma instance by passing the document id and the updated document
new_db.update_document(document_id=document_id, document=updated_doc)
# Now, let's retrieve the updated document using similarity search
output = new_db.similarity_search(updated_content, k=1)
# Print the content of the retrieved document
print(output[0].page_content, output[0].metadata)
This is the updated document content {'page': '1'}
Tair
Tair#
Tair is a cloud native in-memory database service developed by Alibaba Cloud.
It provides rich data models and enterprise-grade capabilities to support your real-time online scenarios while maintaining full compatibility with open source Redis. Tair also introduces persistent memory-optimized instances that are based on the new non-volatile memory (NVM) storage medium.
This notebook shows how to use functionality related to the Tair vector database.
To run, you should have a Tair instance up and running.
from langchain.embeddings.fake import FakeEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Tair
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = FakeEmbeddings(size=128)
Connect to Tair using the TAIR_URL environment variable
export TAIR_URL="redis://{username}:{password}@{tair_address}:{tair_port}"
or the keyword argument tair_url.
Then store documents and embeddings into Tair.
tair_url = "redis://localhost:6379"
# drop first if index already exists
Tair.drop_index(tair_url=tair_url)
vector_store = Tair.from_documents(
docs,
embeddings,
tair_url=tair_url
)
Query similar documents.
query = "What did the president say about Ketanji Brown Jackson"
docs = vector_store.similarity_search(query)
docs[0] | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/tair.html |
Document(page_content='We’re going after the criminals who stole billions in relief money meant for small businesses and millions of Americans. \n\nAnd tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nBy the end of this year, the deficit will be down to less than half what it was before I took office. \n\nThe only president ever to cut the deficit by more than one trillion dollars in a single year. \n\nLowering your costs also means demanding more competition. \n\nI’m a capitalist, but capitalism without competition isn’t capitalism. \n\nIt’s exploitation—and it drives up prices. \n\nWhen corporations don’t have to compete, their profits go up, your prices go up, and small businesses and family farmers and ranchers go under. \n\nWe see it happening with ocean carriers moving goods in and out of America. \n\nDuring the pandemic, these foreign-owned companies raised prices by as much as 1,000% and made record profits.', metadata={'source': '../../../state_of_the_union.txt'})
Pinecone
Pinecone#
Pinecone is a vector database with broad functionality.
This notebook shows how to use functionality related to the Pinecone vector database.
To use Pinecone, you must have an API key.
Here are the installation instructions.
!pip install pinecone-client
import os
import getpass
PINECONE_API_KEY = getpass.getpass('Pinecone API Key:')
PINECONE_ENV = getpass.getpass('Pinecone Environment:')
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Pinecone
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
import pinecone
# initialize pinecone
pinecone.init(
api_key=PINECONE_API_KEY, # find at app.pinecone.io
environment=PINECONE_ENV # next to api key in console
)
index_name = "langchain-demo"
docsearch = Pinecone.from_documents(docs, embeddings, index_name=index_name)
# if you already have an index, you can load it like this
# docsearch = Pinecone.from_existing_index(index_name, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query) | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pinecone.html |
print(docs[0].page_content)
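You can also append new texts to an index that already exists; a minimal sketch, assuming the same index_name and embeddings as above (the sample sentence and metadata are purely illustrative):
# Load the existing index instead of re-creating it, then add one more text
docsearch = Pinecone.from_existing_index(index_name, embeddings)
docsearch.add_texts(
    ["Ankush went to Princeton"],
    metadatas=[{"source": "manual-note"}],
)
docs = docsearch.similarity_search("Where did Ankush go to college?")
print(docs[0].page_content)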
Commented out until further notice
Commented out until further notice#
MongoDB Atlas Vector Search
MongoDB Atlas is a document database managed in the cloud. It also enables Lucene and its vector search feature.
This notebook shows how to use the functionality related to the MongoDB Atlas Vector Search feature where you can store your embeddings in MongoDB documents and create a Lucene vector index to perform a KNN search.
It uses the knnBeta Operator available in MongoDB Atlas Search. This feature is in early access and available only for evaluation purposes, to validate functionality, and to gather feedback from a small closed group of early access users. It is not recommended for production deployments as we may introduce breaking changes.
To use MongoDB Atlas, you must have first deployed a cluster. Free clusters are available.
Here is the MongoDB Atlas quick start.
!pip install pymongo
import os
MONGODB_ATLAS_URI = os.environ['MONGODB_ATLAS_URI']
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key. Make sure the environment variable OPENAI_API_KEY is set up before proceeding.
Now, let’s create a Lucene vector index on your cluster. In the below example, embedding is the name of the field that contains the embedding vector. Please refer to the documentation to get more details on how to define an Atlas Search index.
You can name the index langchain_demo and create the index on the namespace langchain_db.langchain_col. Finally, write the following definition in the JSON editor:
{
"mappings": {
"dynamic": true,
"fields": {
"embedding": {
"dimensions": 1536,
"similarity": "cosine",
"type": "knnVector"
}
}
}
} | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/mongodb_atlas_vector_search.html |
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import MongoDBAtlasVectorSearch
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
from pymongo import MongoClient
# initialize MongoDB python client
client = MongoClient(MONGODB_ATLAS_URI)
db_name = "lanchain_db"
collection_name = "langchain_col"
collection = client[db_name][collection_name]
index_name = "langchain_demo"
# insert the documents in MongoDB Atlas with their embedding
docsearch = MongoDBAtlasVectorSearch.from_documents(
docs,
embeddings,
collection=collection,
index_name=index_name
)
# perform a similarity search between the embedding of the query and the embeddings of the documents
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
You can reuse the vector index you created before. Make sure the environment variable OPENAI_API_KEY is set, then create another file.
from pymongo import MongoClient
from langchain.vectorstores import MongoDBAtlasVectorSearch
from langchain.embeddings.openai import OpenAIEmbeddings
import os
MONGODB_ATLAS_URI = os.environ['MONGODB_ATLAS_URI']
# initialize MongoDB python client
client = MongoClient(MONGODB_ATLAS_URI)
db_name = "langchain_db"
collection_name = "langchain_col" | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/mongodb_atlas_vector_search.html |
collection = client[db_name][collection_name]
index_name = "langchain_index"
# initialize vector store
vectorStore = MongoDBAtlasVectorSearch(
collection, OpenAIEmbeddings(), index_name=index_name)
# perform a similarity search between the embedding of the query and the embeddings of the documents
query = "What did the president say about Ketanji Brown Jackson"
docs = vectorStore.similarity_search(query)
print(docs[0].page_content)
SKLearnVectorStore
Contents
Basic usage
Load a sample document corpus
Create the SKLearnVectorStore, index the document corpus and run a sample query
Saving and loading a vector store
Clean-up
SKLearnVectorStore#
scikit-learn is an open source collection of machine learning algorithms, including some implementations of the k-nearest neighbors algorithm. SKLearnVectorStore wraps this implementation and adds the option to persist the vector store in json, bson (binary json), or Apache Parquet format.
This notebook shows how to use the SKLearnVectorStore vector database.
%pip install scikit-learn
# # if you plan to use bson serialization, install also:
# %pip install bson
# # if you plan to use parquet serialization, install also:
%pip install pandas pyarrow
To use OpenAI embeddings, you will need an OpenAI key. You can get one at https://platform.openai.com/account/api-keys or feel free to use any other embeddings.
import os
from getpass import getpass
os.environ['OPENAI_API_KEY'] = getpass('Enter your OpenAI key:')
Basic usage#
Load a sample document corpus#
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import SKLearnVectorStore
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
Create the SKLearnVectorStore, index the document corpus and run a sample query#
import tempfile
persist_path = os.path.join(tempfile.gettempdir(), 'union.parquet') | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/sklearn.html |
vector_store = SKLearnVectorStore.from_documents(
documents=docs,
embedding=embeddings,
persist_path=persist_path, # persist_path and serializer are optional
serializer='parquet'
)
query = "What did the president say about Ketanji Brown Jackson"
docs = vector_store.similarity_search(query)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Saving and loading a vector store#
vector_store.persist()
print('Vector store was persisted to', persist_path)
Vector store was persisted to /var/folders/6r/wc15p6m13nl_nl_n_xfqpc5c0000gp/T/union.parquet
vector_store2 = SKLearnVectorStore(
embedding=embeddings,
persist_path=persist_path,
serializer='parquet'
)
print('A new instance of vector store was loaded from', persist_path) | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/sklearn.html |
A new instance of vector store was loaded from /var/folders/6r/wc15p6m13nl_nl_n_xfqpc5c0000gp/T/union.parquet
docs = vector_store2.similarity_search(query)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Clean-up#
os.remove(persist_path)
Deep Lake
Contents
Retrieval Question/Answering
Attribute based filtering in metadata
Choosing distance function
Maximal Marginal relevance
Delete dataset
Deep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or in memory
Creating dataset on AWS S3
Deep Lake API
Transfer local dataset to cloud
Deep Lake#
Deep Lake is a multi-modal vector store that stores embeddings and their metadata, including text, JSON, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes.
This notebook showcases basic functionality related to Deep Lake. While Deep Lake can store embeddings, it is capable of storing any type of data. It is a fully fledged serverless data lake with version control, query engine and streaming dataloader to deep learning frameworks.
For more information, please see the Deep Lake documentation or api reference
!pip install openai deeplake tiktoken
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import DeepLake
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
embeddings = OpenAIEmbeddings()
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings() | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
Create a dataset locally at ./my_deeplake/, then run a similarity search. The Deep Lake + LangChain integration uses Deep Lake datasets under the hood, so dataset and vector store are used interchangeably. To create a dataset in your own cloud, or in the Deep Lake storage, adjust the path accordingly.
db = DeepLake(dataset_path="./my_deeplake/", embedding_function=embeddings)
db.add_documents(docs)
# or shorter
# db = DeepLake.from_documents(docs, dataset_path="./my_deeplake/", embedding=embeddings, overwrite=True)
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
/home/leo/.local/lib/python3.10/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.3.2) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.
warnings.warn(
./my_deeplake/ loaded successfully.
Evaluating ingest: 100%|██████████████████████████████████████| 1/1 [00:07<00:00
Dataset(path='./my_deeplake/', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (42, 1536) float32 None
ids text (42, 1) str None
metadata json (42, 1) str None
text text (42, 1) str None | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Later, you can reload the dataset without recomputing embeddings
db = DeepLake(dataset_path="./my_deeplake/", embedding_function=embeddings, read_only=True)
docs = db.similarity_search(query)
./my_deeplake/ loaded successfully.
Deep Lake Dataset in ./my_deeplake/ already exists, loading from the storage
Dataset(path='./my_deeplake/', read_only=True, tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (42, 1536) float32 None
ids text (42, 1) str None
metadata json (42, 1) str None
text text (42, 1) str None | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
Deep Lake, for now, supports a single writer and multiple readers. Setting read_only=True helps to avoid acquiring the writer lock.
Retrieval Question/Answering#
from langchain.chains import RetrievalQA
from langchain.llms import OpenAIChat
qa = RetrievalQA.from_chain_type(llm=OpenAIChat(model='gpt-3.5-turbo'), chain_type='stuff', retriever=db.as_retriever())
/home/leo/.local/lib/python3.10/site-packages/langchain/llms/openai.py:624: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
warnings.warn(
query = 'What did the president say about Ketanji Brown Jackson'
qa.run(query)
'The president nominated Ketanji Brown Jackson to serve on the United States Supreme Court. He described her as a former top litigator in private practice, a former federal public defender, a consensus builder, and from a family of public school educators and police officers. He also mentioned that she has received broad support from various groups since being nominated.'
Attribute based filtering in metadata#
import random
for d in docs:
d.metadata['year'] = random.randint(2012, 2014)
db = DeepLake.from_documents(docs, embeddings, dataset_path="./my_deeplake/", overwrite=True)
./my_deeplake/ loaded successfully.
Evaluating ingest: 100%|██████████| 1/1 [00:04<00:00
Dataset(path='./my_deeplake/', tensors=['embedding', 'ids', 'metadata', 'text']) | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
711d468a4096-4 | tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (4, 1536) float32 None
ids text (4, 1) str None
metadata json (4, 1) str None
text text (4, 1) str None
db.similarity_search('What did the president say about Ketanji Brown Jackson', filter={'year': 2013})
100%|██████████| 4/4 [00:00<00:00, 1080.24it/s]
[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}), | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
711d468a4096-5 | Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013})]
Choosing distance function#
Distance function options: L2 for Euclidean distance, L1 for nuclear norm, max for L-infinity distance, cos for cosine similarity, and dot for dot product.
db.similarity_search('What did the president say about Ketanji Brown Jackson?', distance_metric='cos') | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
711d468a4096-6 | [Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}), | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
711d468a4096-7 | Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012}), | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
711d468a4096-8 | Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}), | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
711d468a4096-9 | Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012})]
Maximal Marginal relevance#
Using maximal marginal relevance
db.max_marginal_relevance_search('What did the president say about Ketanji Brown Jackson?') | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
711d468a4096-10 | [Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}), | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
711d468a4096-11 | Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012}), | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
711d468a4096-12 | Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012}), | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
711d468a4096-13 | Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013})]
Delete dataset#
db.delete_dataset()
If the delete fails, you can also force delete:
DeepLake.force_delete_by_path("./my_deeplake")
Deep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or in memory#
By default, Deep Lake datasets are stored locally. If you want to store them in memory, in the Deep Lake Managed DB, or in any object storage, you can provide the corresponding path to the dataset. You can retrieve your user token from app.activeloop.ai.
os.environ['ACTIVELOOP_TOKEN'] = getpass.getpass('Activeloop Token:')
# Embed and store the texts
username = "<username>" # your username on app.activeloop.ai | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
dataset_path = f"hub://{username}/langchain_test" # could be also ./local/path (much faster locally), s3://bucket/path/to/dataset, gcs://path/to/dataset, etc.
embedding = OpenAIEmbeddings()
db = DeepLake(dataset_path=dataset_path, embedding_function=embeddings, overwrite=True)
db.add_documents(docs)
Your Deep Lake dataset has been successfully created!
The dataset is private so make sure you are logged in!
This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test
hub://davitbun/langchain_test loaded successfully.
Evaluating ingest: 100%|██████████| 1/1 [00:14<00:00
Dataset(path='hub://davitbun/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (4, 1536) float32 None
ids text (4, 1) str None
metadata json (4, 1) str None
text text (4, 1) str None
['d6d6ccb4-e187-11ed-b66d-41c5f7b85421',
'd6d6ccb5-e187-11ed-b66d-41c5f7b85421',
'd6d6ccb6-e187-11ed-b66d-41c5f7b85421', | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
711d468a4096-15 | 'd6d6ccb7-e187-11ed-b66d-41c5f7b85421']
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Creating dataset on AWS S3#
dataset_path = f"s3://BUCKET/langchain_test" # could be also ./local/path (much faster locally), hub://bucket/path/to/dataset, gcs://path/to/dataset, etc.
embedding = OpenAIEmbeddings()
db = DeepLake.from_documents(docs, dataset_path=dataset_path, embedding=embeddings, overwrite=True, creds = {
'aws_access_key_id': os.environ['AWS_ACCESS_KEY_ID'],
'aws_secret_access_key': os.environ['AWS_SECRET_ACCESS_KEY'],
'aws_session_token': os.environ['AWS_SESSION_TOKEN'], # Optional
})
s3://hub-2.0-datasets-n/langchain_test loaded successfully. | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
Evaluating ingest: 100%|██████████| 1/1 [00:10<00:00
\
Dataset(path='s3://hub-2.0-datasets-n/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (4, 1536) float32 None
ids text (4, 1) str None
metadata json (4, 1) str None
text text (4, 1) str None
Deep Lake API#
You can access the underlying Deep Lake dataset at db.ds.
# get structure of the dataset
db.ds.summary()
Dataset(path='hub://davitbun/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (4, 1536) float32 None
ids text (4, 1) str None
metadata json (4, 1) str None
text text (4, 1) str None
# get embeddings numpy array
embeds = db.ds.embedding.numpy()
Transfer local dataset to cloud#
Copy an already-created dataset to the cloud. You can also transfer from the cloud to a local machine.
import deeplake
username = "davitbun" # your username on app.activeloop.ai | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
source = f"hub://{username}/langchain_test" # could be local, s3, gcs, etc.
destination = f"hub://{username}/langchain_test_copy" # could be local, s3, gcs, etc.
deeplake.deepcopy(src=source, dest=destination, overwrite=True)
Copying dataset: 100%|██████████| 56/56 [00:38<00:00
This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy
Your Deep Lake dataset has been successfully created!
The dataset is private so make sure you are logged in!
Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])
db = DeepLake(dataset_path=destination, embedding_function=embeddings)
db.add_documents(docs)
This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy
/
hub://davitbun/langchain_test_copy loaded successfully.
Deep Lake Dataset in hub://davitbun/langchain_test_copy already exists, loading from the storage
Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (4, 1536) float32 None
ids text (4, 1) str None
metadata json (4, 1) str None | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
text text (4, 1) str None
Evaluating ingest: 100%|██████████| 1/1 [00:31<00:00
-
Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (8, 1536) float32 None
ids text (8, 1) str None
metadata json (8, 1) str None
text text (8, 1) str None
['ad42f3fe-e188-11ed-b66d-41c5f7b85421',
'ad42f3ff-e188-11ed-b66d-41c5f7b85421',
'ad42f400-e188-11ed-b66d-41c5f7b85421',
'ad42f401-e188-11ed-b66d-41c5f7b85421']
PGVector
Contents
Similarity search with score
Similarity Search with Euclidean Distance (Default)
Working with vectorstore in PG
Uploading a vectorstore in PG
Retrieving a vectorstore in PG
PGVector#
PGVector is an open-source vector similarity search extension for Postgres.
It supports:
exact and approximate nearest neighbor search
L2 distance, inner product, and cosine distance
This notebook shows how to use the Postgres vector database (PGVector).
See the installation instructions.
# Pip install necessary package
!pip install pgvector
!pip install openai
!pip install psycopg2-binary
!pip install tiktoken
Requirement already satisfied: pgvector in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (0.1.8)
Requirement already satisfied: numpy in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from pgvector) (1.24.3)
Requirement already satisfied: openai in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (0.27.7)
Requirement already satisfied: requests>=2.20 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from openai) (2.28.2)
Requirement already satisfied: tqdm in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from openai) (4.65.0)
Requirement already satisfied: aiohttp in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from openai) (3.8.4) | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html |
1a8b4e3f2da2-1 | Requirement already satisfied: charset-normalizer<4,>=2 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (1.26.15)
Requirement already satisfied: certifi>=2017.4.17 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (2023.5.7)
Requirement already satisfied: attrs>=17.3.0 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (23.1.0)
Requirement already satisfied: multidict<7.0,>=4.5 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (6.0.4)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (4.0.2) | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html |
1a8b4e3f2da2-2 | Requirement already satisfied: yarl<2.0,>=1.0 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.9.2)
Requirement already satisfied: frozenlist>=1.1.1 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.3.3)
Requirement already satisfied: aiosignal>=1.1.2 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.3.1)
Requirement already satisfied: psycopg2-binary in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (2.9.6)
Requirement already satisfied: tiktoken in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (0.4.0)
Requirement already satisfied: regex>=2022.1.18 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from tiktoken) (2023.5.5)
Requirement already satisfied: requests>=2.26.0 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from tiktoken) (2.28.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.26.0->tiktoken) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.26.0->tiktoken) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.26.0->tiktoken) (1.26.15)
Requirement already satisfied: certifi>=2017.4.17 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.26.0->tiktoken) (2023.5.7)
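In addition to the Python packages, the pgvector extension must be available in the database you connect to. If it has not been enabled yet, it can be created manually; the following is a minimal sketch, assuming a local Postgres instance reachable with default credentials (all connection details below are placeholders):
import psycopg2
# Placeholder connection details; adjust them to your own Postgres instance.
conn = psycopg2.connect(
    host="localhost", port=5432, dbname="postgres",
    user="postgres", password="postgres",
)
conn.autocommit = True
with conn.cursor() as cur:
    # Requires the pgvector extension to be installed on the server first.
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
conn.close()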
We want to use OpenAIEmbeddings, so we have to get the OpenAI API key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
OpenAI API Key:········
## Loading Environment Variables
from typing import List, Tuple
from dotenv import load_dotenv
load_dotenv()
False
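load_dotenv() returned False here because no .env file was found. In that case, the PGVECTOR_* variables used below can be set directly in the process environment instead; a minimal sketch with placeholder credentials:
import os
# Placeholder credentials; replace them with the values for your own database.
os.environ["PGVECTOR_DRIVER"] = "psycopg2"
os.environ["PGVECTOR_HOST"] = "localhost"
os.environ["PGVECTOR_PORT"] = "5432"
os.environ["PGVECTOR_DATABASE"] = "postgres"
os.environ["PGVECTOR_USER"] = "postgres"
os.environ["PGVECTOR_PASSWORD"] = "postgres"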
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.pgvector import PGVector
from langchain.document_loaders import TextLoader
from langchain.docstore.document import Document
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
## PGVector needs the connection string to the database.
## We will load it from the environment variables.
import os
CONNECTION_STRING = PGVector.connection_string_from_db_params(
driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
host=os.environ.get("PGVECTOR_HOST", "localhost"),
port=int(os.environ.get("PGVECTOR_PORT", "5432")),
database=os.environ.get("PGVECTOR_DATABASE", "postgres"),
user=os.environ.get("PGVECTOR_USER", "postgres"),
password=os.environ.get("PGVECTOR_PASSWORD", "postgres"),
)
## Example
# postgresql+psycopg2://username:password@localhost:5432/database_name
# ## PGVector needs the connection string to the database.
# ## We will load it from the environment variables.
# import os
# CONNECTION_STRING = PGVector.connection_string_from_db_params(
# driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
# host=os.environ.get("PGVECTOR_HOST", "localhost"),
# port=int(os.environ.get("PGVECTOR_PORT", "5432")),
# database=os.environ.get("PGVECTOR_DATABASE", "rd-embeddings"),
# user=os.environ.get("PGVECTOR_USER", "admin"),
# password=os.environ.get("PGVECTOR_PASSWORD", "password"),
# )
# ## Example
# # postgresql+psycopg2://username:password@localhost:5432/database_name
Similarity search with score#
Similarity Search with Euclidean Distance (Default)#
# The PGVector module will try to create a table with the name of the collection. Make sure that the collection name is unique and that the user has permission to create a table.
db = PGVector.from_documents(
embedding=embeddings,
documents=docs,
collection_name="state_of_the_union",
connection_string=CONNECTION_STRING,
)
query = "What did the president say about Ketanji Brown Jackson"
docs_with_score: List[Tuple[Document, float]] = db.similarity_search_with_score(query)
for doc, score in docs_with_score:
print("-" * 80)
print("Score: ", score)
print(doc.page_content)
print("-" * 80)
--------------------------------------------------------------------------------
Score: 0.6076804864602984
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score: 0.6076804864602984
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score: 0.659062774389974
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score: 0.659062774389974
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
--------------------------------------------------------------------------------
Working with vectorstore in PG#
Uploading a vectorstore in PG#
from langchain.vectorstores.pgvector import DistanceStrategy

api_key = os.environ['OPENAI_API_KEY']
connection_string = CONNECTION_STRING
collection_name = "state_of_the_union"

db = PGVector.from_documents(
    documents=docs,
    embedding=embeddings,
    collection_name=collection_name,
    connection_string=connection_string,
    distance_strategy=DistanceStrategy.COSINE,
    openai_api_key=api_key,
    pre_delete_collection=False,
)
Retrieving a vectorstore in PG#
connection_string = CONNECTION_STRING
embedding=embeddings
collection_name="state_of_the_union"
from langchain.vectorstores.pgvector import DistanceStrategy
store = PGVector(
connection_string=connection_string,
embedding_function=embedding,
collection_name=collection_name,
distance_strategy=DistanceStrategy.COSINE
)
retriever = store.as_retriever()
print(retriever)
vectorstore=<langchain.vectorstores.pgvector.PGVector object at 0x7fe9a1b1c670> search_type='similarity' search_kwargs={}
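The retriever behaves like any other LangChain retriever; a minimal usage sketch, reusing the query from earlier (get_relevant_documents is the generic retriever method, not something specific to PGVector):
query = "What did the president say about Ketanji Brown Jackson"
relevant_docs = retriever.get_relevant_documents(query)
print(relevant_docs[0].page_content)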
# When we have an existing PGVector index
DEFAULT_DISTANCE_STRATEGY = DistanceStrategy.EUCLIDEAN
db1 = PGVector.from_existing_index(
embedding=embeddings,
collection_name="state_of_the_union",
distance_strategy=DEFAULT_DISTANCE_STRATEGY,
pre_delete_collection = False,
connection_string=CONNECTION_STRING,
)
query = "What did the president say about Ketanji Brown Jackson"
docs_with_score: List[Tuple[Document, float]] = db1.similarity_search_with_score(query)
print(docs_with_score)
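The reconnected index supports the same query methods as a freshly created one; for example, a plain similarity search limited to the two closest chunks (k is the standard argument name):
top_docs = db1.similarity_search(query, k=2)
for doc in top_docs:
    print(doc.page_content[:120])
    print("-" * 80)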