767d5b975f00-3
your application code. If running in Google Colab, authenticate with google.colab.google.auth; otherwise follow one of the supported methods to make sure that your Application Default Credentials are properly set.

```python
import sys

if "google.colab" in sys.modules:
    from google.colab import auth as google_auth

    google_auth.authenticate_user()
```

Configure and use the Enterprise Search retriever

The Enterprise Search retriever is implemented in the langchain.retrievers.GoogleCloudEnterpriseSearchRetriever class. The get_relevant_documents method returns a list of langchain.schema.Document documents where the page_content field of each document is populated with either an extractive segment or an extractive answer that matches a query. The metadata field is populated with the metadata (if any) of the document from which the segments or answers were extracted.

An extractive answer is verbatim text that is returned with each search result. It is extracted directly from the original document. Extractive answers are typically displayed near the top of web pages to provide an end user with a brief answer that is contextually relevant to their query. Extractive answers are available for website and unstructured search.

An extractive segment is verbatim text that is returned with each search result. An extractive segment is usually more verbose than an extractive answer. Extractive segments can be displayed as an answer to a query, and can be used to perform post-processing tasks and as input for large language models to generate answers or new text. Extractive segments are available for unstructured search.

For more information about extractive segments and extractive answers refer to the product documentation.

When creating an instance of the retriever you can specify a number of parameters that control which Enterprise data store to access and how a natural language query is processed, including configurations for extractive answers and segments.

The mandatory parameters are:

- project_id - Your Google Cloud PROJECT_ID
- search_engine_id - The ID of the data store you want to use.
https://python.langchain.com/docs/integrations/retrievers/google_cloud_enterprise_search
767d5b975f00-4
The project_id and search_engine_id parameters can be provided explicitly in the retriever's constructor or through the environment variables PROJECT_ID and SEARCH_ENGINE_ID.

You can also configure a number of optional parameters, including:

- max_documents - The maximum number of documents used to provide extractive segments or extractive answers
- get_extractive_answers - By default, the retriever is configured to return extractive segments. Set this field to True to return extractive answers
- max_extractive_answer_count - The maximum number of extractive answers returned in each search result.
https://python.langchain.com/docs/integrations/retrievers/google_cloud_enterprise_search
767d5b975f00-5
At most 5 answers will be returned.
- max_extractive_segment_count - The maximum number of extractive segments returned in each search result. Currently one segment will be returned
- filter - The filter expression that allows you to filter the search results based on the metadata associated with the documents in the searched data store.
- query_expansion_condition - Specification to determine under which conditions query expansion should occur.
  - 0 - Unspecified query expansion condition. In this case, server behavior defaults to disabled.
  - 1 - Disabled query expansion. Only the exact search query is used, even if SearchResponse.total_size is zero.
https://python.langchain.com/docs/integrations/retrievers/google_cloud_enterprise_search
767d5b975f00-6
  - 2 - Automatic query expansion built by the Search API.

Configure and use the retriever with extractive segments

```python
from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever

PROJECT_ID = "<YOUR PROJECT ID>"  # Set to your Project ID
SEARCH_ENGINE_ID = "<YOUR SEARCH ENGINE ID>"  # Set to your data store ID

retriever = GoogleCloudEnterpriseSearchRetriever(
    project_id=PROJECT_ID,
    search_engine_id=SEARCH_ENGINE_ID,
    max_documents=3,
)

query = "What are Alphabet's Other Bets?"
result = retriever.get_relevant_documents(query)
for doc in result:
    print(doc)
```

Configure and use the retriever with extractive answers

```python
retriever = GoogleCloudEnterpriseSearchRetriever(
    project_id=PROJECT_ID,
    search_engine_id=SEARCH_ENGINE_ID,
    max_documents=3,
    max_extractive_answer_count=3,
    get_extractive_answers=True,
)

query = "What are Alphabet's Other Bets?"
result = retriever.get_relevant_documents(query)
for doc in result:
    print(doc)
```
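The filter and query_expansion_condition parameters described above are not exercised in these examples. The sketch below shows how they might be passed; it assumes both are accepted as plain constructor keyword arguments, and the filter expression itself is a hypothetical example, not from this page:

```python
# A minimal sketch, assuming filter and query_expansion_condition are
# constructor kwargs as the parameter list above suggests.
retriever = GoogleCloudEnterpriseSearchRetriever(
    project_id=PROJECT_ID,
    search_engine_id=SEARCH_ENGINE_ID,
    max_documents=3,
    filter='category: ANY("annual_report")',  # hypothetical metadata filter
    query_expansion_condition=2,  # 2 = automatic query expansion (see list above)
)
```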
https://python.langchain.com/docs/integrations/retrievers/google_cloud_enterprise_search
3b0143863767-0
ChatGPT Plugin | 🦜🔗 LangChain
https://python.langchain.com/docs/integrations/retrievers/chatgpt-plugin
3b0143863767-1
ChatGPT Plugin

OpenAI plugins connect ChatGPT to third-party applications. These plugins enable ChatGPT to interact with APIs defined by developers, enhancing ChatGPT's capabilities and allowing it to perform a wide range of actions.

Plugins can allow ChatGPT to do things like:

- Retrieve real-time information; e.g., sports scores, stock prices, the latest news, etc.
- Retrieve knowledge-base information; e.g., company docs, personal notes, etc.
- Perform actions on behalf of the user; e.g., booking a flight, ordering food, etc.

This notebook shows how to use the ChatGPT Retriever Plugin within LangChain.

```python
# STEP 1: Load
# Load documents using LangChain's DocumentLoaders
# This is from https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/csv.html
from langchain.document_loaders.csv_loader import CSVLoader

loader = CSVLoader(
    file_path="../../document_loaders/examples/example_data/mlb_teams_2012.csv"
)
data = loader.load()
```
https://python.langchain.com/docs/integrations/retrievers/chatgpt-plugin
3b0143863767-2
```python
# STEP 2: Convert
# Convert Document to the format expected by https://github.com/openai/chatgpt-retrieval-plugin
from typing import List
import json

from langchain.docstore.document import Document


def write_json(path: str, documents: List[Document]) -> None:
    results = [{"text": doc.page_content} for doc in documents]
    with open(path, "w") as f:
        json.dump(results, f, indent=2)


write_json("foo.json", data)

# STEP 3: Use
# Ingest this as you would any other json file in
# https://github.com/openai/chatgpt-retrieval-plugin/tree/main/scripts/process_json
```

Using the ChatGPT Retriever Plugin

Okay, so we've created the ChatGPT Retriever Plugin, but how do we actually use it? The code below walks through how to do that.

We want to use ChatGPTPluginRetriever, so we have to get the OpenAI API Key.

```python
import os
import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
```

OpenAI API Key: ········

```python
from langchain.retrievers import ChatGPTPluginRetriever

retriever = ChatGPTPluginRetriever(url="http://0.0.0.0:8000", bearer_token="foo")
retriever.get_relevant_documents("alice's phone number")
```

[Document(page_content="This is Alice's phone number: 123-456-7890", lookup_str='', metadata={'id': '456_0', 'metadata': {'source': 'email', 'source_id': '567', 'url': None, 'created_at': '1609592400.0', 'author': 'Alice', 'document_id': '456'}, 'embedding': None, 'score': 0.925571561}, lookup_index=0),
https://python.langchain.com/docs/integrations/retrievers/chatgpt-plugin
3b0143863767-3
 Document(page_content='This is a document about something', lookup_str='', metadata={'id': '123_0', 'metadata': {'source': 'file', 'source_id': 'https://example.com/doc1', 'url': 'https://example.com/doc1', 'created_at': '1609502400.0', 'author': 'Alice', 'document_id': '123'}, 'embedding': None, 'score': 0.6987589}, lookup_index=0),
 Document(page_content='Team: Angels "Payroll (millions)": 154.49 "Wins": 89', lookup_str='', metadata={'id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631_0', 'metadata': {'source': None, 'source_id': None, 'url': None, 'created_at': None, 'author': None, 'document_id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631'}, 'embedding': None, 'score': 0.697888613}, lookup_index=0)]
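Beyond url and bearer_token, the retriever exposes a couple of knobs that mirror the plugin's /query request schema. The sketch below treats top_k and filter as constructor arguments; both names are assumptions based on that schema, not taken from this page:

```python
# A hedged sketch: top_k and filter are assumed to pass through to the
# chatgpt-retrieval-plugin /query endpoint.
retriever = ChatGPTPluginRetriever(
    url="http://0.0.0.0:8000",
    bearer_token="foo",
    top_k=5,  # return up to 5 results per query
    filter={"source": "email"},  # hypothetical metadata filter
)
```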
https://python.langchain.com/docs/integrations/retrievers/chatgpt-plugin
9e5d720c0896-0
Cohere Reranker | 🦜🔗 LangChain
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
9e5d720c0896-1
Cohere Reranker

Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.

This notebook shows how to use Cohere's rerank endpoint in a retriever. This builds on top of ideas in the ContextualCompressionRetriever.

```python
#!pip install cohere
#!pip install faiss
# OR (depending on Python version)
#!pip install faiss-cpu

# get a new token: https://dashboard.cohere.ai/
import os
import getpass

os.environ["COHERE_API_KEY"] = getpass.getpass("Cohere API Key:")
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")


# Helper function for printing docs
def pretty_print_docs(docs):
    print(
        f"\n{'-' * 100}\n".join(
            [f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)]
        )
    )
```

Set up the base vector store retriever
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
9e5d720c0896-2
Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs.

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_loaders import TextLoader
from langchain.vectorstores import FAISS

documents = TextLoader("../../../state_of_the_union.txt").load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)
texts = text_splitter.split_documents(documents)
retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever(
    search_kwargs={"k": 20}
)

query = "What did the president say about Ketanji Brown Jackson"
docs = retriever.get_relevant_documents(query)
pretty_print_docs(docs)
```

Document 1:

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

----------------------------------------------------------------------------------------------------

Document 2:

As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
9e5d720c0896-3
----------------------------------------------------------------------------------------------------

Document 3:

A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.

----------------------------------------------------------------------------------------------------

Document 4:

He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.

----------------------------------------------------------------------------------------------------

Document 5:

I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
9e5d720c0896-4
So let’s not abandon our streets. Or choose between safety and equal justice.

----------------------------------------------------------------------------------------------------

Document 6:

Vice President Harris and I ran for office with a new economic vision for America. Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up and the middle out, not from the top down. Because we know that when the middle class grows, the poor have a ladder up and the wealthy do very well. America used to have the best roads, bridges, and airports on Earth. Now our infrastructure is ranked 13th in the world.

----------------------------------------------------------------------------------------------------

Document 7:

And tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. By the end of this year, the deficit will be down to less than half what it was before I took office. The only president ever to cut the deficit by more than one trillion dollars in a single year. Lowering your costs also means demanding more competition. I’m a capitalist, but capitalism without competition isn’t capitalism. It’s exploitation—and it drives up prices.

----------------------------------------------------------------------------------------------------

Document 8:
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
9e5d720c0896-5
For the past 40 years we were told that if we gave tax breaks to those at the very top, the benefits would trickle down to everyone else. But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. Vice President Harris and I ran for office with a new economic vision for America.

----------------------------------------------------------------------------------------------------

Document 9:

All told, we created 369,000 new manufacturing jobs in America just last year. Powered by people I’ve met like JoJo Burgess, from generations of union steelworkers from Pittsburgh, who’s here with us tonight. As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.” It’s time. But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills.

----------------------------------------------------------------------------------------------------

Document 10:

I’m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve. And fourth, let’s end cancer as we know it. This is personal to me and Jill, to Kamala, and to so many of you. Cancer is the #2 cause of death in America–second only to heart disease.

----------------------------------------------------------------------------------------------------

Document 11:
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
9e5d720c0896-6
He will never extinguish their love of freedom. He will never weaken the resolve of the free world. We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. The pandemic has been punishing. And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. I understand.

----------------------------------------------------------------------------------------------------

Document 12:

Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny.

----------------------------------------------------------------------------------------------------

Document 13:

I know. One of those soldiers was my son Major Beau Biden. We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. But I’m committed to finding out everything we can.
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
9e5d720c0896-7
Committed to military families like Danielle Robinson from Ohio. The widow of Sergeant First Class Heath Robinson. He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq.

----------------------------------------------------------------------------------------------------

Document 14:

And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid epidemic. There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery.

----------------------------------------------------------------------------------------------------

Document 15:

Third, support our veterans. Veterans are the best of us. I’ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home. My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. Our troops in Iraq and Afghanistan faced many dangers.

----------------------------------------------------------------------------------------------------

Document 16:

When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we haven’t done in a long time: build a better America.
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
9e5d720c0896-8
For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation. And I know you’re tired, frustrated, and exhausted. But I also know this.

----------------------------------------------------------------------------------------------------

Document 17:

Now is the hour. Our moment of responsibility. Our test of resolve and conscience, of history itself. It is in this moment that our character is formed. Our purpose is found. Our future is forged. Well I know this nation. We will meet the test. To protect freedom and liberty, to expand fairness and opportunity. We will save democracy. As hard as these times have been, I am more optimistic about America today than I have been my whole life.

----------------------------------------------------------------------------------------------------

Document 18:

He didn’t know how to stop fighting, and neither did she. Through her pain she found purpose to demand we do better. Tonight, Danielle—we are. The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. And tonight, I’m announcing we’re expanding eligibility to veterans suffering from nine respiratory cancers.

----------------------------------------------------------------------------------------------------

Document 19:
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
9e5d720c0896-9
I understand. I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. That’s why one of the first things I did as President was fight to pass the American Rescue Plan. Because people were hurting. We needed to act, and we did. Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.

----------------------------------------------------------------------------------------------------

Document 20:

So let’s not abandon our streets. Or choose between safety and equal justice. Let’s come together to protect our communities, restore trust, and hold law enforcement accountable. That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.

Doing reranking with CohereRerank

Now let's wrap our base retriever with a ContextualCompressionRetriever. We'll add a CohereRerank, which uses the Cohere rerank endpoint to rerank the returned results.

```python
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank

llm = OpenAI(temperature=0)
compressor = CohereRerank()
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)
```
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
9e5d720c0896-10
```python
compressed_docs = compression_retriever.get_relevant_documents(
    "What did the president say about Ketanji Jackson Brown"
)
pretty_print_docs(compressed_docs)
```

Document 1:

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

----------------------------------------------------------------------------------------------------

Document 2:

I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. So let’s not abandon our streets. Or choose between safety and equal justice.

----------------------------------------------------------------------------------------------------

Document 3:

A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
9e5d720c0896-11
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.

You can of course use this retriever within a QA pipeline:

```python
from langchain.chains import RetrievalQA

chain = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0), retriever=compression_retriever
)
chain({"query": query})
```

{'query': 'What did the president say about Ketanji Brown Jackson',
 'result': " The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she is a consensus builder who has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."}
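CohereRerank is a document compressor, so it can also be exercised directly on an in-memory list of documents without wrapping a retriever. A minimal sketch, assuming the standard compress_documents(documents, query) method of LangChain's document compressor interface and the top_n parameter for limiting results:

```python
# A minimal sketch: rerank a small list of documents directly.
from langchain.schema import Document

docs_to_rank = [
    Document(page_content="The judge was confirmed by the Senate."),
    Document(page_content="The weather in Washington was mild."),
]
ranked = CohereRerank(top_n=1).compress_documents(
    docs_to_rank, query="Who was confirmed?"
)
print(ranked[0].page_content)  # expect the confirmation sentence first
```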
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
97078d20669d-0
Pinecone Hybrid Search | 🦜🔗 LangChain
https://python.langchain.com/docs/integrations/retrievers/pinecone_hybrid_search
97078d20669d-1
Pinecone Hybrid Search

Pinecone is a vector database with broad functionality.

This notebook goes over how to use a retriever that under the hood uses Pinecone and Hybrid Search.

The logic of this retriever is taken from this documentation.

To use Pinecone, you must have an API key and an Environment.
https://python.langchain.com/docs/integrations/retrievers/pinecone_hybrid_search
97078d20669d-2
Here are the installation instructions.

```python
#!pip install pinecone-client pinecone-text
import os
import getpass

os.environ["PINECONE_API_KEY"] = getpass.getpass("Pinecone API Key:")

from langchain.retrievers import PineconeHybridSearchRetriever

os.environ["PINECONE_ENVIRONMENT"] = getpass.getpass("Pinecone Environment:")
```

We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.

```python
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
```

Setup Pinecone

You should only have to do this part once.

Note: it's important to make sure that the "context" field that holds the document text in the metadata is not indexed. Currently you need to specify explicitly the fields you do want to index. For more information check out Pinecone's docs.

```python
import os
import pinecone

api_key = os.getenv("PINECONE_API_KEY") or "PINECONE_API_KEY"
# find environment next to your API key in the Pinecone console
env = os.getenv("PINECONE_ENVIRONMENT") or "PINECONE_ENVIRONMENT"

index_name = "langchain-pinecone-hybrid-search"

pinecone.init(api_key=api_key, environment=env)
pinecone.whoami()
```

WhoAmIResponse(username='load', user_label='label', projectname='load-test')

```python
# create the index
pinecone.create_index(
    name=index_name,
    dimension=1536,  # dimensionality of dense model
    metric="dotproduct",  # sparse values supported only for dotproduct
    pod_type="s1",
    metadata_config={"indexed": []},  # see explanation above
)
```

Now that it's created, we can use it:

```python
index = pinecone.Index(index_name)
```

Get embeddings and sparse encoders
https://python.langchain.com/docs/integrations/retrievers/pinecone_hybrid_search
97078d20669d-3
Embeddings are used for the dense vectors; a tokenizer is used for the sparse vector.

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
```

To encode the text to sparse values you can either choose SPLADE or BM25. For out-of-domain tasks we recommend using BM25.

For more information about the sparse encoders you can check out the pinecone-text library docs.

```python
from pinecone_text.sparse import BM25Encoder

# or from pinecone_text.sparse import SpladeEncoder if you wish to work with SPLADE

# use default tf-idf values
bm25_encoder = BM25Encoder().default()
```

The above code is using default tf-idf values. It's highly recommended to fit the tf-idf values to your own corpus. You can do it as follows:

```python
corpus = ["foo", "bar", "world", "hello"]

# fit tf-idf values on your corpus
bm25_encoder.fit(corpus)

# store the values to a json file
bm25_encoder.dump("bm25_values.json")

# load to your BM25Encoder object
bm25_encoder = BM25Encoder().load("bm25_values.json")
```

Load Retriever

We can now construct the retriever!

```python
retriever = PineconeHybridSearchRetriever(
    embeddings=embeddings, sparse_encoder=bm25_encoder, index=index
)
```

Add texts (if necessary)

We can optionally add texts to the retriever (if they aren't already in there).

```python
retriever.add_texts(["foo", "bar", "world", "hello"])
```

100%|██████████| 1/1 [00:02<00:00, 2.27s/it]
https://python.langchain.com/docs/integrations/retrievers/pinecone_hybrid_search
97078d20669d-4
Use Retriever

We can now use the retriever!

```python
result = retriever.get_relevant_documents("foo")
result[0]
```

Document(page_content='foo', metadata={})
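To see what the sparse half of a hybrid query actually looks like, you can call the encoder directly. A small sketch, where encode_documents and encode_queries are taken from the pinecone-text library's documented interface (verify against its docs):

```python
# Inspect the sparse representations the BM25 encoder produces.
sparse_doc = bm25_encoder.encode_documents("foo bar")
sparse_query = bm25_encoder.encode_queries("foo")
print(sparse_doc)    # e.g. {'indices': [...], 'values': [...]}
print(sparse_query)  # query-side weighting differs from document-side
```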
https://python.langchain.com/docs/integrations/retrievers/pinecone_hybrid_search
8c7b118da278-0
Chaindesk | 🦜🔗 LangChain
https://python.langchain.com/docs/integrations/retrievers/chaindesk
8c7b118da278-1
Chaindesk

The Chaindesk platform brings data from anywhere (Datasources: Text, PDF, Word, PowerPoint, Excel, Notion, Airtable, Google Sheets, etc.) into Datastores (containers of multiple Datasources).
https://python.langchain.com/docs/integrations/retrievers/chaindesk
8c7b118da278-2
Then your Datastores can be connected to ChatGPT via Plugins or to any other Large Language Model (LLM) via the Chaindesk API.

This notebook shows how to use Chaindesk's retriever.

First, you will need to sign up for Chaindesk, create a datastore, add some data and get your datastore api endpoint url. You need the API Key.

Query

Now that our index is set up, we can set up a retriever and start querying it.

```python
from langchain.retrievers import ChaindeskRetriever

retriever = ChaindeskRetriever(
    datastore_url="https://clg1xg2h80000l708dymr0fxc.chaindesk.ai/query",
    # api_key="CHAINDESK_API_KEY",  # optional if datastore is public
    # top_k=10,  # optional
)

retriever.get_relevant_documents("What is Daftpage?")
```

[Document(page_content='✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramGetting StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!DaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord', metadata={'source': 'https:/daftpage.com/help/getting-started', 'score': 0.8697265}),
 Document(page_content="✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramHelp CenterWelcome to Daftpage’s
https://python.langchain.com/docs/integrations/retrievers/chaindesk
8c7b118da278-3
help center—the one-stop shop for learning everything about building websites with Daftpage.Daftpage is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at [email protected] the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.86570895}),
 Document(page_content=" is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton mode
https://python.langchain.com/docs/integrations/retrievers/chaindesk
8c7b118da278-4
Cant find the answer you're looking for?mail us at [email protected] the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.8645384})]
https://python.langchain.com/docs/integrations/retrievers/chaindesk
930fa87bfa4e-0
BM25 | 🦜🔗 LangChain
https://python.langchain.com/docs/integrations/retrievers/bm25
930fa87bfa4e-1
BM25

BM25, also known as Okapi BM25, is a ranking function used in information retrieval systems to estimate the relevance of documents to a given search query.

This notebook goes over how to use a retriever that under the hood uses BM25, via the rank_bm25 package.

```python
# !pip install rank_bm25
from langchain.retrievers import BM25Retriever
```

/workspaces/langchain/.venv/lib/python3.10/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.10) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.
  warnings.warn(

Create New Retriever with Texts

```python
retriever = BM25Retriever.from_texts(["foo", "bar", "world", "hello", "foo bar"])
```

Create a New Retriever with Documents

You can now create a new retriever with the documents you created.

```python
from langchain.schema import Document

retriever = BM25Retriever.from_documents(
    [
        Document(page_content="foo"),
        Document(page_content="bar"),
        Document(page_content="world"),
        Document(page_content="hello"),
        Document(page_content="foo bar"),
    ]
)
```
https://python.langchain.com/docs/integrations/retrievers/bm25
930fa87bfa4e-2
Use Retriever

We can now use the retriever!

```python
result = retriever.get_relevant_documents("foo")
result
```

[Document(page_content='foo', metadata={}),
 Document(page_content='foo bar', metadata={}),
 Document(page_content='hello', metadata={}),
 Document(page_content='world', metadata={})]
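To see the scoring that BM25Retriever delegates to, you can use the rank_bm25 package directly. A minimal sketch with naive whitespace tokenization (the retriever's actual preprocessing may differ):

```python
# Score a query against a tiny corpus with rank_bm25 directly.
from rank_bm25 import BM25Okapi

corpus = ["foo", "bar", "world", "hello", "foo bar"]
tokenized_corpus = [doc.split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)
print(bm25.get_scores("foo".split()))  # one relevance score per document
```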
https://python.langchain.com/docs/integrations/retrievers/bm25
d716d83358e3-0
Weaviate Hybrid Search | 🦜🔗 LangChain
https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid
d716d83358e3-1
Weaviate Hybrid Search

Weaviate is an open source vector database.

Hybrid search is a technique that combines multiple search algorithms to improve the accuracy and relevance of search results. It combines the best features of keyword-based search algorithms with vector search techniques.

Hybrid search in Weaviate uses sparse and dense vectors to represent the meaning and context of search queries and documents.

This notebook shows how to use Weaviate hybrid search as a LangChain retriever.

Set up the retriever:

```python
#!pip install weaviate-client
import os

import weaviate

WEAVIATE_URL = os.getenv("WEAVIATE_URL")
auth_client_secret = weaviate.AuthApiKey(api_key=os.getenv("WEAVIATE_API_KEY"))
client = weaviate.Client(
    url=WEAVIATE_URL,
    auth_client_secret=auth_client_secret,
    additional_headers={
        "X-Openai-Api-Key": os.getenv("OPENAI_API_KEY"),
    },
)

# client.schema.delete_all()

from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
from langchain.schema import Document

retriever = WeaviateHybridSearchRetriever(
    client=client,
    index_name="LangChain",
    text_key="text",
    attributes=[],
    create_schema_if_missing=True,
)
```
https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid
d716d83358e3-2
Add some data:

```python
docs = [
    Document(
        metadata={
            "title": "Embracing The Future: AI Unveiled",
            "author": "Dr. Rebecca Simmons",
        },
        page_content="A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.",
    ),
    Document(
        metadata={
            "title": "Symbiosis: Harmonizing Humans and AI",
            "author": "Prof. Jonathan K. Sterling",
        },
        page_content="Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.",
    ),
    Document(
        metadata={"title": "AI: The Ethical Quandary", "author": "Dr. Rebecca Simmons"},
        page_content="In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.",
    ),
```
https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid
d716d83358e3-3
```python
    Document(
        metadata={
            "title": "Conscious Constructs: The Search for AI Sentience",
            "author": "Dr. Samuel Cortez",
        },
        page_content="Dr. Cortez takes readers on a journey exploring the controversial topic of AI consciousness. The book provides compelling arguments for and against the possibility of true AI sentience.",
    ),
    Document(
        metadata={
            "title": "Invisible Routines: Hidden AI in Everyday Life",
            "author": "Prof. Jonathan K. Sterling",
        },
        page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.",
    ),
]

retriever.add_documents(docs)
```

['3a27b0a5-8dbb-4fee-9eba-8b6bc2c252be',
 'eeb9fd9b-a3ac-4d60-a55b-a63a25d3b907',
 '7ebbdae7-1061-445f-a046-1989f2343d8f',
 'c2ab315b-3cab-467f-b23a-b26ed186318d',
https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid
d716d83358e3-4
 'b83765f2-e5d2-471f-8c02-c3350ade4c4f']

Do a hybrid search:

```python
retriever.get_relevant_documents("the ethical implications of AI")
```

[Document(page_content='In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.', metadata={}),
 Document(page_content='A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.', metadata={}),
 Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={}),
 Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={})]

Do a hybrid search with where filter:

```python
retriever.get_relevant_documents(
    "AI integration in society",
    where_filter={
        "path": ["author"],
        "operator": "Equal",
        "valueString": "Prof. Jonathan K. Sterling",
    },
)
```

[Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={}),
https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid
d716d83358e3-5
 Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={})]

Do a hybrid search with scores:

```python
retriever.get_relevant_documents(
    "AI integration in society",
    score=True,
)
```

[Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={'_additional': {'explainScore': '(bm25)\n(hybrid) Document eeb9fd9b-a3ac-4d60-a55b-a63a25d3b907 contributed 0.00819672131147541 to the score\n(hybrid) Document eeb9fd9b-a3ac-4d60-a55b-a63a25d3b907 contributed 0.00819672131147541 to the score', 'score': '0.016393442'}}),
 Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={'_additional': {'explainScore': '(bm25)\n(hybrid) Document b83765f2-e5d2-471f-8c02-c3350ade4c4f contributed 0.0078125 to the score\n(hybrid) Document b83765f2-e5d2-471f-8c02-c3350ade4c4f contributed 0.008064516129032258 to the score',
https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid
d716d83358e3-6
'score': '0.015877016'}}),
 Document(page_content='In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.', metadata={'_additional': {'explainScore': '(bm25)\n(hybrid) Document 7ebbdae7-1061-445f-a046-1989f2343d8f contributed 0.008064516129032258 to the score\n(hybrid) Document 7ebbdae7-1061-445f-a046-1989f2343d8f contributed 0.0078125 to the score', 'score': '0.015877016'}}),
 Document(page_content='A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.', metadata={'_additional': {'explainScore': '(vector) [-0.0071824766 -0.0006682752 0.001723625 -0.01897258 -0.0045127636 0.0024410256 -0.020503938 0.013768672 0.009520169 -0.037972264]... \n(hybrid) Document 3a27b0a5-8dbb-4fee-9eba-8b6bc2c252be contributed 0.007936507936507936 to the score', 'score': '0.007936508'}})]
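To build intuition for the scores above: a hybrid ranker fuses a sparse (keyword/BM25) score and a dense (vector) score per document. The toy sketch below uses a convex combination; Weaviate's actual fusion (and its alpha weighting) is more involved, so treat this purely as illustration:

```python
# Toy illustration of hybrid score fusion: alpha blends dense vs. sparse.
def hybrid_score(dense_score: float, sparse_score: float, alpha: float = 0.5) -> float:
    # alpha=1.0 -> pure vector search; alpha=0.0 -> pure keyword search
    return alpha * dense_score + (1 - alpha) * sparse_score


print(hybrid_score(dense_score=0.82, sparse_score=0.41, alpha=0.75))  # 0.7175
```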
https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid
e192fd5b7631-0
LOTR (Merger Retriever) | 🦜🔗 LangChain
https://python.langchain.com/docs/integrations/retrievers/merger_retriever
e192fd5b7631-1
LOTR (Merger Retriever)

Lord of the Retrievers, also known as MergerRetriever, takes a list of retrievers as input and merges the results of their get_relevant_documents() methods into a single list. The merged results will be a list of documents that are relevant to the query and that have been ranked by the different retrievers.

The MergerRetriever class can be used to improve the accuracy of document retrieval in a number of ways. First, it can combine the results of multiple retrievers, which can help to reduce the risk of bias in the results. Second, it can rank the results of the different retrievers, which can help to ensure that the most relevant documents are returned first.

```python
import os

import chromadb
from langchain.retrievers.merger_retriever import MergerRetriever
from langchain.vectorstores import Chroma
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_transformers import (
    EmbeddingsRedundantFilter,
    EmbeddingsClusteringFilter,
)
```
https://python.langchain.com/docs/integrations/retrievers/merger_retriever
e192fd5b7631-2
```python
from langchain.retrievers.document_compressors import DocumentCompressorPipeline
from langchain.retrievers import ContextualCompressionRetriever

# Get 3 diff embeddings.
all_mini = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
multi_qa_mini = HuggingFaceEmbeddings(model_name="multi-qa-MiniLM-L6-dot-v1")
filter_embeddings = OpenAIEmbeddings()

ABS_PATH = os.path.dirname(os.path.abspath(__file__))
DB_DIR = os.path.join(ABS_PATH, "db")

# Instantiate 2 diff Chroma indexes, each one with a diff embedding.
client_settings = chromadb.config.Settings(
    is_persistent=True,
    persist_directory=DB_DIR,
    anonymized_telemetry=False,
)
db_all = Chroma(
    collection_name="project_store_all",
    persist_directory=DB_DIR,
    client_settings=client_settings,
    embedding_function=all_mini,
)
db_multi_qa = Chroma(
    collection_name="project_store_multi",
    persist_directory=DB_DIR,
    client_settings=client_settings,
    embedding_function=multi_qa_mini,
)

# Define 2 diff retrievers with 2 diff embeddings and diff search type.
retriever_all = db_all.as_retriever(
    search_type="similarity", search_kwargs={"k": 5, "include_metadata": True}
)
retriever_multi_qa = db_multi_qa.as_retriever(
    search_type="mmr", search_kwargs={"k": 5, "include_metadata": True}
)

# The Lord of the Retrievers will hold the output of both retrievers and can be
# used as any other retriever on different types of chains.
lotr = MergerRetriever(retrievers=[retriever_all, retriever_multi_qa])
```
https://python.langchain.com/docs/integrations/retrievers/merger_retriever
e192fd5b7631-3
Remove redundant results from the merged retrievers.

```python
# We can remove redundant results from both retrievers using yet another embedding.
# Using multiple embeddings in diff steps could help reduce biases.
filter = EmbeddingsRedundantFilter(embeddings=filter_embeddings)
pipeline = DocumentCompressorPipeline(transformers=[filter])
compression_retriever = ContextualCompressionRetriever(
    base_compressor=pipeline, base_retriever=lotr
)
```

Pick a representative sample of documents from the merged retrievers.

```python
# This filter will divide the documents vectors into clusters or "centers" of meaning.
# Then it will pick the closest document to that center for the final results.
# By default the result document will be ordered/grouped by clusters.
filter_ordered_cluster = EmbeddingsClusteringFilter(
    embeddings=filter_embeddings,
    num_clusters=10,
    num_closest=1,
)

# If you want the final document to be ordered by the original retriever scores
# you need to add the "sorted" parameter.
filter_ordered_by_retriever = EmbeddingsClusteringFilter(
    embeddings=filter_embeddings,
    num_clusters=10,
    num_closest=1,
    sorted=True,
)

pipeline = DocumentCompressorPipeline(transformers=[filter_ordered_by_retriever])
compression_retriever = ContextualCompressionRetriever(
    base_compressor=pipeline, base_retriever=lotr
)
```

Re-order results to avoid performance degradation.

No matter the architecture of your model, there is a substantial performance degradation when you include 10+ retrieved documents.
https://python.langchain.com/docs/integrations/retrievers/merger_retriever
e192fd5b7631-4
In brief: when models must access relevant information in the middle of long contexts, they tend to ignore the provided documents. See: https://arxiv.org/abs/2307.03172

```python
# You can use an additional document transformer to reorder documents
# after removing redundancy.
from langchain.document_transformers import LongContextReorder

filter = EmbeddingsRedundantFilter(embeddings=filter_embeddings)
reordering = LongContextReorder()
pipeline = DocumentCompressorPipeline(transformers=[filter, reordering])
compression_retriever_reordered = ContextualCompressionRetriever(
    base_compressor=pipeline, base_retriever=lotr
)
```
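To make the merge step concrete, here is a toy sketch of round-robin interleaving across per-retriever result lists, which is the kind of merge MergerRetriever performs; treat the exact interleaving order as an assumption for illustration, not a guarantee of the library's behavior:

```python
# A toy sketch of round-robin merging of per-retriever result lists.
from itertools import zip_longest


def merge_round_robin(*result_lists):
    merged = []
    for batch in zip_longest(*result_lists):
        merged.extend(doc for doc in batch if doc is not None)
    return merged


print(merge_round_robin(["a1", "a2"], ["b1", "b2", "b3"]))
# ['a1', 'b1', 'a2', 'b2', 'b3']
```

A real merge would also deduplicate near-identical results; that is what the EmbeddingsRedundantFilter step above handles.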
https://python.langchain.com/docs/integrations/retrievers/merger_retriever
0da178c9509c-0
kNN | 🦜🔗 LangChain
https://python.langchain.com/docs/integrations/retrievers/knn
0da178c9509c-1
kNN

In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression.

This notebook goes over how to use a retriever that under the hood uses kNN.

Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.html

```python
from langchain.retrievers import KNNRetriever
from langchain.embeddings import OpenAIEmbeddings
```

Create New Retriever with Texts

```python
retriever = KNNRetriever.from_texts(
    ["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings()
)
```

Use Retriever

We can now use the retriever!

```python
result = retriever.get_relevant_documents("foo")
result
```

[Document(page_content='foo', metadata={}),
 Document(page_content='foo bar', metadata={}),
 Document(page_content='hello', metadata={}),
 Document(page_content='bar', metadata={})]
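Under the hood, kNN retrieval amounts to ranking stored embedding vectors by similarity to the query embedding. A minimal numpy sketch of the idea (the normalization mirrors the notebook linked above, but the details are illustrative rather than KNNRetriever's exact code):

```python
# Illustrative kNN retrieval over embedding vectors with numpy.
import numpy as np


def knn_indices(query_vec, doc_matrix, k=4):
    # L2-normalize, then rank by dot product (i.e., cosine similarity).
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(-scores)[:k]


docs_matrix = np.random.randn(5, 1536)  # stand-in for 5 embedded texts
query_vec = docs_matrix[0] + 0.01 * np.random.randn(1536)
print(knn_indices(query_vec, docs_matrix))  # index 0 should rank first
```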
https://python.langchain.com/docs/integrations/retrievers/knn
b00519abbbd2-0
PubMed | 🦜🔗 LangChain
https://python.langchain.com/docs/integrations/retrievers/pubmed
b00519abbbd2-1
PubMed

This notebook goes over how to use PubMed as a retriever.

PubMed® comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.

```python
from langchain.retrievers import PubMedRetriever

retriever = PubMedRetriever()
retriever.get_relevant_documents("chatgpt")
```

[Document(page_content='', metadata={'uid': '37268021', 'title': 'Dermatology in the wake of an AI revolution: who gets a say?', 'pub_date': '<Year>2023</Year><Month>May</Month><Day>31</Day>'}),
 Document(page_content='', metadata={'uid': '37267643', 'title': 'What is ChatGPT and what do we do with it? Implications of the age of AI for nursing and midwifery practice and education: An editorial.', 'pub_date': '<Year>2023</Year><Month>May</Month><Day>30</Day>'}),
https://python.langchain.com/docs/integrations/retrievers/pubmed
b00519abbbd2-2
     Document(page_content='The nursing field has undergone notable changes over time and is projected to undergo further modifications in the future, owing to the advent of sophisticated technologies and growing healthcare needs. The advent of ChatGPT, an AI-powered language model, is expected to exert a significant influence on the nursing profession, specifically in the domains of patient care and instruction. The present article delves into the ramifications of ChatGPT within the nursing domain and accentuates its capacity and constraints to transform the discipline.', metadata={'uid': '37266721', 'title': 'The Impact of ChatGPT on the Nursing Profession: Revolutionizing Patient Care and Education.', 'pub_date': '<Year>2023</Year><Month>Jun</Month><Day>02</Day>'})]
https://python.langchain.com/docs/integrations/retrievers/pubmed
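Since the retriever returns standard Document objects, it can be dropped into a QA chain like any other retriever. A minimal sketch (assumes a valid OPENAI_API_KEY in the environment; the question is illustrative):

from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA
from langchain.retrievers import PubMedRetriever

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo"),
    retriever=PubMedRetriever(),
)
# ask a question that should be answerable from recent PubMed citations
print(qa.run("What does recent literature say about ChatGPT in nursing?"))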
6a6ad270345a-0
Arxiv | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/retrievers/arxiv
6a6ad270345a-1
Arxiv

arXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.

This notebook shows how to retrieve scientific articles from Arxiv.org into the Document format that is used downstream.

Installation

First, you need to install the arxiv python package.

#!pip install arxiv

ArxivRetriever has these arguments:

- optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.
- optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (date when the document was published/last updated), Title, Authors, Summary. If True, other fields are also downloaded.

get_relevant_documents() has one argument, query: free text which is used to find documents in Arxiv.org.

Examples

Running retriever

from langchain.retrievers import ArxivRetriever
https://python.langchain.com/docs/integrations/retrievers/arxiv
6a6ad270345a-2
retriever = ArxivRetriever(load_max_docs=2)
docs = retriever.get_relevant_documents(query="1605.08386")
docs[0].metadata  # meta-information of the Document

    {'Published': '2016-05-26',
     'Title': 'Heat-bath random walks with Markov bases',
     'Authors': 'Caprice Stanley, Tobias Windisch',
     'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}

docs[0].page_content[:400]  # the content of the Document

    'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on fibers of a\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\nbehaviour of heat-b'
https://python.langchain.com/docs/integrations/retrievers/arxiv
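To see more than the four default metadata fields, the load_all_available_meta flag described above can be switched on. A quick sketch (the exact metadata keys will vary by paper):

from langchain.retrievers import ArxivRetriever

retriever = ArxivRetriever(load_max_docs=2, load_all_available_meta=True)
docs = retriever.get_relevant_documents(query="1605.08386")
# with load_all_available_meta=True the metadata should include extra fields
# beyond Published, Title, Authors and Summary
print(sorted(docs[0].metadata.keys()))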
6a6ad270345a-3
Question Answering on facts

# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass

OPENAI_API_KEY = getpass()

import os

os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

model = ChatOpenAI(model_name="gpt-3.5-turbo")  # switch to 'gpt-4'
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)

questions = [
    "What are Heat-bath random walks with Markov base?",
    "What is the ImageBind model?",
    "How does Compositional Reasoning with Large Language Models work?",
]
chat_history = []
for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result["answer"]))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")

    -> **Question**: What are Heat-bath random walks with Markov base?

    **Answer**: I'm not sure, as I don't have enough context to provide a definitive answer. The term "Heat-bath random walks with Markov base" is not mentioned in the given text. Could you provide more information or context about where you encountered this term?

    -> **Question**: What is the ImageBind model?

    **Answer**: ImageBind is an approach developed by Facebook AI Research to learn a joint embedding across six different modalities, including images, text, audio,
https://python.langchain.com/docs/integrations/retrievers/arxiv
6a6ad270345a-4
depth, thermal, and IMU data. The approach uses the binding property of images to align each modality's embedding to image embeddings and achieve an emergent alignment across all modalities. This enables novel multimodal capabilities, including cross-modal retrieval, embedding-space arithmetic, and audio-to-image generation, among others. The approach sets a new state-of-the-art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Additionally, it shows strong few-shot recognition results and serves as a new way to evaluate vision models for visual and non-visual tasks.

    -> **Question**: How does Compositional Reasoning with Large Language Models work?

    **Answer**: Compositional reasoning with large language models refers to the ability of these models to correctly identify and represent complex concepts by breaking them down into smaller, more basic parts and combining them in a structured way. This involves understanding the syntax and semantics of language and using that understanding to build up more complex meanings from simpler ones.

    In the context of the paper "Does CLIP Bind Concepts? Probing Compositionality in Large Image Models", the authors focus specifically on the ability of a large pretrained vision and language model (CLIP) to encode compositional concepts and to bind variables in a structure-sensitive way. They examine CLIP's ability to compose concepts in a single-object setting, as well as in situations where concept binding is needed.

    The authors situate their work within the tradition of research on compositional distributional semantics models (CDSMs), which seek to bridge the gap between distributional models and formal semantics by building architectures which operate over vectors yet still obey traditional theories of linguistic composition. They compare the performance of CLIP with several architectures from research on CDSMs
https://python.langchain.com/docs/integrations/retrievers/arxiv
6a6ad270345a-5
to evaluate its ability to encode and reason about compositional concepts.

questions = [
    "What are Heat-bath random walks with Markov base? Include references to answer.",
]
chat_history = []
for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result["answer"]))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")

    -> **Question**: What are Heat-bath random walks with Markov base? Include references to answer.

    **Answer**: Heat-bath random walks with Markov base (HB-MB) is a class of stochastic processes that have been studied in the field of statistical mechanics and condensed matter physics. In these processes, a particle moves in a lattice by making a transition to a neighboring site, which is chosen according to a probability distribution that depends on the energy of the particle and the energy of its surroundings.

    The HB-MB process was introduced by Bortz, Kalos, and Lebowitz in 1975 as a way to simulate the dynamics of interacting particles in a lattice at thermal equilibrium. The method has been used to study a variety of physical phenomena, including phase transitions, critical behavior, and transport properties.

    References:

    Bortz, A. B., Kalos, M. H., & Lebowitz, J. L. (1975). A new algorithm for Monte Carlo simulation of Ising spin systems. Journal of Computational Physics, 17(1), 10-18.

    Binder, K.,
https://python.langchain.com/docs/integrations/retrievers/arxiv
6a6ad270345a-6
    & Heermann, D. W. (2010). Monte Carlo simulation in statistical physics: an introduction. Springer Science & Business Media.
https://python.langchain.com/docs/integrations/retrievers/arxiv
bedfadc9e0f7-0
Amazon Kendra | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/retrievers/amazon_kendra_retriever
bedfadc9e0f7-1
Amazon Kendra

Amazon Kendra is an intelligent search service provided by Amazon Web Services (AWS). It utilizes advanced natural language processing (NLP) and machine learning algorithms to enable powerful search capabilities across various data sources within an organization. Kendra is designed to help users find the information they need quickly and accurately, improving productivity and decision-making.

With Kendra, users can search across a wide range of content types, including documents, FAQs, knowledge bases, manuals, and websites. It supports multiple languages and can understand complex queries, synonyms, and contextual meanings to provide highly relevant search results.

Using the Amazon Kendra Index Retriever

%pip install boto3

import boto3
from langchain.retrievers import AmazonKendraRetriever

Create New Retriever

retriever = AmazonKendraRetriever(index_id="c0806df7-e76b-4bce-9b5c-d5582f6b1a03")

Now you can use retrieved documents from the Kendra index:

retriever.get_relevant_documents("what is langchain")
https://python.langchain.com/docs/integrations/retrievers/amazon_kendra_retriever
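The retriever can likewise back a QA chain. A minimal sketch, assuming AWS credentials are configured for boto3 and reusing the retriever created above (the LLM choice is illustrative):

from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# retriever is the AmazonKendraRetriever constructed above
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo"),
    retriever=retriever,
)
qa.run("what is langchain")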
055810621507-0
Vector stores | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/vectorstores/
055810621507-1
Vector stores

📄 Alibaba Cloud OpenSearch: Alibaba Cloud Opensearch is a one-stop platform to develop intelligent search services. OpenSearch was built on the large-scale distributed search engine developed by Alibaba. OpenSearch serves more than 500 business cases in Alibaba Group and thousands of Alibaba Cloud customers. OpenSearch helps develop search services in different search scenarios, including e-commerce, O2O, multimedia, the content industry, communities and forums, and big data query in enterprises.

📄 AnalyticDB: AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online.

📄 Annoy: Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also
https://python.langchain.com/docs/integrations/vectorstores/
055810621507-2
creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data.

📄 Atlas: Atlas is a platform for interacting with both small and internet scale unstructured datasets by Nomic.

📄 AwaDB: AwaDB is an AI Native database for the search and storage of embedding vectors used by LLM Applications.

📄 Azure Cognitive Search: Azure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.

📄 Cassandra: Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database.

📄 Chroma: Chroma is an AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0.

📄 Clarifai: Clarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference. A Clarifai application can be used as a vector database after uploading inputs.

📄 ClickHouse Vector Search: ClickHouse is the fastest and most resource efficient open-source database for real-time apps and analytics with full SQL support and a wide range of functions to assist users in writing analytical queries. Lately added data structures and distance search functions (like L2Distance) as well as approximate nearest neighbor search indexes enable ClickHouse to be used as a high performance and scalable vector database to store and search vectors with SQL.

📄 Activeloop's Deep Lake: Activeloop's
https://python.langchain.com/docs/integrations/vectorstores/
055810621507-3
Deep Lake is a Multi-Modal Vector Store that stores embeddings and their metadata including text, jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes.

📄 DocArrayHnswSearch: DocArrayHnswSearch is a lightweight Document Index implementation provided by Docarray that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite.

📄 DocArrayInMemorySearch: DocArrayInMemorySearch is a document index provided by Docarray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server.

📄 ElasticSearch: Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.

📄 FAISS: Facebook AI Similarity Search (Faiss) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning.

📄 Hologres: Hologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time.

📄 LanceDB: LanceDB is an open-source database for vector-search built with persistent storage, which greatly simplifies retrieval,
https://python.langchain.com/docs/integrations/vectorstores/
055810621507-4
filtering and management of embeddings. Fully open source.

📄 Marqo: This notebook shows how to use functionality related to the Marqo vectorstore.

📄 MatchingEngine: This notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database.

📄 Milvus: Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.

📄 MongoDB Atlas: MongoDB Atlas is a fully-managed cloud database available in AWS, Azure, and GCP. It now has support for native Vector Search on your MongoDB document data.

📄 MyScale: MyScale is a cloud-based database optimized for AI applications and solutions, built on the open-source ClickHouse.

📄 OpenSearch: OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene.

📄 pg_embedding: pgembedding is an open-source vector similarity search for Postgres that uses Hierarchical Navigable Small Worlds for approximate nearest neighbor search.

📄 PGVector: PGVector is an open-source vector similarity search for Postgres.

📄 Pinecone: Pinecone is a vector database with broad functionality.

📄 Qdrant: Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an
https://python.langchain.com/docs/integrations/vectorstores/
055810621507-5
additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications.

📄 Redis: Redis (Remote Dictionary Server) is an in-memory data structure store, used as a distributed, in-memory key–value database, cache and message broker, with optional durability.

📄 Rockset: Rockset is a real-time analytics database service for serving low latency, high concurrency analytical queries at scale. It builds a Converged Index™ on structured and semi-structured data with an efficient store for vector embeddings. Its support for running SQL on schemaless data makes it a perfect choice for running vector search with metadata filters.

📄 SingleStoreDB: SingleStoreDB is a high-performance distributed SQL database that supports deployment both in the cloud and on-premises. It provides vector storage, and vector functions including dotproduct and euclideandistance, thereby supporting AI applications that require text similarity matching.

📄 scikit-learn: scikit-learn is an open source collection of machine learning algorithms, including some implementations of the k nearest neighbors. SKLearnVectorStore wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format.

📄 StarRocks: StarRocks is a High-Performance Analytical Database.

📄 Supabase (Postgres): Supabase is an open source Firebase alternative. Supabase is built on top of PostgreSQL, which offers strong SQL querying capabilities and enables a simple interface with already-existing tools and frameworks.
https://python.langchain.com/docs/integrations/vectorstores/
055810621507-6
📄 Tair: Tair is a cloud native in-memory database service developed by Alibaba Cloud.

📄 Tigris: Tigris is an open source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications.

📄 Typesense: Typesense is an open source, in-memory search engine, that you can either self-host or run on Typesense Cloud.

📄 Vectara: Vectara is an API platform for building LLM-powered applications. It provides a simple-to-use API for document indexing and query that is managed by Vectara and is optimized for performance and accuracy.

📄 Weaviate: Weaviate is an open-source vector database. It allows you to store data objects and vector embeddings from your favorite ML-models, and scale seamlessly into billions of data objects.

📄 Zilliz: Zilliz Cloud is a fully managed service on cloud for LF AI Milvus®.
https://python.langchain.com/docs/integrations/vectorstores/
9051c1cc6979-0
Pinecone | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/vectorstores/pinecone
9051c1cc6979-1
Pinecone

Pinecone is a vector database with broad functionality.

This notebook shows how to use functionality related to the Pinecone vector database.

To use Pinecone, you must have an API key.
https://python.langchain.com/docs/integrations/vectorstores/pinecone
9051c1cc6979-2
Here are the installation instructions.

pip install pinecone-client openai tiktoken

import os
import getpass

PINECONE_API_KEY = getpass.getpass("Pinecone API Key:")
PINECONE_ENV = getpass.getpass("Pinecone Environment:")

We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Pinecone
from langchain.document_loaders import TextLoader

loader = TextLoader("../../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()

import pinecone

# initialize pinecone
pinecone.init(
    api_key=PINECONE_API_KEY,  # find at app.pinecone.io
    environment=PINECONE_ENV,  # next to api key in console
)

index_name = "langchain-demo"

docsearch = Pinecone.from_documents(docs, embeddings, index_name=index_name)

# if you already have an index, you can load it like this
# docsearch = Pinecone.from_existing_index(index_name, embeddings)

query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)

Adding More Text to an Existing Index

More text can be embedded and upserted into an existing Pinecone index using the add_texts function. Note that add_texts expects a list of strings:

index = pinecone.Index("langchain-demo")
vectorstore = Pinecone(index, embeddings.embed_query, "text")
vectorstore.add_texts(["More text!"])
https://python.langchain.com/docs/integrations/vectorstores/pinecone
9051c1cc6979-3
"text")vectorstore.add_texts("More text!")Maximal Marginal Relevance Searches​In addition to using similarity search in the retriever object, you can also use mmr as retriever.retriever = docsearch.as_retriever(search_type="mmr")matched_docs = retriever.get_relevant_documents(query)for i, d in enumerate(matched_docs): print(f"\n## Document {i}\n") print(d.page_content)Or use max_marginal_relevance_search directly:found_docs = docsearch.max_marginal_relevance_search(query, k=2, fetch_k=10)for i, doc in enumerate(found_docs): print(f"{i + 1}.", doc.page_content, "\n")PreviousPGVectorNextQdrantAdding More Text to an Existing IndexMaximal Marginal Relevance SearchesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
https://python.langchain.com/docs/integrations/vectorstores/pinecone
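Beyond MMR, the index can also be exposed as a plain similarity retriever; capping k via search_kwargs is a common VectorStore retriever option and is assumed to apply here as well:

# plain similarity retriever over the same Pinecone index
retriever = docsearch.as_retriever(search_kwargs={"k": 3})
matched_docs = retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)
for d in matched_docs:
    print(d.page_content[:80])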
68e2d9ca3c5f-0
DocArrayInMemorySearch | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/vectorstores/docarray_in_memory
68e2d9ca3c5f-1
DocArrayInMemorySearch

DocArrayInMemorySearch is a document index provided by Docarray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server.

This notebook shows how to use functionality related to the DocArrayInMemorySearch.

Setup

Uncomment the below cells to install docarray and get/set your OpenAI api key if you haven't already done so.

# !pip install "docarray"

# Get an OpenAI token: https://platform.openai.com/account/api-keys
# import os
# from getpass import getpass
# OPENAI_API_KEY = getpass()
# os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY

Using DocArrayInMemorySearch

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.document_loaders import TextLoader
https://python.langchain.com/docs/integrations/vectorstores/docarray_in_memory
68e2d9ca3c5f-2
documents = TextLoader("../../../state_of_the_union.txt").load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
db = DocArrayInMemorySearch.from_documents(docs, embeddings)

Similarity search

query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)

    Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

    Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

    One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

    And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

Similarity search with score

The returned distance score is cosine distance. Therefore, a lower score is better.

docs = db.similarity_search_with_score(query)
docs[0]

    (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act.
https://python.langchain.com/docs/integrations/vectorstores/docarray_in_memory
68e2d9ca3c5f-3
Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={}), 0.8154190158347903)
https://python.langchain.com/docs/integrations/vectorstores/docarray_in_memory
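As with the other vector stores in this section, the in-memory index can be wrapped as a retriever for use in chains; a short sketch reusing the db built above:

# expose the in-memory index through the standard retriever interface
retriever = db.as_retriever()
retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)[0]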
65a64c14a093-0
Supabase (Postgres) | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/vectorstores/supabase
65a64c14a093-1
Supabase (Postgres)

Supabase is an open source Firebase alternative. Supabase is built on top of PostgreSQL, which offers strong SQL querying capabilities and enables a simple interface with already-existing tools and frameworks. PostgreSQL, also known as Postgres, is a free and open-source relational database management system (RDBMS) emphasizing extensibility and SQL compliance.

This notebook shows how to use Supabase and pgvector as your VectorStore.

To run this notebook, please ensure:

- the pgvector extension is enabled
- you have installed the supabase-py package
- you have created a match_documents function in your database
- you have a documents table in your public schema similar to the one below.

The following function determines cosine similarity, but you can adjust it to your needs.
https://python.langchain.com/docs/integrations/vectorstores/supabase
65a64c14a093-2
-- Enable the pgvector extension to work with embedding vectors
create extension vector;

-- Create a table to store your documents
create table documents (
    id bigserial primary key,
    content text,            -- corresponds to Document.pageContent
    metadata jsonb,          -- corresponds to Document.metadata
    embedding vector(1536)   -- 1536 works for OpenAI embeddings, change if needed
);

CREATE FUNCTION match_documents(query_embedding vector(1536), match_count int)
    RETURNS TABLE(
        id uuid,
        content text,
        metadata jsonb,
        -- we return matched vectors to enable maximal marginal relevance searches
        embedding vector(1536),
        similarity float)
    LANGUAGE plpgsql
    AS $$
    # variable_conflict use_column
    BEGIN
        RETURN query
        SELECT
            id,
            content,
            metadata,
            embedding,
https://python.langchain.com/docs/integrations/vectorstores/supabase
65a64c14a093-3
            1 - (documents.embedding <=> query_embedding) AS similarity
        FROM documents
        ORDER BY documents.embedding <=> query_embedding
        LIMIT match_count;
    END;
    $$;

# with pip
pip install supabase

# with conda
# !conda install -c conda-forge supabase

We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.

import os
import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
os.environ["SUPABASE_URL"] = getpass.getpass("Supabase URL:")
os.environ["SUPABASE_SERVICE_KEY"] = getpass.getpass("Supabase Service Key:")

# If you're storing your Supabase and OpenAI API keys in a .env file, you can load them with dotenv
from dotenv import load_dotenv

load_dotenv()

import os
from supabase.client import Client, create_client

supabase_url = os.environ.get("SUPABASE_URL")
supabase_key = os.environ.get("SUPABASE_SERVICE_KEY")
supabase: Client = create_client(supabase_url, supabase_key)

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import SupabaseVectorStore
from langchain.document_loaders import TextLoader
https://python.langchain.com/docs/integrations/vectorstores/supabase
65a64c14a093-4
= TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()# We're using the default `documents` table here. You can modify this by passing in a `table_name` argument to the `from_documents` method.vector_store = SupabaseVectorStore.from_documents(docs, embeddings, client=supabase)query = "What did the president say about Ketanji Brown Jackson"matched_docs = vector_store.similarity_search(query)print(matched_docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity search with score​The returned distance score is cosine distance. Therefore, a lower score is better.matched_docs = vector_store.similarity_search_with_relevance_scores(query)matched_docs[0] (Document(page_content='Tonight. I call on the
https://python.langchain.com/docs/integrations/vectorstores/supabase
65a64c14a093-5
    (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}), 0.802509746274066)

Retriever options

This section goes over different options for how to use SupabaseVectorStore as a retriever.

Maximal Marginal Relevance Searches

In addition to using similarity search in the retriever object, you can also use mmr.

retriever = vector_store.as_retriever(search_type="mmr")
matched_docs = retriever.get_relevant_documents(query)
for i, d in enumerate(matched_docs):
    print(f"\n## Document {i}\n")
    print(d.page_content)

    ## Document 0

    Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re
https://python.langchain.com/docs/integrations/vectorstores/supabase
65a64c14a093-6
at it, pass the Disclose Act so Americans can know who is funding our elections.

    Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

    One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

    And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

    ## Document 1

    One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more.

    When they came home, many of the world’s fittest and best trained warriors were never the same.

    Headaches. Numbness. Dizziness.

    A cancer that would put them in a flag-draped coffin.

    I know.

    One of those soldiers was my son Major Beau Biden.

    We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops.

    But I’m committed
https://python.langchain.com/docs/integrations/vectorstores/supabase
65a64c14a093-7
to finding out everything we can.

    Committed to military families like Danielle Robinson from Ohio.

    The widow of Sergeant First Class Heath Robinson.

    He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq.

    Stationed near Baghdad, just yards from burn pits the size of football fields.

    Heath’s widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter.

    ## Document 2

    And I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers.

    Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world.

    America will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies.

    These steps will help blunt gas prices here at home. And I know the news about what’s happening can seem alarming.

    But I want you to know that we are going to be okay.

    When the history of this era is written Putin’s war on Ukraine will have left Russia weaker and the rest of the world stronger.

    While it shouldn’t have taken
https://python.langchain.com/docs/integrations/vectorstores/supabase
65a64c14a093-8
something so terrible for people around the world to see what’s at stake now everyone sees it clearly.

    ## Document 3

    We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together.

    I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera.

    They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun.

    Officer Mora was 27 years old.

    Officer Rivera was 22.

    Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers.

    I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.

    I’ve worked on these issues a long time.

    I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
https://python.langchain.com/docs/integrations/vectorstores/supabase
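If the documents table has already been populated (for example by an earlier run of from_documents), re-embedding is unnecessary. A sketch, assuming the SupabaseVectorStore constructor accepts the client, embedding function, and table name directly:

from langchain.vectorstores import SupabaseVectorStore

# connect to an already-populated table instead of re-ingesting
vector_store = SupabaseVectorStore(
    client=supabase,
    embedding=embeddings,
    table_name="documents",  # the table created in the SQL setup above
)
matched_docs = vector_store.similarity_search(
    "What did the president say about Ketanji Brown Jackson"
)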
b3f08286718b-0
Vectara | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/vectorstores/vectara
b3f08286718b-1
Vectara

Vectara is an API platform for building LLM-powered applications. It provides a simple-to-use API for document indexing and query that is managed by Vectara and is optimized for performance and accuracy.

This notebook shows how to use functionality related to the Vectara vector database or the Vectara retriever. See the Vectara API documentation for more information on how to use the API.

import os
from langchain.embeddings import FakeEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Vectara
from langchain.document_loaders import TextLoader

Connecting to Vectara from LangChain

The Vectara API provides simple API endpoints for indexing and querying, which is encapsulated in the Vectara integration.
https://python.langchain.com/docs/integrations/vectorstores/vectara
b3f08286718b-2
First let's ingest the documents using the from_documents() method:

loader = TextLoader("../../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

vectara = Vectara.from_documents(
    docs,
    embedding=FakeEmbeddings(size=768),
    doc_metadata={"speech": "state-of-the-union"},
)

Vectara's indexing API provides a file upload API where the file is handled directly by Vectara - pre-processed, chunked optimally and added to the Vectara vector store.
https://python.langchain.com/docs/integrations/vectorstores/vectara
b3f08286718b-3
To use this, we added the add_files() method (and from_files()). Let's see this in action. We pick two PDF documents to upload:

- The "I have a dream" speech by Dr. King
- Churchill's "We Shall Fight on the Beaches" speech

import tempfile
import urllib.request

urls = [
    [
        "https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf",
        "I-have-a-dream",
    ],
    [
        "https://www.parkwayschools.net/cms/lib/MO01931486/Centricity/Domain/1578/Churchill_Beaches_Speech.pdf",
        "we shall fight on the beaches",
    ],
]
files_list = []
for url, _ in urls:
    name = tempfile.NamedTemporaryFile().name
    urllib.request.urlretrieve(url, name)
    files_list.append(name)

docsearch: Vectara = Vectara.from_files(
    files=files_list,
    embedding=FakeEmbeddings(size=768),
    metadatas=[{"url": url, "speech": title} for url, title in urls],
)

Similarity search

The simplest scenario for using Vectara is to perform a similarity search.

query = "What did the president say about Ketanji Brown Jackson"
found_docs = vectara.similarity_search(
    query, n_sentence_context=0, filter="doc.speech = 'state-of-the-union'"
)
print(found_docs[0].page_content)

    Tonight. I call on the Senate to: Pass the Freedom to Vote
https://python.langchain.com/docs/integrations/vectorstores/vectara
b3f08286718b-4
Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

    Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

    One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

    And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

Similarity search with score

Sometimes we might want to perform the search, but also obtain a relevancy score to know how good a particular result is.

query = "What did the president say about Ketanji Brown Jackson"
found_docs = vectara.similarity_search_with_score(
    query, filter="doc.speech = 'state-of-the-union'"
)
document, score = found_docs[0]
print(document.page_content)
print(f"\nScore: {score}")

    Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

    Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar,
https://python.langchain.com/docs/integrations/vectorstores/vectara
b3f08286718b-5
and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

    One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

    And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

    Score: 0.4917977

Now let's do a similar search for content in the files we uploaded:

query = "We must forever conduct our struggle"
found_docs = vectara.similarity_search_with_score(
    query, filter="doc.speech = 'I-have-a-dream'"
)
print(found_docs[0])
print(found_docs[1])

    (Document(page_content='We must forever conduct our struggle on the high plane of dignity and discipline.', metadata={'section': '1'}), 0.7962591)
    (Document(page_content='We must not allow our\ncreative protests to degenerate into physical violence. . . .', metadata={'section': '1'}), 0.25983918)

Vectara as a Retriever

Vectara, like all the other vector stores, can also be used as a LangChain Retriever:

retriever = vectara.as_retriever()
retriever

    VectaraRetriever(vectorstore=<langchain.vectorstores.vectara.Vectara object at 0x12772caf0>, search_type='similarity', search_kwargs={'lambda_val': 0.025, 'k': 5, 'filter': '', 'n_sentence_context': '0'})
https://python.langchain.com/docs/integrations/vectorstores/vectara
b3f08286718b-6
'0'})query = "What did the president say about Ketanji Brown Jackson"retriever.get_relevant_documents(query)[0] Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})PreviousTypesenseNextWeaviateConnecting to Vectara from LangChainSimilarity searchSimilarity search with scoreVectara as a RetrieverCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
https://python.langchain.com/docs/integrations/vectorstores/vectara
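The search_kwargs shown in the retriever repr above can also be overridden at construction time, for example to combine a metadata filter with a smaller k. A sketch:

# retriever restricted to the state-of-the-union speech, returning 2 results
retriever = vectara.as_retriever(
    search_kwargs={"k": 2, "filter": "doc.speech = 'state-of-the-union'"}
)
retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)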
75b2d79c5eaf-0
FAISS | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/vectorstores/faiss
75b2d79c5eaf-1
FAISS

Facebook AI Similarity Search (Faiss) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning. See the Faiss documentation.

This notebook shows how to use functionality related to the FAISS vector database.

#!pip install faiss
# OR
pip install faiss-cpu

We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.

import os
import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

# Uncomment the following line if you need to initialize FAISS with no AVX2 optimization
# os.environ['FAISS_NO_AVX2'] = '1'

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader
https://python.langchain.com/docs/integrations/vectorstores/faiss
75b2d79c5eaf-2
loader = TextLoader("../../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(docs, embeddings)

query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)

    Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

    Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

    One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

    And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

Similarity Search with score

There are some FAISS-specific methods. One of them is similarity_search_with_score, which allows you to return not only the documents but also the distance score of the query to them.
https://python.langchain.com/docs/integrations/vectorstores/faiss
75b2d79c5eaf-3
The returned distance score is L2 distance. Therefore, a lower score is better.

docs_and_scores = db.similarity_search_with_score(query)
docs_and_scores[0]

    (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}), 0.36913747)

It is also possible to do a search for documents similar to a given embedding vector using similarity_search_by_vector, which accepts an embedding vector as a parameter instead of a string.

embedding_vector = embeddings.embed_query(query)
docs_and_scores = db.similarity_search_by_vector(embedding_vector)

Saving and loading

You can also save and load a FAISS index. This is useful so you don't have to recreate it every time you use it.

db.save_local("faiss_index")
new_db = FAISS.load_local("faiss_index", embeddings)
docs = new_db.similarity_search(query)
docs[0]

    Document(page_content='Tonight.
https://python.langchain.com/docs/integrations/vectorstores/faiss
75b2d79c5eaf-4
I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})

Merging

You can also merge two FAISS vectorstores.

db1 = FAISS.from_texts(["foo"], embeddings)
db2 = FAISS.from_texts(["bar"], embeddings)

db1.docstore._dict

    {'068c473b-d420-487a-806b-fb0ccea7f711': Document(page_content='foo', metadata={})}

db2.docstore._dict

    {'807e0c63-13f6-4070-9774-5c6f0fbb9866': Document(page_content='bar', metadata={})}

db1.merge_from(db2)
db1.docstore._dict

    {'068c473b-d420-487a-806b-fb0ccea7f711': Document(page_content='foo', metadata={}),
     '807e0c63-13f6-4070-9774-5c6f0fbb9866':
https://python.langchain.com/docs/integrations/vectorstores/faiss
75b2d79c5eaf-5
     Document(page_content='bar', metadata={})}

Similarity Search with filtering

The FAISS vectorstore can also support filtering. Since FAISS does not natively support filtering, we have to do it manually: we first fetch more results than k and then filter them. You can filter the documents based on metadata. You can also set the fetch_k parameter when calling any search method to set how many documents you want to fetch before filtering. Here is a small example:

from langchain.schema import Document

list_of_documents = [
    Document(page_content="foo", metadata=dict(page=1)),
    Document(page_content="bar", metadata=dict(page=1)),
    Document(page_content="foo", metadata=dict(page=2)),
    Document(page_content="barbar", metadata=dict(page=2)),
    Document(page_content="foo", metadata=dict(page=3)),
    Document(page_content="bar burr", metadata=dict(page=3)),
    Document(page_content="foo", metadata=dict(page=4)),
    Document(page_content="bar bruh", metadata=dict(page=4)),
]
db = FAISS.from_documents(list_of_documents, embeddings)

results_with_scores = db.similarity_search_with_score("foo")
for doc, score in results_with_scores:
    print(f"Content: {doc.page_content}, Metadata: {doc.metadata}, Score: {score}")

    Content: foo, Metadata: {'page': 1}, Score: 5.159960813797904e-15
    Content: foo, Metadata: {'page': 2}, Score: 5.159960813797904e-15
    Content: foo, Metadata: {'page': 3}, Score: 5.159960813797904e-15
https://python.langchain.com/docs/integrations/vectorstores/faiss
75b2d79c5eaf-6
    Content: foo, Metadata: {'page': 4}, Score: 5.159960813797904e-15

Now we make the same query call, but we filter for only page = 1:

results_with_scores = db.similarity_search_with_score("foo", filter=dict(page=1))
for doc, score in results_with_scores:
    print(f"Content: {doc.page_content}, Metadata: {doc.metadata}, Score: {score}")

    Content: foo, Metadata: {'page': 1}, Score: 5.159960813797904e-15
    Content: bar, Metadata: {'page': 1}, Score: 0.3131446838378906

The same thing can be done with max_marginal_relevance_search as well:

results = db.max_marginal_relevance_search("foo", filter=dict(page=1))
for doc in results:
    print(f"Content: {doc.page_content}, Metadata: {doc.metadata}")

    Content: foo, Metadata: {'page': 1}
    Content: bar, Metadata: {'page': 1}

Here is an example of how to set the fetch_k parameter when calling similarity_search. Usually you would want the fetch_k parameter to be >> the k parameter. This is because the fetch_k parameter is the number of documents that will be fetched before filtering. If you set fetch_k to a low number, you might not get enough documents to filter from.

results = db.similarity_search("foo", filter=dict(page=1), k=1, fetch_k=4)
for doc in results:
    print(f"Content: {doc.page_content}, Metadata: {doc.metadata}")

    Content: foo, Metadata: {'page': 1}
https://python.langchain.com/docs/integrations/vectorstores/faiss
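The fetch-then-filter pattern also carries over to the retriever interface: it is assumed here that fetch_k and filter can be passed through search_kwargs so an MMR retriever draws from a larger candidate pool. A minimal sketch over the same db:

# MMR retriever: fetch 10 candidates, return the 2 most diverse page-1 docs
retriever = db.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 2, "fetch_k": 10, "filter": dict(page=1)},
)
retriever.get_relevant_documents("foo")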