# Memory

Source: https://python.langchain.com/docs/integrations/memory/

Integrations for storing chat message history and conversational memory:

- **Cassandra Chat Message History**: Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database, well suited for storing large amounts of data.
- **Dynamodb Chat Message History**: This notebook goes over how to use DynamoDB to store chat message history.
- **Entity Memory with SQLite storage**: In this walkthrough we'll create a simple conversation chain which uses ConversationEntityMemory backed by a SQLiteEntityStore.
- **Momento Chat Message History**: This notebook goes over how to use Momento Cache to store chat message history using the MomentoChatMessageHistory class. See the Momento docs for more detail on how to get set up with Momento.
- **Mongodb Chat Message History**: This notebook goes over how to use MongoDB to store chat message history.
- **Motörhead Memory**: Motörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications.
- **Motörhead Memory (Managed)**: Motörhead is a memory server implemented in Rust; this page covers the managed (hosted) version.
- **Postgres Chat Message History**: This notebook goes over how to use Postgres to store chat message history.
- **Redis Chat Message History**: This notebook goes over how to use Redis to store chat message history.
- **Zep Memory**: REACT Agent Chat Message History with Zep - A long-term memory store for LLM applications.
# Motörhead Memory (Managed)

Source: https://python.langchain.com/docs/integrations/memory/motorhead_memory_managed
Motörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications.

## Setup

See instructions at Motörhead for running the managed version of Motörhead. You can retrieve your `api_key` and `client_id` by creating an account on Metal.

```python
from langchain.memory.motorhead_memory import MotorheadMemory
from langchain import OpenAI, LLMChain, PromptTemplate

template = """You are a chatbot having a conversation with a human.

{chat_history}
Human: {human_input}
AI:"""

prompt = PromptTemplate(
    input_variables=["chat_history", "human_input"], template=template
)
memory = MotorheadMemory(
    api_key="YOUR_API_KEY",
    client_id="YOUR_CLIENT_ID",
    session_id="testing-1",
    memory_key="chat_history",
)

await memory.init()  # loads previous state from Motörhead 🤘

llm_chain = LLMChain(
    llm=OpenAI(),
    prompt=prompt,
    verbose=True,
    memory=memory,
)

llm_chain.run("hi im bob")
```

```
> Entering new LLMChain chain...
Prompt after formatting:
You are a chatbot having a conversation with a human.

Human: hi im bob
AI:

> Finished chain.
' Hi Bob, nice to meet you! How are you doing today?'
```

```python
llm_chain.run("whats my name?")
```

```
> Entering new LLMChain chain...
Prompt after formatting:
You are a chatbot having a conversation with a human.

Human: hi im bob
AI: Hi Bob, nice to meet you! How are you doing today?
Human: whats my name?
AI:

> Finished chain.
' You said your name is Bob. Is that correct?'
```

```python
llm_chain.run("whats for dinner?")
```

```
> Entering new LLMChain chain...
Prompt after formatting:
You are a chatbot having a conversation with a human.

Human: hi im bob
AI: Hi Bob, nice to meet you! How are you doing today?
Human: whats my name?
AI: You said your name is Bob. Is that correct?
Human: whats for dinner?
AI:

> Finished chain.
" I'm sorry, I'm not sure what you're asking. Could you please rephrase your question?"
```
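Note that `memory.init()` is a coroutine; the notebook above awaits it at the top level, which only works in environments with a running event loop. A minimal sketch for a plain Python script, using the standard-library `asyncio` and the same placeholder credentials as above:

```python
import asyncio

from langchain.memory.motorhead_memory import MotorheadMemory


async def main():
    memory = MotorheadMemory(
        api_key="YOUR_API_KEY",  # placeholder credentials, as in the snippet above
        client_id="YOUR_CLIENT_ID",
        session_id="testing-1",
        memory_key="chat_history",
    )
    await memory.init()  # loads previous session state from Motörhead


asyncio.run(main())
```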
# Momento Chat Message History

Source: https://python.langchain.com/docs/integrations/memory/momento_chat_message_history
This notebook goes over how to use Momento Cache to store chat message history using the `MomentoChatMessageHistory` class. See the Momento docs for more detail on how to get set up with Momento.

Note that, by default, we will create a cache if one with the given name doesn't already exist.

You'll need to get a Momento auth token to use this class. This can either be passed in to a `momento.CacheClient` if you'd like to instantiate that directly, as a named parameter `auth_token` to `MomentoChatMessageHistory.from_client_params`, or it can just be set as an environment variable `MOMENTO_AUTH_TOKEN`.

```python
from datetime import timedelta

from langchain.memory import MomentoChatMessageHistory

session_id = "foo"
cache_name = "langchain"
ttl = timedelta(days=1)
history = MomentoChatMessageHistory.from_client_params(
    session_id,
    cache_name,
    ttl,
)

history.add_user_message("hi!")
history.add_ai_message("whats up?")
history.messages
```

```
[HumanMessage(content='hi!', additional_kwargs={}, example=False),
 AIMessage(content='whats up?', additional_kwargs={}, example=False)]
```
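If you prefer not to rely on the `MOMENTO_AUTH_TOKEN` environment variable, the token can be supplied directly, as described above. A minimal sketch, assuming `auth_token` is accepted as a keyword argument to `from_client_params` and using a placeholder token:

```python
from datetime import timedelta

from langchain.memory import MomentoChatMessageHistory

history = MomentoChatMessageHistory.from_client_params(
    "foo",                 # session_id
    "langchain",           # cache_name
    timedelta(days=1),     # ttl
    auth_token="MY_MOMENTO_AUTH_TOKEN",  # hypothetical placeholder token
)
```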
# Redis Chat Message History

Source: https://python.langchain.com/docs/integrations/memory/redis_chat_message_history

This notebook goes over how to use Redis to store chat message history.

```python
from langchain.memory import RedisChatMessageHistory

history = RedisChatMessageHistory("foo")

history.add_user_message("hi!")
history.add_ai_message("whats up?")
history.messages
```

```
[AIMessage(content='whats up?', additional_kwargs={}),
 HumanMessage(content='hi!', additional_kwargs={})]
```
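The example above assumes Redis is running on localhost with default settings. A minimal sketch for pointing the history at a specific Redis instance, assuming the constructor accepts a `url` keyword and (in LangChain versions of this era) an optional `ttl` in seconds:

```python
from langchain.memory import RedisChatMessageHistory

history = RedisChatMessageHistory(
    session_id="foo",
    url="redis://:mypassword@redis-host:6379/0",  # hypothetical connection URL
    ttl=600,  # assumed parameter: expire stored messages after 10 minutes
)
history.add_user_message("hi!")
```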
# Dynamodb Chat Message History

Source: https://python.langchain.com/docs/integrations/memory/dynamodb_chat_message_history
This notebook goes over how to use DynamoDB to store chat message history.

First make sure you have correctly configured the AWS CLI, and that you have installed `boto3`.

Next, create the DynamoDB table where we will be storing messages:

```python
import boto3

# Get the service resource.
dynamodb = boto3.resource("dynamodb")

# Create the DynamoDB table.
table = dynamodb.create_table(
    TableName="SessionTable",
    KeySchema=[{"AttributeName": "SessionId", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "SessionId", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)

# Wait until the table exists.
table.meta.client.get_waiter("table_exists").wait(TableName="SessionTable")

# Print out some data about the table.
print(table.item_count)
```

```
0
```

## DynamoDBChatMessageHistory

```python
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="0")

history.add_user_message("hi!")
history.add_ai_message("whats up?")
history.messages
```

```
[HumanMessage(content='hi!', additional_kwargs={}, example=False),
 AIMessage(content='whats up?', additional_kwargs={}, example=False)]
```

## DynamoDBChatMessageHistory with Custom Endpoint URL

Sometimes it is useful to specify the URL of the AWS endpoint to connect to, for instance when you are running locally against Localstack. For those cases you can specify the URL via the `endpoint_url` parameter in the constructor.

```python
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(
    table_name="SessionTable",
    session_id="0",
    endpoint_url="http://localhost.localstack.cloud:4566",
)
```

## Agent with DynamoDB Memory

```python
from getpass import getpass

from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.utilities import PythonREPL

message_history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="1")
memory = ConversationBufferMemory(
    memory_key="chat_history", chat_memory=message_history, return_messages=True
)

python_repl = PythonREPL()

# You can create the tool to pass to an agent
tools = [
    Tool(
        name="python_repl",
        description="A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.",
        func=python_repl.run,
    )
]

llm = ChatOpenAI(temperature=0)
agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
)
```

```python
agent_chain.run(input="Hello!")
```

```
> Entering new AgentExecutor chain...
{
    "action": "Final Answer",
    "action_input": "Hello! How can I assist you today?"
}

> Finished chain.
'Hello! How can I assist you today?'
```

```python
agent_chain.run(input="Who owns Twitter?")
```

```
> Entering new AgentExecutor chain...
{
    "action": "python_repl",
    "action_input": "import requests\nfrom bs4 import BeautifulSoup\n\nurl = 'https://en.wikipedia.org/wiki/Twitter'\nresponse = requests.get(url)\nsoup = BeautifulSoup(response.content, 'html.parser')\nowner = soup.find('th', text='Owner').find_next_sibling('td').text.strip()\nprint(owner)"
}
Observation: X Corp. (2023–present)Twitter, Inc. (2006–2023)
Thought:{
    "action": "Final Answer",
    "action_input": "X Corp. (2023–present)Twitter, Inc. (2006–2023)"
}

> Finished chain.
'X Corp. (2023–present)Twitter, Inc. (2006–2023)'
```

```python
agent_chain.run(input="My name is Bob.")
```

```
> Entering new AgentExecutor chain...
{
    "action": "Final Answer",
    "action_input": "Hello Bob! How can I assist you today?"
}

> Finished chain.
'Hello Bob! How can I assist you today?'
```

```python
agent_chain.run(input="Who am I?")
```

```
> Entering new AgentExecutor chain...
{
    "action": "Final Answer",
    "action_input": "Your name is Bob."
}

> Finished chain.
'Your name is Bob.'
```
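If you are only experimenting, you may want to remove the table (and all stored sessions) afterwards. A minimal cleanup sketch using boto3, not part of the original notebook:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SessionTable")
table.delete()  # deletes the table and every chat session stored in it
```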
# Postgres Chat Message History

Source: https://python.langchain.com/docs/integrations/memory/postgres_chat_message_history

This notebook goes over how to use Postgres to store chat message history.

```python
from langchain.memory import PostgresChatMessageHistory

history = PostgresChatMessageHistory(
    connection_string="postgresql://postgres:mypassword@localhost/chat_history",
    session_id="foo",
)

history.add_user_message("hi!")
history.add_ai_message("whats up?")
history.messages
```
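As with the other chat-message-history backends in this section, the history object can back a `ConversationBufferMemory`. A minimal sketch following the same pattern used on the DynamoDB page:

```python
from langchain.memory import ConversationBufferMemory, PostgresChatMessageHistory

history = PostgresChatMessageHistory(
    connection_string="postgresql://postgres:mypassword@localhost/chat_history",
    session_id="foo",
)
memory = ConversationBufferMemory(
    memory_key="chat_history",
    chat_memory=history,  # persist the conversation in Postgres
    return_messages=True,
)
```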
# Cassandra Chat Message History

Source: https://python.langchain.com/docs/integrations/memory/cassandra_chat_message_history
Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database, well suited for storing large amounts of data. Cassandra is a good choice for storing chat message history because it is easy to scale and can handle a large number of writes.

This notebook goes over how to use Cassandra to store chat message history.

To run this notebook you need either a running Cassandra cluster or a DataStax Astra DB instance running in the cloud (you can get one for free at datastax.com). Check cassio.org for more information.

```
pip install "cassio>=0.0.7"
```

## Please provide database connection parameters and secrets

```python
import getpass
import os

database_mode = (input("\n(C)assandra or (A)stra DB? ")).upper()

keyspace_name = input("\nKeyspace name? ")

if database_mode == "A":
    ASTRA_DB_APPLICATION_TOKEN = getpass.getpass('\nAstra DB Token ("AstraCS:...") ')
    ASTRA_DB_SECURE_BUNDLE_PATH = input("Full path to your Secure Connect Bundle? ")
elif database_mode == "C":
    CASSANDRA_CONTACT_POINTS = input(
        "Contact points? (comma-separated, empty for localhost) "
    ).strip()
```

## Create the database connection "Session" object, depending on whether you are using a local cluster or cloud-based Astra DB

```python
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

if database_mode == "C":
    if CASSANDRA_CONTACT_POINTS:
        cluster = Cluster(
            [cp.strip() for cp in CASSANDRA_CONTACT_POINTS.split(",") if cp.strip()]
        )
    else:
        cluster = Cluster()
    session = cluster.connect()
elif database_mode == "A":
    ASTRA_DB_CLIENT_ID = "token"
    cluster = Cluster(
        cloud={
            "secure_connect_bundle": ASTRA_DB_SECURE_BUNDLE_PATH,
        },
        auth_provider=PlainTextAuthProvider(
            ASTRA_DB_CLIENT_ID,
            ASTRA_DB_APPLICATION_TOKEN,
        ),
    )
    session = cluster.connect()
else:
    raise NotImplementedError
```

## Creation and usage of the Chat Message History

```python
from langchain.memory import CassandraChatMessageHistory

message_history = CassandraChatMessageHistory(
    session_id="test-session",
    session=session,
    keyspace=keyspace_name,
)

message_history.add_user_message("hi!")
message_history.add_ai_message("whats up?")
message_history.messages
```
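All chat-message-history classes in this section implement the same base interface, so the stored conversation can be wiped with `clear()`. A short sketch:

```python
# Remove all messages stored for this session
message_history.clear()
assert message_history.messages == []
```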
# Entity Memory with SQLite storage

Source: https://python.langchain.com/docs/integrations/memory/entity_memory_with_sqlite
In this walkthrough we'll create a simple conversation chain which uses `ConversationEntityMemory` backed by a `SQLiteEntityStore`.

```python
from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.memory import ConversationEntityMemory
from langchain.memory.entity import SQLiteEntityStore
from langchain.memory.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE

entity_store = SQLiteEntityStore()
llm = OpenAI(temperature=0)
memory = ConversationEntityMemory(llm=llm, entity_store=entity_store)
conversation = ConversationChain(
    llm=llm,
    prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE,
    memory=memory,
    verbose=True,
)
```

Notice the usage of `SQLiteEntityStore` as the `entity_store` parameter of the memory object.

```python
conversation.run("Deven & Sam are working on a hackathon project")
```

```
> Entering new ConversationChain chain...
Prompt after formatting:
You are an assistant to a human, powered by a large language model trained by OpenAI.

You are designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.

You are constantly learning and improving, and your capabilities are constantly evolving. You are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics.

Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.

Context:
{'Deven': 'Deven is working on a hackathon project with Sam.', 'Sam': 'Sam is working on a hackathon project with Deven.'}

Current conversation:

Last line:
Human: Deven & Sam are working on a hackathon project
You:

> Finished chain.
' That sounds like a great project! What kind of project are they working on?'
```

```python
conversation.memory.entity_store.get("Deven")
```

```
'Deven is working on a hackathon project with Sam.'
```

```python
conversation.memory.entity_store.get("Sam")
```

```
'Sam is working on a hackathon project with Deven.'
```
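The entity store can also be read and written directly, which is handy for seeding or correcting facts. A minimal sketch, assuming the store exposes the `set` and `delete` methods of the base entity-store interface alongside the `get` calls shown above:

```python
# Manually write an entity summary, then remove it
entity_store.set("Deven", "Deven is presenting the hackathon project on Friday.")
print(entity_store.get("Deven"))
entity_store.delete("Deven")
```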
# Motörhead Memory

Source: https://python.langchain.com/docs/integrations/memory/motorhead_memory
Motörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications.

## Setup

See instructions at Motörhead for running the server locally.

```python
from langchain.memory.motorhead_memory import MotorheadMemory
from langchain import OpenAI, LLMChain, PromptTemplate

template = """You are a chatbot having a conversation with a human.

{chat_history}
Human: {human_input}
AI:"""

prompt = PromptTemplate(
    input_variables=["chat_history", "human_input"], template=template
)
memory = MotorheadMemory(
    session_id="testing-1", url="http://localhost:8080", memory_key="chat_history"
)

await memory.init()  # loads previous state from Motörhead 🤘

llm_chain = LLMChain(
    llm=OpenAI(),
    prompt=prompt,
    verbose=True,
    memory=memory,
)

llm_chain.run("hi im bob")
```

```
> Entering new LLMChain chain...
Prompt after formatting:
You are a chatbot having a conversation with a human.

Human: hi im bob
AI:

> Finished chain.
' Hi Bob, nice to meet you! How are you doing today?'
```

```python
llm_chain.run("whats my name?")
```

```
> Entering new LLMChain chain...
Prompt after formatting:
You are a chatbot having a conversation with a human.

Human: hi im bob
AI: Hi Bob, nice to meet you! How are you doing today?
Human: whats my name?
AI:

> Finished chain.
' You said your name is Bob. Is that correct?'
```

```python
llm_chain.run("whats for dinner?")
```

```
> Entering new LLMChain chain...
Prompt after formatting:
You are a chatbot having a conversation with a human.

Human: hi im bob
AI: Hi Bob, nice to meet you! How are you doing today?
Human: whats my name?
AI: You said your name is Bob. Is that correct?
Human: whats for dinner?
AI:

> Finished chain.
" I'm sorry, I'm not sure what you're asking. Could you please rephrase your question?"
```
# Zep Memory

Source: https://python.langchain.com/docs/integrations/memory/zep_memory
## REACT Agent Chat Message History with Zep - A long-term memory store for LLM applications

This notebook demonstrates how to use the Zep Long-term Memory Store as memory for your chatbot.

We'll demonstrate:

1. Adding conversation history to the Zep memory store.
2. Running an agent and having messages automatically added to the store.
3. Viewing the enriched messages.
4. Vector search over the conversation history.

### More on Zep

Zep stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs.

Key features:

- Fast! Zep's async extractors operate independently of your chat loop, ensuring a snappy user experience.
- Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.
- Auto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.
- Hybrid search over memories and metadata, with messages automatically embedded on creation.
- Entity Extractor that automatically extracts named entities from messages and stores them in the message metadata.
- Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.
- Python and JavaScript SDKs.

Zep project: https://github.com/getzep/zep
Docs: https://docs.getzep.com/

```python
import getpass
from uuid import uuid4

from langchain import OpenAI
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.memory import ZepMemory
from langchain.retrievers import ZepRetriever
from langchain.schema import AIMessage, HumanMessage
from langchain.utilities import WikipediaAPIWrapper

# Set this to your Zep server URL
ZEP_API_URL = "http://localhost:8000"

session_id = str(uuid4())  # This is a unique identifier for the user

# Provide your OpenAI key
openai_key = getpass.getpass()

# Provide your Zep API key. Note that this is optional.
# See https://docs.getzep.com/deployment/auth
zep_api_key = getpass.getpass()
```

### Initialize the Zep Chat Message History Class and initialize the Agent

```python
search = WikipediaAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to search online for answers. You should ask targeted questions",
    ),
]

# Set up Zep Chat History
memory = ZepMemory(
    session_id=session_id,
    url=ZEP_API_URL,
    api_key=zep_api_key,
    memory_key="chat_history",
)

# Initialize the agent
llm = OpenAI(temperature=0, openai_api_key=openai_key)
agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
)
```

### Add some history data
```python
# Preload some messages into the memory. The default message window is 12 messages.
# We want to push beyond this to demonstrate auto-summarization.
test_history = [
    {"role": "human", "content": "Who was Octavia Butler?"},
    {
        "role": "ai",
        "content": (
            "Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American"
            " science fiction author."
        ),
    },
    {"role": "human", "content": "Which books of hers were made into movies?"},
    {
        "role": "ai",
        "content": (
            "The most well-known adaptation of Octavia Butler's work is the FX series"
            " Kindred, based on her novel of the same name."
        ),
    },
    {"role": "human", "content": "Who were her contemporaries?"},
    {
        "role": "ai",
        "content": (
            "Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R."
            " Delany, and Joanna Russ."
        ),
    },
    {"role": "human", "content": "What awards did she win?"},
    {
        "role": "ai",
        "content": (
            "Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur"
            " Fellowship."
        ),
    },
    {
        "role": "human",
        "content": "Which other women sci-fi writers might I want to read?",
    },
    {
        "role": "ai",
        "content": "You might want to read Ursula K. Le Guin or Joanna Russ.",
    },
    {
        "role": "human",
        "content": (
            "Write a short synopsis of Butler's book, Parable of the Sower. What is it"
            " about?"
        ),
    },
    {
        "role": "ai",
        "content": (
            "Parable of the Sower is a science fiction novel by Octavia Butler,"
            " published in 1993. It follows the story of Lauren Olamina, a young woman"
            " living in a dystopian future where society has collapsed due to"
            " environmental disasters, poverty, and violence."
        ),
        "metadata": {"foo": "bar"},
    },
]

for msg in test_history:
    memory.chat_memory.add_message(
        HumanMessage(content=msg["content"])
        if msg["role"] == "human"
        else AIMessage(content=msg["content"]),
        metadata=msg.get("metadata", {}),
    )
```

### Run the agent

Doing so will automatically add the input and response to the Zep memory.

```python
agent_chain.run(
    input="What is the book's relevance to the challenges facing contemporary society?",
)
```

```
> Entering new chain...
Thought: Do I need to use a tool? No
AI: Parable of the Sower is a prescient novel that speaks to the challenges facing contemporary society, such as climate change, inequality, and violence. It is a cautionary tale that warns of the dangers of unchecked greed and the need for individuals to take responsibility for their own lives and the lives of those around them.

> Finished chain.
'Parable of the Sower is a prescient novel that speaks to the challenges facing contemporary society, such as climate change, inequality, and violence. It is a cautionary tale that warns of the dangers of unchecked greed and the need for individuals to take responsibility for their own lives and the lives of those around them.'
```

### Inspect the Zep memory

Note the summary, and that the history has been enriched with token counts, UUIDs, and timestamps. Summaries are biased towards the most recent messages.
```python
def print_messages(messages):
    for m in messages:
        print(m.type, ":\n", m.dict())


print(memory.chat_memory.zep_summary)
print("\n")
print_messages(memory.chat_memory.messages)
```

```
The human inquires about Octavia Butler. The AI identifies her as an American science fiction author. The human then asks which books of hers were made into movies. The AI responds by mentioning the FX series Kindred, based on her novel of the same name. The human then asks about her contemporaries, and the AI lists Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.

system :
 {'content': 'The human inquires about Octavia Butler. The AI identifies her as an American science fiction author. The human then asks which books of hers were made into movies. The AI responds by mentioning the FX series Kindred, based on her novel of the same name. The human then asks about her contemporaries, and the AI lists Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.', 'additional_kwargs': {}}
human :
 {'content': 'What awards did she win?', 'additional_kwargs': {'uuid': '6b733f0b-6778-49ae-b3ec-4e077c039f31', 'created_at': '2023-07-09T19:23:16.611232Z', 'token_count': 8, 'metadata': {'system': {'entities': [], 'intent': 'The subject is inquiring about the awards that someone, whose identity is not specified, has won.'}}}, 'example': False}
ai :
 {'content': 'Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.', 'additional_kwargs': {'uuid': '2f6d80c6-3c08-4fd4-8d4e-7bbee341ac90', 'created_at': '2023-07-09T19:23:16.618947Z', 'token_count': 21, 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 14, 'Start': 0, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}, {'Label': 'WORK_OF_ART', 'Matches': [{'End': 33, 'Start': 19, 'Text': 'the Hugo Award'}], 'Name': 'the Hugo Award'}, {'Label': 'EVENT', 'Matches': [{'End': 81, 'Start': 57, 'Text': 'the MacArthur Fellowship'}], 'Name': 'the MacArthur Fellowship'}], 'intent': 'The subject is stating that Octavia Butler received the Hugo Award, the Nebula Award, and the MacArthur Fellowship.'}}}, 'example': False}
human :
 {'content': 'Which other women sci-fi writers might I want to read?', 'additional_kwargs': {'uuid': 'ccdcc901-ea39-4981-862f-6fe22ab9289b', 'created_at': '2023-07-09T19:23:16.62678Z', 'token_count': 14, 'metadata': {'system': {'entities': [], 'intent': 'The subject is seeking recommendations for additional women science fiction writers to explore.'}}}, 'example': False}
ai :
 {'content': 'You might want to read Ursula K. Le Guin or Joanna Russ.', 'additional_kwargs': {'uuid': '7977099a-0c62-4c98-bfff-465bbab6c9c3', 'created_at': '2023-07-09T19:23:16.631721Z', 'token_count': 18, 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 40, 'Start': 23, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 55, 'Start': 44, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}], 'intent': 'The subject is suggesting that the person should consider reading the works of Ursula K. Le Guin or Joanna Russ.'}}}, 'example': False}
human :
 {'content': "Write a short synopsis of Butler's book, Parable of the Sower. What is it about?", 'additional_kwargs': {'uuid': 'e439b7e6-286a-4278-a8cb-dc260fa2e089', 'created_at': '2023-07-09T19:23:16.63623Z', 'token_count': 23, 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 32, 'Start': 26, 'Text': 'Butler'}], 'Name': 'Butler'}, {'Label': 'WORK_OF_ART', 'Matches': [{'End': 61, 'Start': 41, 'Text': 'Parable of the Sower'}], 'Name': 'Parable of the Sower'}], 'intent': 'The subject is requesting a brief summary or explanation of the book "Parable of the Sower" by Butler.'}}}, 'example': False}
ai :
 {'content': 'Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', 'additional_kwargs': {'uuid': '6760489b-19c9-41aa-8b45-fae6cb1d7ee6', 'created_at': '2023-07-09T19:23:16.647524Z', 'token_count': 56, 'metadata': {'foo': 'bar', 'system': {'entities': [{'Label': 'GPE', 'Matches': [{'End': 20, 'Start': 15, 'Text': 'Sower'}], 'Name': 'Sower'}, {'Label': 'PERSON', 'Matches': [{'End': 65, 'Start': 51, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}, {'Label': 'DATE', 'Matches': [{'End': 84, 'Start': 80, 'Text': '1993'}], 'Name': '1993'}, {'Label': 'PERSON', 'Matches': [{'End': 124, 'Start': 110, 'Text': 'Lauren Olamina'}], 'Name': 'Lauren Olamina'}], 'intent': 'The subject is providing information about the novel "Parable of the Sower" by Octavia Butler, including its genre, publication date, and a brief summary of the plot.'}}}, 'example': False}
human :
 {'content': "What is the book's relevance to the challenges facing contemporary society?", 'additional_kwargs': {'uuid': '7dbbbb93-492b-4739-800f-cad2b6e0e764', 'created_at': '2023-07-09T19:23:19.315182Z', 'token_count': 15, 'metadata': {'system': {'entities': [], 'intent': 'The subject is asking about the relevance of a book to the challenges currently faced by society.'}}}, 'example': False}
ai :
 {'content': 'Parable of the Sower is a prescient novel that speaks to the challenges facing contemporary society, such as climate change, inequality, and violence. It is a cautionary tale that warns of the dangers of unchecked greed and the need for individuals to take responsibility for their own lives and the lives of those around them.', 'additional_kwargs': {'uuid': '3e14ac8f-b7c1-4360-958b-9f3eae1f784f', 'created_at': '2023-07-09T19:23:19.332517Z', 'token_count': 66, 'metadata': {'system': {'entities': [{'Label': 'GPE', 'Matches': [{'End': 20, 'Start': 15, 'Text': 'Sower'}], 'Name': 'Sower'}], 'intent': 'The subject is providing an analysis and evaluation of the novel "Parable of the Sower" and highlighting its relevance to contemporary societal challenges.'}}}, 'example': False}
```

### Vector search over the Zep memory

Zep provides native vector search over historical conversation memory via the `ZepRetriever`. You can use the `ZepRetriever` with chains that support passing in a LangChain `Retriever` object.
```python
retriever = ZepRetriever(
    session_id=session_id,
    url=ZEP_API_URL,
    api_key=zep_api_key,
)

search_results = memory.chat_memory.search("who are some famous women sci-fi authors?")
for r in search_results:
    if r.dist > 0.8:  # Only print results with similarity of 0.8 or higher
        print(r.message, r.dist)
```

```
{'uuid': 'ccdcc901-ea39-4981-862f-6fe22ab9289b', 'created_at': '2023-07-09T19:23:16.62678Z', 'role': 'human', 'content': 'Which other women sci-fi writers might I want to read?', 'metadata': {'system': {'entities': [], 'intent': 'The subject is seeking recommendations for additional women science fiction writers to explore.'}}, 'token_count': 14} 0.9119619869747062
{'uuid': '7977099a-0c62-4c98-bfff-465bbab6c9c3', 'created_at': '2023-07-09T19:23:16.631721Z', 'role': 'ai', 'content': 'You might want to read Ursula K. Le Guin or Joanna Russ.', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 40, 'Start': 23, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 55, 'Start': 44, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}], 'intent': 'The subject is suggesting that the person should consider reading the works of Ursula K. Le Guin or Joanna Russ.'}}, 'token_count': 18} 0.8534346954749745
{'uuid': 'b05e2eb5-c103-4973-9458-928726f08655', 'created_at': '2023-07-09T19:23:16.603098Z', 'role': 'ai', 'content': "Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.", 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 16, 'Start': 0, 'Text': "Octavia Butler's"}], 'Name': "Octavia Butler's"}, {'Label': 'ORG', 'Matches': [{'End': 58, 'Start': 41, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 76, 'Start': 60, 'Text': 'Samuel R. Delany'}], 'Name': 'Samuel R. Delany'}, {'Label': 'PERSON', 'Matches': [{'End': 93, 'Start': 82, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}], 'intent': "The subject is stating that Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ."}}, 'token_count': 27} 0.8523831524040919
{'uuid': 'e346f02b-f854-435d-b6ba-fb394a416b9b', 'created_at': '2023-07-09T19:23:16.556587Z', 'role': 'human', 'content': 'Who was Octavia Butler?', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 22, 'Start': 8, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}], 'intent': 'The subject is asking for information about the identity or background of Octavia Butler.'}}, 'token_count': 8} 0.8236355436055457
{'uuid': '42ff41d2-c63a-4d5b-b19b-d9a87105cfc3', 'created_at': '2023-07-09T19:23:16.578022Z', 'role': 'ai', 'content': 'Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American science fiction author.', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 22, 'Start': 0, 'Text': 'Octavia Estelle Butler'}], 'Name': 'Octavia Estelle Butler'}, {'Label': 'DATE', 'Matches': [{'End': 37, 'Start': 24, 'Text': 'June 22, 1947'}], 'Name': 'June 22, 1947'}, {'Label': 'DATE', 'Matches': [{'End': 57, 'Start': 40, 'Text': 'February 24, 2006'}], 'Name': 'February 24, 2006'}, {'Label': 'NORP', 'Matches': [{'End': 74, 'Start': 66, 'Text': 'American'}], 'Name': 'American'}], 'intent': 'The subject is providing information about Octavia Estelle Butler, who was an American science fiction author.'}}, 'token_count': 31} 0.8206687242257686
{'uuid': '2f6d80c6-3c08-4fd4-8d4e-7bbee341ac90', 'created_at': '2023-07-09T19:23:16.618947Z', 'role': 'ai', 'content': 'Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 14, 'Start': 0, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}, {'Label': 'WORK_OF_ART', 'Matches': [{'End': 33, 'Start': 19, 'Text': 'the Hugo Award'}], 'Name': 'the Hugo Award'}, {'Label': 'EVENT', 'Matches': [{'End': 81, 'Start': 57, 'Text': 'the MacArthur Fellowship'}], 'Name': 'the MacArthur Fellowship'}], 'intent': 'The subject is stating that Octavia Butler received the Hugo Award, the Nebula Award, and the MacArthur Fellowship.'}}, 'token_count': 21} 0.8199012397683285
```
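The retriever can then be handed to any chain that accepts a LangChain retriever. A minimal sketch using `RetrievalQA` as one such chain (this combination is not shown on the original page):

```python
from langchain import OpenAI
from langchain.chains import RetrievalQA

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0, openai_api_key=openai_key),
    retriever=retriever,  # the ZepRetriever created above
)
print(qa.run("What books did Octavia Butler write?"))
```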
# Mongodb Chat Message History

Source: https://python.langchain.com/docs/integrations/memory/mongodb_chat_message_history

This notebook goes over how to use MongoDB to store chat message history.

> MongoDB is a source-available cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional schemas. MongoDB is developed by MongoDB Inc. and licensed under the Server Side Public License (SSPL). (Wikipedia)

```python
from langchain.memory import MongoDBChatMessageHistory

# Provide the connection string to connect to the MongoDB database
connection_string = "mongodb://mongo_user:password123@mongo:27017"

message_history = MongoDBChatMessageHistory(
    connection_string=connection_string, session_id="test-session"
)

message_history.add_user_message("hi!")
message_history.add_ai_message("whats up?")
message_history.messages
```

```
[HumanMessage(content='hi!', additional_kwargs={}, example=False),
 AIMessage(content='whats up?', additional_kwargs={}, example=False)]
```
# Retrievers

Source: https://python.langchain.com/docs/integrations/retrievers/
Retriever integrations:

- **Amazon Kendra**: Amazon Kendra is an intelligent search service provided by Amazon Web Services (AWS). It utilizes advanced natural language processing (NLP) and machine learning algorithms to enable powerful search capabilities across various data sources within an organization. Kendra is designed to help users find the information they need quickly and accurately, improving productivity and decision-making.
- **Arxiv**: arXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.
- **Azure Cognitive Search**: Azure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
- **BM25**: BM25, also known as Okapi BM25, is a ranking function used in information retrieval systems to estimate the relevance of documents to a given search query.
- **Chaindesk**: The Chaindesk platform brings data from anywhere (Datasources: Text, PDF, Word, PowerPoint, Excel, Notion, Airtable, Google Sheets, etc.) into Datastores (containers of multiple Datasources).
- **ChatGPT Plugin**: OpenAI plugins connect ChatGPT to third-party applications. These plugins enable ChatGPT to interact with APIs defined by developers, enhancing ChatGPT's capabilities and allowing it to perform a wide range of actions.
- **Cohere Reranker**: Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.
- **DocArray Retriever**: DocArray is a versatile, open-source tool for managing your multi-modal data. It lets you shape your data however you want, and offers the flexibility to store and search it using various document index backends. You can also utilize your DocArray document index to create a DocArrayRetriever and build LangChain apps.
- **ElasticSearch BM25**: Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
- **Google Cloud Enterprise Search**: Enterprise Search is part of the Generative AI App Builder suite of tools offered by Google Cloud.
- **kNN**: In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression.
- **LOTR (Merger Retriever)**: Lord of the Retrievers, also known as MergerRetriever, takes a list of retrievers as input and merges the results of their get_relevant_documents() methods into a single list. The merged results will be a list of documents that are relevant to the query and that have been ranked by the different retrievers.
- **Metal**: Metal is a managed service for ML embeddings.
- **Pinecone Hybrid Search**: Pinecone is a vector database with broad functionality.
- **PubMed**: This notebook goes over how to use PubMed as a retriever.
- **SVM**: Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outlier detection.
- **TF-IDF**: TF-IDF means term-frequency times inverse document-frequency.
- **Vespa**: Vespa is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query.
- **Weaviate Hybrid Search**: Weaviate is an open-source vector database.
- **Wikipedia**: Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.
- **Zep**: Retriever example for Zep - A long-term memory store for LLM applications.
# Azure Cognitive Search

Source: https://python.langchain.com/docs/integrations/retrievers/azure_cognitive_search
Azure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.

Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you'll work with the following capabilities:

- A search engine for full text search over a search index containing user-owned content
- Rich indexing, with lexical analysis and optional AI enrichment for content extraction and transformation
- Rich query syntax for text search, fuzzy search, autocomplete, geo-search and more
- Programmability through REST APIs and client libraries in Azure SDKs
- Azure integration at the data layer, machine learning layer, and AI (Cognitive Services)

This notebook shows how to use Azure Cognitive Search (ACS) within LangChain.

## Set up Azure Cognitive Search

To set up ACS, please follow the instructions here. Please note:

- the name of your ACS service,
- the name of your ACS index,
- your API key.

Your API key can be either an Admin or a Query key, but as we only read data it is recommended to use a Query key.

## Using the Azure Cognitive Search Retriever

```python
import os

from langchain.retrievers import AzureCognitiveSearchRetriever
```

Set the Service Name, Index Name and API key as environment variables (alternatively, you can pass them as arguments to `AzureCognitiveSearchRetriever`).

```python
os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = "<YOUR_ACS_SERVICE_NAME>"
os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] = "<YOUR_ACS_INDEX_NAME>"
os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = "<YOUR_API_KEY>"
```

Create the retriever:

```python
retriever = AzureCognitiveSearchRetriever(content_key="content", top_k=10)
```

Now you can retrieve documents from Azure Cognitive Search:

```python
retriever.get_relevant_documents("what is langchain")
```

You can change the number of results returned with the `top_k` parameter. The default value is `None`, which returns all results.
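If you would rather not use environment variables, the page notes that the same values can be passed as constructor arguments. A minimal sketch, assuming the field names mirror the environment variables as `service_name`, `index_name`, and `api_key` (check your LangChain version's signature):

```python
from langchain.retrievers import AzureCognitiveSearchRetriever

retriever = AzureCognitiveSearchRetriever(
    service_name="<YOUR_ACS_SERVICE_NAME>",  # hypothetical field names
    index_name="<YOUR_ACS_INDEX_NAME>",
    api_key="<YOUR_API_KEY>",
    content_key="content",
    top_k=5,
)
```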
# Vespa

Source: https://python.langchain.com/docs/integrations/retrievers/vespa
Vespa is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query.

This notebook shows how to use Vespa.ai as a LangChain retriever. In order to create a retriever, we use pyvespa to create a connection to a Vespa service.

```python
#!pip install pyvespa

from vespa.application import Vespa

vespa_app = Vespa(url="https://doc-search.vespa.oath.cloud")
```

This creates a connection to a Vespa service, here the Vespa documentation search service. Using the pyvespa package, you can also connect to a Vespa Cloud instance or a local Docker instance.

After connecting to the service, you can set up the retriever:

```python
from langchain.retrievers.vespa_retriever import VespaRetriever

vespa_query_body = {
    "yql": "select content from paragraph where userQuery()",
    "hits": 5,
    "ranking": "documentation",
    "locale": "en-us",
}
vespa_content_field = "content"
retriever = VespaRetriever(vespa_app, vespa_query_body, vespa_content_field)
```

This sets up a LangChain retriever that fetches documents from the Vespa application. Here, up to 5 results are retrieved from the content field in the paragraph document type, using documentation as the ranking method. The userQuery() is replaced with the actual query passed from LangChain.

Please refer to the pyvespa documentation for more information.

Now you can return the results and continue using them in LangChain:

```python
retriever.get_relevant_documents("what is vespa?")
```
c3b2e44a6457-0
ElasticSearch BM25 | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/retrievers/elastic_search_bm25
c3b2e44a6457-1
ElasticSearch BM25

Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.

In information retrieval, Okapi BM25 (BM is an abbreviation of best matching) is a ranking function used by search engines to estimate the relevance of documents to a given search query. It is based on the probabilistic retrieval framework developed in the 1970s and 1980s by Stephen E. Robertson, Karen Spärck Jones, and others.

The name of the actual ranking function is BM25. The fuller name, Okapi BM25, includes the name of the first system to use it, which was the Okapi information retrieval system, implemented at London's City University in the 1980s and 1990s. BM25 and its newer variants, e.g. BM25F (a version of BM25 that can take document structure and anchor text into account), represent TF-IDF-like retrieval functions used in document retrieval.

This notebook shows how to use a retriever that uses ElasticSearch and BM25.
https://python.langchain.com/docs/integrations/retrievers/elastic_search_bm25
c3b2e44a6457-2
For more information on the details of BM25, see this blog post.

```python
#!pip install elasticsearch
from langchain.retrievers import ElasticSearchBM25Retriever
```

Create New Retriever

```python
elasticsearch_url = "http://localhost:9200"
retriever = ElasticSearchBM25Retriever.create(elasticsearch_url, "langchain-index-4")

# Alternatively, you can load an existing index
# import elasticsearch
# elasticsearch_url = "http://localhost:9200"
# retriever = ElasticSearchBM25Retriever(elasticsearch.Elasticsearch(elasticsearch_url), "langchain-index")
```

Add texts (if necessary)

We can optionally add texts to the retriever (if they aren't already in there):

```python
retriever.add_texts(["foo", "bar", "world", "hello", "foo bar"])
```

    ['cbd4cb47-8d9f-4f34-b80e-ea871bc49856',
     'f3bd2e24-76d1-4f9b-826b-ec4c0e8c7365',
     '8631bfc8-7c12-48ee-ab56-8ad5f373676e',
     '8be8374c-3253-4d87-928d-d73550a2ecf0',
     'd79f457b-2842-4eab-ae10-77aa420b53d7']

Use Retriever

We can now use the retriever!

```python
result = retriever.get_relevant_documents("foo")
result
```

    [Document(page_content='foo', metadata={}),
     Document(page_content='foo bar', metadata={})]
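The retriever drops straight into any LangChain chain that accepts a retriever. A minimal sketch (not part of the original notebook; assumes an OpenAI API key is configured in the environment):

```python
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# Answer questions over the BM25-indexed texts added above.
qa = RetrievalQA.from_chain_type(llm=OpenAI(), retriever=retriever)
qa.run("Which documents mention foo?")
```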
https://python.langchain.com/docs/integrations/retrievers/elastic_search_bm25
9d5e87bbd763-0
Zep | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
9d5e87bbd763-1
Zep

Retriever Example for Zep - A long-term memory store for LLM applications.

More on Zep:

Zep stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs.

Key Features:

- Fast! Zep's async extractors operate independently of your chat loop, ensuring a snappy user experience.
- Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.
- Auto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.
- Hybrid search over memories and metadata, with messages automatically embedded on creation.
- Entity Extractor that automatically extracts named entities from messages and stores them in the message metadata.
- Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.
- Python and JavaScript SDKs.

Zep project: https://github.com/getzep/zep
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
9d5e87bbd763-2
Docs: https://docs.getzep.com/

Retriever Example

This notebook demonstrates how to search historical chat message histories using the Zep Long-term Memory Store. We'll demonstrate:

- Adding conversation history to the Zep memory store.
- Vector search over the conversation history.

```python
from langchain.memory.chat_message_histories import ZepChatMessageHistory
from langchain.schema import HumanMessage, AIMessage
from uuid import uuid4
import getpass

# Set this to your Zep server URL
ZEP_API_URL = "http://localhost:8000"
```

Initialize the Zep Chat Message History Class and add a chat message history to the memory store

NOTE: Unlike other Retrievers, the content returned by the Zep Retriever is session/user specific. A session_id is required when instantiating the Retriever.

```python
# Provide your Zep API key. Note that this is optional. See https://docs.getzep.com/deployment/auth
zep_api_key = getpass.getpass()
```

    ········

```python
session_id = str(uuid4())  # This is a unique identifier for the user/session

# Set up Zep Chat History. We'll use this to add chat histories to the memory store
zep_chat_history = ZepChatMessageHistory(
    session_id=session_id, url=ZEP_API_URL, api_key=zep_api_key
)

# Preload some messages into the memory. The default message window is 12 messages.
# We want to push beyond this to demonstrate auto-summarization.
test_history = [
    {"role": "human", "content": "Who was Octavia Butler?"},
```
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
9d5e87bbd763-3
"Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American" " science fiction author." ), }, {"role": "human", "content": "Which books of hers were made into movies?"}, { "role": "ai", "content": ( "The most well-known adaptation of Octavia Butler's work is the FX series" " Kindred, based on her novel of the same name." ), }, {"role": "human", "content": "Who were her contemporaries?"}, { "role": "ai", "content": ( "Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R." " Delany, and Joanna Russ." ), }, {"role": "human", "content": "What awards did she win?"}, { "role": "ai", "content": ( "Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur" " Fellowship." ), }, {
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
9d5e87bbd763-4
```python
    {
        "role": "human",
        "content": "Which other women sci-fi writers might I want to read?",
    },
    {
        "role": "ai",
        "content": "You might want to read Ursula K. Le Guin or Joanna Russ.",
    },
    {
        "role": "human",
        "content": (
            "Write a short synopsis of Butler's book, Parable of the Sower. What is it"
            " about?"
        ),
    },
    {
        "role": "ai",
        "content": (
            "Parable of the Sower is a science fiction novel by Octavia Butler,"
            " published in 1993. It follows the story of Lauren Olamina, a young woman"
            " living in a dystopian future where society has collapsed due to"
            " environmental disasters, poverty, and violence."
        ),
    },
]

for msg in test_history:
    zep_chat_history.add_message(
        HumanMessage(content=msg["content"])
        if msg["role"] == "human"
        else AIMessage(content=msg["content"])
    )
```
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
9d5e87bbd763-5
Use the Zep Retriever to vector search over the Zep memory

Zep provides native vector search over historical conversation memory. Embedding happens automatically.

NOTE: Embedding of messages occurs asynchronously, so the first query may not return results. Subsequent queries will return results as the embeddings are generated.

```python
from langchain.retrievers import ZepRetriever

zep_retriever = ZepRetriever(
    session_id=session_id,  # Ensure that you provide the session_id when instantiating the Retriever
    url=ZEP_API_URL,
    top_k=5,
    api_key=zep_api_key,
)

await zep_retriever.aget_relevant_documents("Who wrote Parable of the Sower?")
```

[Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', metadata={'score': 0.8897116216176073, 'uuid': 'db60ff57-f259-4ec4-8a81-178ed4c6e54f', 'created_at': '2023-06-26T23:40:22.816214Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'GPE', 'Matches': [{'End': 20, 'Start': 15, 'Text': 'Sower'}], 'Name': 'Sower'}, {'Label': 'PERSON', 'Matches': [{'End': 65, 'Start': 51,
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
9d5e87bbd763-6
'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}, {'Label': 'DATE', 'Matches': [{'End': 84, 'Start': 80, 'Text': '1993'}], 'Name': '1993'}, {'Label': 'PERSON', 'Matches': [{'End': 124, 'Start': 110, 'Text': 'Lauren Olamina'}], 'Name': 'Lauren Olamina'}]}}, 'token_count': 56}), Document(page_content="Write a short synopsis of Butler's book, Parable of the Sower. What is it about?", metadata={'score': 0.8856661080361157, 'uuid': 'f1a5981a-8f6d-4168-a548-6e9c32f35fa1', 'created_at': '2023-06-26T23:40:22.809621Z', 'role': 'human', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 32, 'Start': 26, 'Text': 'Butler'}], 'Name': 'Butler'}, {'Label': 'WORK_OF_ART', 'Matches': [{'End': 61, 'Start': 41, 'Text': 'Parable of the Sower'}], 'Name': 'Parable of the Sower'}]}}, 'token_count': 23}), Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7757595298492976, 'uuid': '361d0043-1009-4e13-a7f0-8aea8b1ee869', 'created_at': '2023-06-26T23:40:22.709886Z', 'role': 'human', 'metadata': {'system': {'entities': [{'Label':
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
9d5e87bbd763-7
'role': 'human', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 22, 'Start': 8, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}], 'intent': 'The subject wants to know about the identity or background of an individual named Octavia Butler.'}}, 'token_count': 8}), Document(page_content="Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.", metadata={'score': 0.7601242516059306, 'uuid': '56c45e8a-0f65-45f0-bc46-d9e65164b563', 'created_at': '2023-06-26T23:40:22.778836Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 16, 'Start': 0, 'Text': "Octavia Butler's"}], 'Name': "Octavia Butler's"}, {'Label': 'ORG', 'Matches': [{'End': 58, 'Start': 41, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 76, 'Start': 60, 'Text': 'Samuel R. Delany'}], 'Name': 'Samuel R. Delany'}, {'Label': 'PERSON', 'Matches': [{'End': 93, 'Start': 82, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}], 'intent': "The subject is providing information about Octavia Butler's contemporaries."}}, 'token_count': 27}),
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
9d5e87bbd763-8
about Octavia Butler's contemporaries."}}, 'token_count': 27}), Document(page_content='You might want to read Ursula K. Le Guin or Joanna Russ.', metadata={'score': 0.7594731095320668, 'uuid': '6951f2fd-dfa4-4e05-9380-f322ef8f72f8', 'created_at': '2023-06-26T23:40:22.80464Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 40, 'Start': 23, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 55, 'Start': 44, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}]}}, 'token_count': 18})]We can also use the Zep sync API to retrieve results:zep_retriever.get_relevant_documents("Who wrote Parable of the Sower?") [Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', metadata={'score': 0.889661105796371, 'uuid': 'db60ff57-f259-4ec4-8a81-178ed4c6e54f', 'created_at': '2023-06-26T23:40:22.816214Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'GPE', 'Matches':
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
9d5e87bbd763-9
'metadata': {'system': {'entities': [{'Label': 'GPE', 'Matches': [{'End': 20, 'Start': 15, 'Text': 'Sower'}], 'Name': 'Sower'}, {'Label': 'PERSON', 'Matches': [{'End': 65, 'Start': 51, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}, {'Label': 'DATE', 'Matches': [{'End': 84, 'Start': 80, 'Text': '1993'}], 'Name': '1993'}, {'Label': 'PERSON', 'Matches': [{'End': 124, 'Start': 110, 'Text': 'Lauren Olamina'}], 'Name': 'Lauren Olamina'}]}}, 'token_count': 56}), Document(page_content="Write a short synopsis of Butler's book, Parable of the Sower. What is it about?", metadata={'score': 0.885754241595424, 'uuid': 'f1a5981a-8f6d-4168-a548-6e9c32f35fa1', 'created_at': '2023-06-26T23:40:22.809621Z', 'role': 'human', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 32, 'Start': 26, 'Text': 'Butler'}], 'Name': 'Butler'}, {'Label': 'WORK_OF_ART', 'Matches': [{'End': 61, 'Start': 41, 'Text': 'Parable of the Sower'}], 'Name': 'Parable of the Sower'}]}}, 'token_count': 23}), Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7758688965570713,
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
9d5e87bbd763-10
was Octavia Butler?', metadata={'score': 0.7758688965570713, 'uuid': '361d0043-1009-4e13-a7f0-8aea8b1ee869', 'created_at': '2023-06-26T23:40:22.709886Z', 'role': 'human', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 22, 'Start': 8, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}], 'intent': 'The subject wants to know about the identity or background of an individual named Octavia Butler.'}}, 'token_count': 8}), Document(page_content="Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.", metadata={'score': 0.7602672137411663, 'uuid': '56c45e8a-0f65-45f0-bc46-d9e65164b563', 'created_at': '2023-06-26T23:40:22.778836Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 16, 'Start': 0, 'Text': "Octavia Butler's"}], 'Name': "Octavia Butler's"}, {'Label': 'ORG', 'Matches': [{'End': 58, 'Start': 41, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 76, 'Start': 60, 'Text': 'Samuel R. Delany'}], 'Name': 'Samuel R. Delany'},
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
9d5e87bbd763-11
{'Label': 'PERSON', 'Matches': [{'End': 93, 'Start': 82, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}]}}, 'token_count': 18})]
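Retrieved documents can then be used like any other LangChain retriever output, for example as grounding context in a question-answering chain. A minimal sketch (not from the original notebook; assumes an OpenAI API key is configured in the environment):

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

# Ground an LLM's answers in the session's chat history stored in Zep.
qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(model_name="gpt-3.5-turbo"), retriever=zep_retriever
)
qa({"question": "Who wrote Parable of the Sower?", "chat_history": []})
```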
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
c920bdb54ed4-0
SVM | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/retrievers/svm
c920bdb54ed4-1
SVM

Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outlier detection.

This notebook goes over how to use a retriever that, under the hood, uses an SVM from the scikit-learn package. Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.html

```python
#!pip install scikit-learn
#!pip install lark
```

We want to use OpenAIEmbeddings, so we have to get the OpenAI API key.

```python
import os
import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
```

    OpenAI API Key: ········

```python
from langchain.retrievers import SVMRetriever
from langchain.embeddings import OpenAIEmbeddings
```

Create New Retriever with Texts

```python
retriever = SVMRetriever.from_texts(
    ["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings()
)
```

Use Retriever

We can now
https://python.langchain.com/docs/integrations/retrievers/svm
c920bdb54ed4-2
use the retriever!

```python
result = retriever.get_relevant_documents("foo")
result
```

    [Document(page_content='foo', metadata={}),
     Document(page_content='foo bar', metadata={}),
     Document(page_content='hello', metadata={}),
     Document(page_content='world', metadata={})]
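The number of returned results is configurable. A minimal sketch, assuming SVMRetriever exposes a `k` attribute for the result count (an assumption about the class, not stated on this page):

```python
# Assumption: `k` controls how many documents the SVM ranking returns.
retriever.k = 2
retriever.get_relevant_documents("foo")
```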
https://python.langchain.com/docs/integrations/retrievers/svm
676c29d0d5d0-0
Wikipedia | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/retrievers/wikipedia
676c29d0d5d0-1
Wikipedia

Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.

This notebook shows how to retrieve wiki pages from wikipedia.org into the Document format that is used downstream.

Installation

First, you need to install the wikipedia Python package.

```python
#!pip install wikipedia
```

WikipediaRetriever has these arguments:

- optional lang: default="en". Use it to search in a specific language part of Wikipedia.
- optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.
- optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (date when the document was published/last updated), title, Summary. If True, other fields are also downloaded.

get_relevant_documents() has one argument, query: free text which is used to find documents in Wikipedia.

Examples

Running retriever
https://python.langchain.com/docs/integrations/retrievers/wikipedia
676c29d0d5d0-2
```python
from langchain.retrievers import WikipediaRetriever

retriever = WikipediaRetriever()
docs = retriever.get_relevant_documents(query="HUNTER X HUNTER")
docs[0].metadata  # meta-information of the Document
```

    {'title': 'Hunter × Hunter',
     'summary': 'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced "hunter hunter") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The story focuses on a young boy named Gon Freecss who discovers that his father, who left him at a young age, is actually a world-renowned Hunter, a licensed professional who specializes in fantastical pursuits such as locating rare or unidentified animal species, treasure hunting, surveying unexplored enclaves, or hunting down lawless individuals. Gon departs on a journey to become a Hunter and eventually find his father. Along the way, Gon meets various other Hunters and encounters the paranormal.\nHunter × Hunter was adapted into a 62-episode anime television series produced by Nippon Animation and directed by Kazuhiro Furuhashi, which ran on Fuji Television from October 1999 to March 2001. Three separate original video animations (OVAs) totaling 30 episodes were subsequently produced by Nippon Animation and released in Japan from 2002 to 2004. A second anime television series by Madhouse aired on Nippon Television from October 2011 to September 2014,
https://python.langchain.com/docs/integrations/retrievers/wikipedia
676c29d0d5d0-3
totaling 148 episodes, with two animated theatrical films released in 2013. There are also numerous audio albums, video games, musicals, and other media based on Hunter × Hunter.\nThe manga has been translated into English and released in North America by Viz Media since April 2005. Both television series have been also licensed by Viz Media, with the first series having aired on the Funimation Channel in 2009 and the second series broadcast on Adult Swim\'s Toonami programming block from April 2016 to June 2019.\nHunter × Hunter has been a huge critical and financial success and has become one of the best-selling manga series of all time, having over 84 million copies in circulation by July 2022.\n\n'}

```python
docs[0].page_content[:400]  # content of the Document
```

    'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced "hunter hunter") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The sto'

Question Answering on facts

```python
# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass

OPENAI_API_KEY = getpass()
```

    ········

```python
import os

os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
```

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
```
https://python.langchain.com/docs/integrations/retrievers/wikipedia
676c29d0d5d0-4
ChatOpenAI(model_name="gpt-3.5-turbo") # switch to 'gpt-4'qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)questions = [ "What is Apify?", "When the Monument to the Martyrs of the 1830 Revolution was created?", "What is the Abhayagiri Vih�ra?", # "How big is Wikipédia en français?",]chat_history = []for question in questions: result = qa({"question": question, "chat_history": chat_history}) chat_history.append((question, result["answer"])) print(f"-> **Question**: {question} \n") print(f"**Answer**: {result['answer']} \n") -> **Question**: What is Apify? **Answer**: Apify is a platform that allows you to easily automate web scraping, data extraction and web automation. It provides a cloud-based infrastructure for running web crawlers and other automation tasks, as well as a web-based tool for building and managing your crawlers. Additionally, Apify offers a marketplace for buying and selling pre-built crawlers and related services. -> **Question**: When the Monument to the Martyrs of the 1830 Revolution was created? **Answer**: Apify is a web scraping and automation platform that enables you to extract data from websites, turn unstructured data into structured data, and automate repetitive tasks. It provides a user-friendly interface for creating web scraping scripts without any coding knowledge. Apify can be used for various web scraping tasks such as data extraction, web monitoring, content aggregation, and much more. Additionally, it offers various features such as proxy
https://python.langchain.com/docs/integrations/retrievers/wikipedia
676c29d0d5d0-5
web monitoring, content aggregation, and much more. Additionally, it offers various features such as proxy support, scheduling, and integration with other tools to make web scraping and automation tasks easier and more efficient.

    -> **Question**: What is the Abhayagiri Vihāra?

    **Answer**: Abhayagiri Vihāra was a major monastery site of Theravada Buddhism that was located in Anuradhapura, Sri Lanka. It was founded in the 2nd century BCE and is considered to be one of the most important monastic complexes in Sri Lanka.
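The retriever arguments described earlier compose naturally with this workflow. A minimal sketch with hypothetical values (not from the original notebook), using the lang and load_max_docs arguments documented above:

```python
from langchain.retrievers import WikipediaRetriever

# Search German-language Wikipedia and cap the download at two documents.
de_retriever = WikipediaRetriever(lang="de", load_max_docs=2)
de_docs = de_retriever.get_relevant_documents(query="Kernfusion")
len(de_docs)
```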
https://python.langchain.com/docs/integrations/retrievers/wikipedia
bee3b6edc0bc-0
Metal | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/retrievers/metal
bee3b6edc0bc-1
Metal

Metal is a managed service for ML Embeddings.

This notebook shows how to use Metal's retriever. First, you will need to sign up for Metal and get an API key. You can do so here.

```python
# !pip install metal_sdk
from metal_sdk.metal import Metal

API_KEY = ""
CLIENT_ID = ""
INDEX_ID = ""

metal = Metal(API_KEY, CLIENT_ID, INDEX_ID)
```

Ingest Documents

You only need to do this if you haven't already set up an index.

```python
metal.index({"text": "foo1"})
metal.index({"text": "foo"})
```

    {'data': {'id': '642739aa7559b026b4430e42', 'text': 'foo', 'createdAt': '2023-03-31T19:51:06.748Z'}}

Query

Now that our index is set up, we can set up a retriever and start querying it.

```python
from langchain.retrievers import MetalRetriever

retriever = MetalRetriever(metal, params={"limit": 2})
retriever.get_relevant_documents("foo1")
```
https://python.langchain.com/docs/integrations/retrievers/metal
bee3b6edc0bc-2
    [Document(page_content='foo1', metadata={'dist': '1.19209289551e-07', 'id': '642739a17559b026b4430e40', 'createdAt': '2023-03-31T19:50:57.853Z'}),
     Document(page_content='foo1', metadata={'dist': '4.05311584473e-06', 'id': '642738f67559b026b4430e3c', 'createdAt': '2023-03-31T19:48:06.769Z'})]
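Ingestion is one call per document, so seeding an index is just a loop. A small sketch reusing the `metal` client from above (the texts are illustrative, not from the original notebook):

```python
# Bulk-ingest a handful of texts before querying.
for text in ["foo2", "bar", "baz"]:
    metal.index({"text": text})
```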
https://python.langchain.com/docs/integrations/retrievers/metal
bb2e92064692-0
DocArray Retriever | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
bb2e92064692-1
DocArray Retriever

DocArray is a versatile, open-source tool for managing your multi-modal data. It lets you shape your data however you want, and offers the flexibility to store and search it using various document index backends. Plus, it gets even better - you can utilize your DocArray document index to create a DocArrayRetriever, and build awesome LangChain apps!

This notebook is split into two sections. The first section offers an introduction to all five supported document index backends. It provides guidance on setting up and indexing each backend, and also instructs you on how to build a DocArrayRetriever for finding relevant documents. In the second section, we'll select one of these backends and illustrate how to use it through a basic example.

Document Index Backends:

- InMemoryExactNNIndex
- HnswDocumentIndex
- WeaviateDocumentIndex
- ElasticDocIndex
- QdrantDocumentIndex

Movie Retrieval using HnswDocumentIndex:

- Normal Retriever
- Retriever with Filters
- Retriever with MMR Search

Document Index Backends
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
bb2e92064692-2
```python
from langchain.retrievers import DocArrayRetriever
from docarray import BaseDoc
from docarray.typing import NdArray
import numpy as np
from langchain.embeddings import FakeEmbeddings
import random

embeddings = FakeEmbeddings(size=32)
```

Before you start building the index, it's important to define your document schema. This determines what fields your documents will have and what type of data each field will hold. For this demonstration, we'll create a somewhat random schema containing 'title' (str), 'title_embedding' (numpy array), 'year' (int), and 'color' (str).

```python
class MyDoc(BaseDoc):
    title: str
    title_embedding: NdArray[32]
    year: int
    color: str
```

InMemoryExactNNIndex

InMemoryExactNNIndex stores all Documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server. Learn more here: https://docs.docarray.org/user_guide/storing/index_in_memory/

```python
from docarray.index import InMemoryExactNNIndex

# initialize the index
db = InMemoryExactNNIndex[MyDoc]()

# index data
db.index(
    [
        MyDoc(
            title=f"My document {i}",
            title_embedding=embeddings.embed_query(f"query {i}"),
            year=i,
            color=random.choice(["red", "green", "blue"]),
        )
        for i in range(100)
    ]
)

# optionally, you can create a filter query
filter_query = {"year": {"$lte": 90}}
```
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
bb2e92064692-3
```python
# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="title_embedding",
    content_field="title",
    filters=filter_query,
)

# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
```

    [Document(page_content='My document 56', metadata={'id': '1f33e58b6468ab722f3786b96b20afe6', 'year': 56, 'color': 'red'})]

HnswDocumentIndex

HnswDocumentIndex is a lightweight Document Index implementation that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite. Learn more here: https://docs.docarray.org/user_guide/storing/index_hnswlib/

```python
from docarray.index import HnswDocumentIndex

# initialize the index
db = HnswDocumentIndex[MyDoc](work_dir="hnsw_index")

# index data
db.index(
    [
        MyDoc(
            title=f"My document {i}",
            title_embedding=embeddings.embed_query(f"query {i}"),
            year=i,
            color=random.choice(["red", "green", "blue"]),
        )
        for i in range(100)
    ]
)

# optionally, you can create a filter query
filter_query = {"year": {"$lte": 90}}
```
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
bb2e92064692-4
```python
# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="title_embedding",
    content_field="title",
    filters=filter_query,
)

# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
```

    [Document(page_content='My document 28', metadata={'id': 'ca9f3f4268eec7c97a7d6e77f541cb82', 'year': 28, 'color': 'red'})]

WeaviateDocumentIndex

WeaviateDocumentIndex is a document index that is built upon the Weaviate vector database. Learn more here: https://docs.docarray.org/user_guide/storing/index_weaviate/

```python
# There's a small difference with the Weaviate backend compared to the others.
# Here, you need to 'mark' the field used for vector search with 'is_embedding=True'.
# So, let's create a new schema for Weaviate that takes care of this requirement.
from pydantic import Field


class WeaviateDoc(BaseDoc):
    title: str
    title_embedding: NdArray[32] = Field(is_embedding=True)
    year: int
    color: str
```

```python
from docarray.index import WeaviateDocumentIndex

# initialize the index
dbconfig = WeaviateDocumentIndex.DBConfig(host="http://localhost:8080")
db = WeaviateDocumentIndex[WeaviateDoc](db_config=dbconfig)

# index data
db.index(
    [
        MyDoc(
            title=f"My document {i}",
            title_embedding=embeddings.embed_query(f"query {i}"),
            year=i,
```
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
bb2e92064692-5
```python
            color=random.choice(["red", "green", "blue"]),
        )
        for i in range(100)
    ]
)

# optionally, you can create a filter query
filter_query = {"path": ["year"], "operator": "LessThanEqual", "valueInt": "90"}

# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="title_embedding",
    content_field="title",
    filters=filter_query,
)

# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
```

    [Document(page_content='My document 17', metadata={'id': '3a5b76e85f0d0a01785dc8f9d965ce40', 'year': 17, 'color': 'red'})]

ElasticDocIndex

ElasticDocIndex is a document index that is built upon ElasticSearch. Learn more here: https://docs.docarray.org/user_guide/storing/index_elastic/

```python
from docarray.index import ElasticDocIndex

# initialize the index
db = ElasticDocIndex[MyDoc](
    hosts="http://localhost:9200", index_name="docarray_retriever"
)

# index data
db.index(
    [
        MyDoc(
            title=f"My document {i}",
            title_embedding=embeddings.embed_query(f"query {i}"),
            year=i,
```
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
bb2e92064692-6
```python
            color=random.choice(["red", "green", "blue"]),
        )
        for i in range(100)
    ]
)

# optionally, you can create a filter query
filter_query = {"range": {"year": {"lte": 90}}}

# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="title_embedding",
    content_field="title",
    filters=filter_query,
)

# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
```

    [Document(page_content='My document 46', metadata={'id': 'edbc721bac1c2ad323414ad1301528a4', 'year': 46, 'color': 'green'})]

QdrantDocumentIndex

QdrantDocumentIndex is a document index that is built upon the Qdrant vector database. Learn more here: https://docs.docarray.org/user_guide/storing/index_qdrant/

```python
from docarray.index import QdrantDocumentIndex
from qdrant_client.http import models as rest

# initialize the index
qdrant_config = QdrantDocumentIndex.DBConfig(path=":memory:")
db = QdrantDocumentIndex[MyDoc](qdrant_config)

# index data
db.index(
    [
        MyDoc(
            title=f"My document {i}",
            title_embedding=embeddings.embed_query(f"query {i}"),
            year=i,
```
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
bb2e92064692-7
```python
            color=random.choice(["red", "green", "blue"]),
        )
        for i in range(100)
    ]
)

# optionally, you can create a filter query
filter_query = rest.Filter(
    must=[
        rest.FieldCondition(
            key="year",
            range=rest.Range(
                gte=10,
                lt=90,
            ),
        )
    ]
)
```

    WARNING:root:Payload indexes have no effect in the local Qdrant. Please use server Qdrant if you need payload indexes.

```python
# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="title_embedding",
    content_field="title",
    filters=filter_query,
)

# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
```

    [Document(page_content='My document 80', metadata={'id': '97465f98d0810f1f330e4ecc29b13d20', 'year': 80, 'color': 'blue'})]

Movie Retrieval using HnswDocumentIndex

```python
movies = [
    {
        "title": "Inception",
        "description": "A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.",
```
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
bb2e92064692-8
```python
        "director": "Christopher Nolan",
        "rating": 8.8,
    },
    {
        "title": "The Dark Knight",
        "description": "When the menace known as the Joker wreaks havoc and chaos on the people of Gotham, Batman must accept one of the greatest psychological and physical tests of his ability to fight injustice.",
        "director": "Christopher Nolan",
        "rating": 9.0,
    },
    {
        "title": "Interstellar",
        "description": "Interstellar explores the boundaries of human exploration as a group of astronauts venture through a wormhole in space. In their quest to ensure the survival of humanity, they confront the vastness of space-time and grapple with love and sacrifice.",
        "director": "Christopher Nolan",
        "rating": 8.6,
    },
    {
        "title": "Pulp Fiction",
        "description": "The lives of two mob hitmen, a boxer, a gangster's wife, and a pair of diner bandits intertwine in four tales of violence and redemption.",
        "director": "Quentin Tarantino",
        "rating": 8.9,
    },
    {
        "title": "Reservoir Dogs",
        "description":
```
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
bb2e92064692-9
"When a simple jewelry heist goes horribly wrong, the surviving criminals begin to suspect that one of them is a police informant.", "director": "Quentin Tarantino", "rating": 8.3, }, { "title": "The Godfather", "description": "An aging patriarch of an organized crime dynasty transfers control of his empire to his reluctant son.", "director": "Francis Ford Coppola", "rating": 9.2, },]import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key: ········from docarray import BaseDoc, DocListfrom docarray.typing import NdArrayfrom langchain.embeddings.openai import OpenAIEmbeddings# define schema for your movie documentsclass MyDoc(BaseDoc): title: str description: str description_embedding: NdArray[1536] rating: float director: strembeddings = OpenAIEmbeddings()# get "description" embeddings, and create documentsdocs = DocList[MyDoc]( [ MyDoc( description_embedding=embeddings.embed_query(movie["description"]), **movie ) for movie in movies ])from docarray.index import HnswDocumentIndex# initialize the indexdb = HnswDocumentIndex[MyDoc](work_dir="movie_search")# add datadb.index(docs)Normal
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
bb2e92064692-10
```python
# add data
db.index(docs)
```

Normal Retriever

```python
from langchain.retrievers import DocArrayRetriever

# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="description_embedding",
    content_field="description",
)

# find the relevant document
doc = retriever.get_relevant_documents("movie about dreams")
print(doc)
```

    [Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'})]

Retriever with Filters

```python
from langchain.retrievers import DocArrayRetriever

# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="description_embedding",
    content_field="description",
    filters={"director": {"$eq": "Christopher Nolan"}},
    top_k=2,
)

# find relevant documents
docs = retriever.get_relevant_documents("space travel")
print(docs)
```

    [Document(page_content='Interstellar explores the boundaries of human exploration as a group of astronauts venture through a wormhole in space. In their quest to ensure the survival of humanity, they confront the vastness of space-time and grapple with love and sacrifice.', metadata={'id': 'ab704cc7ae8573dc617f9a5e25df022a', 'title': 'Interstellar', 'rating': 8.6,
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
bb2e92064692-11
'director': 'Christopher Nolan'}), Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'})]

Retriever with MMR search

```python
from langchain.retrievers import DocArrayRetriever

# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="description_embedding",
    content_field="description",
    filters={"rating": {"$gte": 8.7}},
    search_type="mmr",
    top_k=3,
)

# find relevant documents
docs = retriever.get_relevant_documents("action movies")
print(docs)
```

    [Document(page_content="The lives of two mob hitmen, a boxer, a gangster's wife, and a pair of diner bandits intertwine in four tales of violence and redemption.", metadata={'id': 'e6aa313bbde514e23fbc80ab34511afd', 'title': 'Pulp Fiction', 'rating': 8.9, 'director': 'Quentin Tarantino'}),
     Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'}),
     Document(page_content='When
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
bb2e92064692-12
the menace known as the Joker wreaks havoc and chaos on the people of Gotham, Batman must accept one of the greatest psychological and physical tests of his ability to fight injustice.', metadata={'id': '91dec17d4272041b669fd113333a65f7', 'title': 'The Dark Knight', 'rating': 9.0, 'director': 'Christopher Nolan'})]
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
bba6e2ef5336-0
TF-IDF | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/retrievers/tf_idf
bba6e2ef5336-1
TF-IDF

TF-IDF means term-frequency times inverse document-frequency.

This notebook goes over how to use a retriever that, under the hood, uses TF-IDF from the scikit-learn package. For more information on the details of TF-IDF, see this blog post.

```python
# !pip install scikit-learn
from langchain.retrievers import TFIDFRetriever
```

Create New Retriever with Texts

```python
retriever = TFIDFRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"])
```

Create a New Retriever with Documents

You can now create a new retriever with the documents you created.

```python
from langchain.schema import Document

retriever = TFIDFRetriever.from_documents(
    [
        Document(page_content="foo"),
        Document(page_content="bar"),
        Document(page_content="world"),
        Document(page_content="hello"),
        Document(page_content="foo bar"),
    ]
)
```
https://python.langchain.com/docs/integrations/retrievers/tf_idf
bba6e2ef5336-2
Use Retriever

We can now use the retriever!

```python
result = retriever.get_relevant_documents("foo")
result
```

    [Document(page_content='foo', metadata={}),
     Document(page_content='foo bar', metadata={}),
     Document(page_content='hello', metadata={}),
     Document(page_content='world', metadata={})]
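The vectorization itself can be tuned as well. A minimal sketch, assuming from_texts accepts a tfidf_params dictionary that is forwarded to scikit-learn's TfidfVectorizer (an assumption about the API, not stated on this page):

```python
# Assumption: tfidf_params is passed through to sklearn's TfidfVectorizer.
retriever = TFIDFRetriever.from_texts(
    ["foo", "bar", "world", "hello", "foo bar"],
    tfidf_params={"ngram_range": (1, 2)},  # index unigrams and bigrams
)
retriever.get_relevant_documents("foo bar")
```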
https://python.langchain.com/docs/integrations/retrievers/tf_idf
767d5b975f00-0
Google Cloud Enterprise Search | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/retrievers/google_cloud_enterprise_search
767d5b975f00-1
Google Cloud Enterprise Search

Enterprise Search is a part of the Generative AI App Builder suite of tools offered by Google Cloud.

Gen AI App Builder lets developers, even those with limited machine learning skills, quickly and easily tap into the power of Google's foundation models, search expertise, and conversational AI technologies to create enterprise-grade generative AI applications. Enterprise Search lets organizations quickly build generative AI powered search engines for customers and employees.

Enterprise Search is underpinned by a variety of Google Search technologies, including semantic search, which helps deliver more relevant results than traditional keyword-based search techniques by using natural language processing and machine learning techniques to infer relationships within the content and intent from the user's query input. Enterprise Search also benefits from Google's expertise in understanding how users search and factors in content relevance to order displayed results.

Google Cloud offers Enterprise Search via Gen App Builder in Google Cloud Console and via an API for enterprise workflow integration. This notebook demonstrates how to configure Enterprise Search and use the Enterprise Search retriever. The Enterprise Search retriever encapsulates the Generative AI App Builder Python client library and uses it to access the Enterprise Search Search Service API.
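To preview where the setup below leads: once the prerequisites are in place, the retriever is typically instantiated from a project and search engine ID. A minimal sketch, assuming a GoogleCloudEnterpriseSearchRetriever class with project_id and search_engine_id parameters (the class and parameter names are an assumption, not confirmed on this page):

```python
from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever

# Hypothetical identifiers; replace with your own project and engine.
retriever = GoogleCloudEnterpriseSearchRetriever(
    project_id="<YOUR_GCP_PROJECT_ID>",
    search_engine_id="<YOUR_SEARCH_ENGINE_ID>",
)
retriever.get_relevant_documents("What were Alphabet's revenues in 2022?")
```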
https://python.langchain.com/docs/integrations/retrievers/google_cloud_enterprise_search
767d5b975f00-2
Install pre-requisites

You need to install the google-cloud-discoveryengine package to use the Enterprise Search retriever.

```
pip install google-cloud-discoveryengine
```

Configure access to Google Cloud and Google Cloud Enterprise Search

Enterprise Search is generally available for the allowlist (which means customers need to be approved for access) as of June 6, 2023. Contact your Google Cloud sales team for access and pricing details. We are previewing additional features that are coming soon to the generally available offering as part of our Trusted Tester program. Sign up for Trusted Tester and contact your Google Cloud sales team for an expedited trial.

Before you can run this notebook you need to:

- Set or create a Google Cloud project and turn on Gen App Builder
- Create and populate an unstructured data store
- Set credentials to access the Enterprise Search API

Set or create a Google Cloud project and turn on Gen App Builder

Follow the instructions in the Enterprise Search Getting Started guide to set/create a GCP project and enable Gen App Builder.

Create and populate an unstructured data store

Use Google Cloud Console to create an unstructured data store and populate it with the example PDF documents from the gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs Cloud Storage folder. Make sure to use the Cloud Storage (without metadata) option.

Set credentials to access Enterprise Search API

The Gen App Builder client libraries used by the Enterprise Search retriever provide high-level language support for authenticating to Gen App Builder programmatically. Client libraries support Application Default Credentials (ADC); the libraries look for credentials in a set of defined locations and use those credentials to authenticate requests to the API. With ADC, you can make credentials available to your application in a variety of environments, such as local development or production, without needing to modify your application code.

If running in Google Colab authenticate with google.colab.google.auth otherwise follow one of
https://python.langchain.com/docs/integrations/retrievers/google_cloud_enterprise_search