Dataset Viewer

id (stringlengths 14–15) | text (stringlengths 30–2.4k) | source (stringlengths 48–124) |
---|---|---|
059fb22a99c0-0 | Interacting with APIs | 🦜️🔗 Langchain
Lots of data and information is stored behind APIs.
This page covers all resources available in LangChain for working with APIs. Chains: If you are just getting started and have relatively simple APIs, you should start with chains.
Chains are a sequence of predetermined steps, so they are a good starting point: they give you more control and let you
understand better what is happening. See: API Chain. Agents: Agents are more complex and involve multiple queries to the LLM to decide what to do.
The downside of agents is that you have less control. The upside is that they are more powerful,
which allows you to use them on larger and more complex schemas. See: OpenAPI Agent. | https://python.langchain.com/docs/use_cases/apis |
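The chain-based approach described above can be sketched in a few lines. This is a hedged example rather than the page's own code: it assumes the `APIChain.from_llm_and_api_docs` constructor and the bundled Open-Meteo API docs that LangChain shipped at the time, and the question string is a placeholder.

```python
from langchain.llms import OpenAI
from langchain.chains import APIChain
from langchain.chains.api import open_meteo_docs  # bundled API documentation (assumed available)

llm = OpenAI(temperature=0)

# The chain runs a fixed sequence of steps: read the API docs, build a request
# URL for the user's question, call the API, and summarize the JSON response.
chain = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS, verbose=True)
chain.run("What is the current temperature in Munich, in degrees Celsius?")
```

Because every step is predetermined, it is easy to inspect exactly what the chain sends to the API, which is the control the page contrasts with agents.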
a50eaf0b53a0-0 | Analyzing structured data | 🦜️🔗 Langchain | https://python.langchain.com/docs/use_cases/tabular |
a50eaf0b53a0-1 | Lots of data and information is stored in tabular data, whether it be CSVs, Excel sheets, or SQL tables.
This page covers all resources available in LangChain for working with data in this format. Document loading: If you have text data stored in a tabular format, you may want to load the data into a Document and then index it as you would
other text/unstructured data. For this, you should use a document loader like the CSVLoader
and then create an index over that data and query it that way. Querying: If you have more numeric tabular data, or a large amount of data that you don't want to index, you should start
by looking at the various chains and agents we have for dealing with this data. Chains: If you are just getting started and have relatively small/simple tabular data, you should start with chains.
Chains are a sequence of predetermined steps, so they are a good starting point: they give you more control and let you
understand better what is happening. See: SQL Database Chain. Agents: Agents are more complex and involve multiple queries to the LLM to decide what to do.
The downside of agents is that you have less control. The upside is that they are more powerful, | https://python.langchain.com/docs/use_cases/tabular |
a50eaf0b53a0-2 | which allows you to use them on larger databases and more complex schemas. See: SQL Agent, Pandas Agent, CSV Agent. | https://python.langchain.com/docs/use_cases/tabular |
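The agent-based approach mentioned above (Pandas Agent) can be sketched as follows. This is a hedged example, not the page's code: it assumes the `create_pandas_dataframe_agent` helper available in LangChain at the time, and the CSV path and question are placeholders.

```python
import pandas as pd
from langchain.chat_models import ChatOpenAI
from langchain.agents import create_pandas_dataframe_agent  # assumed helper in langchain.agents

# Hypothetical tabular dataset.
df = pd.read_csv("data.csv")

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# Unlike a chain, the agent makes multiple LLM calls: it inspects the dataframe,
# writes and runs small pandas snippets, and iterates until it can answer.
agent = create_pandas_dataframe_agent(llm, df, verbose=True)
agent.run("How many rows have a value above the column mean?")
```

The extra LLM round-trips are what give agents their flexibility on larger schemas, at the cost of the step-by-step control a chain provides.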
83a1a206232c-0 | QA and Chat over Documents | 🦜️🔗 Langchain | https://python.langchain.com/docs/use_cases/question_answering/ |
83a1a206232c-1 | Chat and Question-Answering (QA) over data are popular LLM use cases. Data can include many things, including: Unstructured data (e.g., PDFs), Structured data (e.g., SQL), Code (e.g., Python). LangChain supports Chat and QA on various data types: see here and here for Code, and see here for Structured data. Below we will review Chat and QA on Unstructured data. Unstructured data can be loaded from many sources. Use the LangChain integration hub to browse the full set of loaders. Each loader returns data as a LangChain Document. Documents are turned into a Chat or QA app following the general steps below: Splitting: Text splitters break Documents into splits of a specified size. Storage: Storage (e.g., often a vectorstore) will house and often embed the splits. Retrieval: The app retrieves splits from storage (e.g., often with embeddings similar to the input question). Output: An LLM produces an answer using a prompt that includes the question and the retrieved splits. Quickstart: The above pipeline can be wrapped with a VectorstoreIndexCreator. In particular: specify a Document loader; the splitting, storage, retrieval, and output generation stages are wrapped. Let's load this blog post on agents as an example Document. We have a QA app in a few lines of code. Set environment variables and get packages: pip install openai; pip install chromadb; export | https://python.langchain.com/docs/use_cases/question_answering/ |
83a1a206232c-2 | lines of code.Set environment variables and get packages:pip install openaipip install chromadbexport OPENAI_API_KEY="..."Run:from langchain.document_loaders import WebBaseLoaderfrom langchain.indexes import VectorstoreIndexCreator# Document loaderloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")# Index that wraps above stepsindex = VectorstoreIndexCreator().from_loaders([loader])# Question-answeringquestion = "What is Task Decomposition?"index.query(question)' Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. It can be done using LLM with simple prompting, task-specific instructions, or human inputs. Tree of Thoughts (Yao et al. 2023) is an example of a task decomposition technique that explores multiple reasoning possibilities at each step and generates multiple thoughts per step, creating a tree structure.'Of course, some users do not want this level of abstraction.Below, we will discuss each stage in more detail.1. Loading, Splitting, Storage​1.1 Getting started​Specify a Document loader.# Document loaderfrom langchain.document_loaders import WebBaseLoaderloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")data = loader.load()Split the Document into chunks for embedding and vector storage.# Splitfrom langchain.text_splitter import RecursiveCharacterTextSplittertext_splitter = RecursiveCharacterTextSplitter(chunk_size = 500, chunk_overlap = 0)all_splits = text_splitter.split_documents(data)Embed and store the splits in a vector database (Chroma).# Store from langchain.vectorstores import Chromafrom langchain.embeddings import OpenAIEmbeddingsvectorstore = Chroma.from_documents(documents=all_splits,embedding=OpenAIEmbeddings())Here are the three pieces together:1.2 Going | https://python.langchain.com/docs/use_cases/question_answering/ |
83a1a206232c-3 | are the three pieces together:1.2 Going Deeper​1.2.1 Integrations​Document LoadersBrowse the > 120 data loader integrations here.See further documentation on loaders here.Document TransformersAll can ingest loaded Documents and process them (e.g., split).See further documentation on transformers here.VectorstoresBrowse the > 35 vectorstores integrations here.See further documentation on vectorstores here. 1.2.2 Retaining metadata​Context-aware splitters keep the location ("context") of each split in the original Document:Markdown filesCode (py or js)Documents2. Retrieval​2.1 Getting started​Retrieve relevant splits for any question using similarity_search.question = "What are the approaches to Task Decomposition?"docs = vectorstore.similarity_search(question)len(docs)42.2 Going Deeper​2.2.1 Retrieval​Vectorstores are commonly used for retrieval.But, they are not the only option.For example, SVMs (see thread here) can also be used.LangChain has many retrievers including, but not limited to, vectorstores.All retrievers implement some common methods, such as get_relevant_documents().from langchain.retrievers import SVMRetrieversvm_retriever = SVMRetriever.from_documents(all_splits,OpenAIEmbeddings())docs_svm=svm_retriever.get_relevant_documents(question)len(docs)42.2.2 Advanced retrieval​Improve on similarity_search:MultiQueryRetriever generates variants of the input question to improve retrieval.Max marginal relevance selects for relevance and diversity among the retrieved documents.Documents can be filtered during retrieval using metadata filters.# MultiQueryRetrieverimport loggingfrom langchain.chat_models import ChatOpenAIfrom langchain.retrievers.multi_query import | https://python.langchain.com/docs/use_cases/question_answering/ |
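The advanced-retrieval options listed above mention max marginal relevance and metadata filters without showing code. A minimal, hedged sketch of the MMR option, assuming the Chroma vectorstore built in the quickstart exposes `max_marginal_relevance_search` and an `as_retriever(search_type="mmr")` mode (both present at the time of writing):

```python
# Max marginal relevance: fetch a larger candidate pool (fetch_k), then keep
# k splits that are relevant to the question *and* diverse relative to each other.
mmr_docs = vectorstore.max_marginal_relevance_search(question, k=4, fetch_k=20)
len(mmr_docs)

# The same behavior through the retriever interface, usable in any of the chains below.
mmr_retriever = vectorstore.as_retriever(search_type="mmr")
mmr_docs = mmr_retriever.get_relevant_documents(question)
```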
83a1a206232c-4 | loggingfrom langchain.chat_models import ChatOpenAIfrom langchain.retrievers.multi_query import MultiQueryRetrieverlogging.basicConfig()logging.getLogger('langchain.retrievers.multi_query').setLevel(logging.INFO)retriever_from_llm = MultiQueryRetriever.from_llm(retriever=vectorstore.as_retriever(), llm=ChatOpenAI(temperature=0))unique_docs = retriever_from_llm.get_relevant_documents(query=question)len(unique_docs)INFO:langchain.retrievers.multi_query:Generated queries: ['1. How can Task Decomposition be approached?', '2. What are the different methods for Task Decomposition?', '3. What are the various approaches to decomposing tasks?']53. QA​3.1 Getting started​Distill the retrieved documents into an answer using an LLM (e.g., gpt-3.5-turbo) with RetrievalQA chain.from langchain.chat_models import ChatOpenAIllm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)from langchain.chains import RetrievalQAqa_chain = RetrievalQA.from_chain_type(llm,retriever=vectorstore.as_retriever())qa_chain({"query": question}){'query': 'What are the approaches to Task Decomposition?', 'result': 'The approaches to task decomposition include:\n\n1. Simple prompting: This approach involves using simple prompts or questions to guide the agent in breaking down a task into smaller subgoals. For example, the agent can be prompted with "Steps for XYZ" and asked to list the subgoals for achieving XYZ.\n\n2. | https://python.langchain.com/docs/use_cases/question_answering/ |
83a1a206232c-5 | "Steps for XYZ" and asked to list the subgoals for achieving XYZ.\n\n2. Task-specific instructions: In this approach, task-specific instructions are provided to the agent to guide the decomposition process. For example, if the task is to write a novel, the agent can be instructed to "Write a story outline" as a subgoal.\n\n3. Human inputs: This approach involves incorporating human inputs in the task decomposition process. Humans can provide guidance, feedback, and suggestions to help the agent break down complex tasks into manageable subgoals.\n\nThese approaches aim to enable efficient handling of complex tasks by breaking them down into smaller, more manageable parts.'}3.2 Going Deeper​3.2.1 Integrations​LLMsBrowse the > 55 LLM integrations here.See further documentation on LLMs here. 3.2.2 Running LLMs locally​The popularity of PrivateGPT and GPT4All underscore the importance of running LLMs locally.LangChain has integrations with many open source LLMs that can be run locally.Using GPT4All is as simple as downloading the binary and then:from langchain.llms import GPT4Allfrom langchain.chains import RetrievalQAllm = GPT4All(model="/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin",max_tokens=2048)qa_chain = RetrievalQA.from_chain_type(llm,retriever=vectorstore.as_retriever())qa_chain({"query": question}){'query': 'What are the approaches to Task Decomposition?', 'result': ' There are three main approaches to task decomposition: (1) using language models like GPT-3 for simple prompting such as "Steps for XYZ.\\n1.", (2) using task-specific instructions, and | https://python.langchain.com/docs/use_cases/question_answering/ |
83a1a206232c-6 | such as "Steps for XYZ.\\n1.", (2) using task-specific instructions, and (3) with human inputs.'}3.2.2 Customizing the prompt​The prompt in RetrievalQA chain can be easily customized.# Build promptfrom langchain.prompts import PromptTemplatetemplate = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. Use three sentences maximum and keep the answer as concise as possible. Always say "thanks for asking!" at the end of the answer. {context}Question: {question}Helpful Answer:"""QA_CHAIN_PROMPT = PromptTemplate(input_variables=["context", "question"],template=template,)# Run chainfrom langchain.chains import RetrievalQAllm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)qa_chain = RetrievalQA.from_chain_type(llm, retriever=vectorstore.as_retriever(), chain_type_kwargs={"prompt": QA_CHAIN_PROMPT})result = qa_chain({"query": question})result["result"]'The approaches to Task Decomposition are (1) using simple prompting by LLM, (2) using task-specific instructions, and (3) with human inputs. Thanks for asking!'3.2.3 Returning source documents​The full set of retrieved documents used for answer distillation can be returned using return_source_documents=True.from langchain.chains import | https://python.langchain.com/docs/use_cases/question_answering/ |
83a1a206232c-7 | documents used for answer distillation can be returned using return_source_documents=True.from langchain.chains import RetrievalQAqa_chain = RetrievalQA.from_chain_type(llm,retriever=vectorstore.as_retriever(), return_source_documents=True)result = qa_chain({"query": question})print(len(result['source_documents']))result['source_documents'][0]4Document(page_content='Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': "LLM Powered Autonomous Agents | Lil'Log", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'})3.2.4 Citations​Answer citations can be returned using RetrievalQAWithSourcesChain.from langchain.chains import RetrievalQAWithSourcesChainqa_chain = RetrievalQAWithSourcesChain.from_chain_type(llm,retriever=vectorstore.as_retriever())result = | https://python.langchain.com/docs/use_cases/question_answering/ |
83a1a206232c-8 | = qa_chain({"question": question})result{'question': 'What are the approaches to Task Decomposition?', 'answer': 'The approaches to Task Decomposition include (1) using LLM with simple prompting, (2) using task-specific instructions, and (3) incorporating human inputs.\n', 'sources': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}3.2.5 Customizing retrieved docs in the LLM prompt​Retrieved documents can be fed to an LLM for answer distillation in a few different ways.stuff, refine, map-reduce, and map-rerank chains for passing documents to an LLM prompt are well summarized here.stuff is commonly used because it simply "stuffs" all retrieved documents into the prompt.The load_qa_chain is an easy way to pass documents to an LLM using these various approaches (e.g., see chain_type).from langchain.chains.question_answering import load_qa_chainchain = load_qa_chain(llm, chain_type="stuff")chain({"input_documents": unique_docs, "question": question},return_only_outputs=True){'output_text': 'The approaches to task decomposition include (1) using simple prompting to break down tasks into subgoals, (2) providing task-specific instructions to guide the decomposition process, and (3) incorporating human inputs for task decomposition.'}We can also pass the chain_type to RetrievalQA.qa_chain = RetrievalQA.from_chain_type(llm,retriever=vectorstore.as_retriever(), chain_type="stuff")result = qa_chain({"query": question})In summary, the user can choose the desired level of abstraction for QA:4. | https://python.langchain.com/docs/use_cases/question_answering/ |
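Besides `stuff`, the chain types named above (refine, map-reduce, map-rerank) can be selected the same way. A hedged sketch of the `map_reduce` variant, reusing `llm`, `unique_docs`, and `question` from the earlier cells; the exact output will differ from the stuffed version:

```python
from langchain.chains.question_answering import load_qa_chain

# map_reduce answers the question against each retrieved document separately
# ("map"), then combines those partial answers into one response ("reduce").
# Useful when stuffing every document would overflow the model's context window.
map_reduce_chain = load_qa_chain(llm, chain_type="map_reduce")
map_reduce_chain(
    {"input_documents": unique_docs, "question": question},
    return_only_outputs=True,
)
```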
83a1a206232c-9 | question})In summary, the user can choose the desired level of abstraction for QA:4. Chat​4.1 Getting started​To keep chat history, first specify a Memory buffer to track the conversation inputs / outputs.from langchain.memory import ConversationBufferMemorymemory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)The ConversationalRetrievalChain uses chat in the Memory buffer. from langchain.chains import ConversationalRetrievalChainretriever=vectorstore.as_retriever()chat = ConversationalRetrievalChain.from_llm(llm,retriever=retriever,memory=memory)result = chat({"question": "What are some of the main ideas in self-reflection?"})result['answer']"Some of the main ideas in self-reflection include:\n1. Iterative improvement: Self-reflection allows autonomous agents to improve by refining past action decisions and correcting mistakes.\n2. Trial and error: Self-reflection is crucial in real-world tasks where trial and error are inevitable.\n3. Two-shot examples: Self-reflection is created by showing pairs of failed trajectories and ideal reflections for guiding future changes in the plan.\n4. Working memory: Reflections are added to the agent's working memory, up to three, to be used as context for querying.\n5. Performance evaluation: Self-reflection involves continuously reviewing and analyzing actions, self-criticizing behavior, and reflecting on past decisions and strategies to refine approaches.\n6. Efficiency: Self-reflection encourages being smart and efficient, aiming to complete tasks in the least number of steps."The Memory buffer has context to resolve "it" ("self-reflection") in the below question.result = chat({"question": "How does the Reflexion paper handle it?"})result['answer']"The Reflexion paper handles self-reflection by showing two-shot examples to the Learning Language Model (LLM). Each example consists of | https://python.langchain.com/docs/use_cases/question_answering/ |
83a1a206232c-10 | by showing two-shot examples to the Learning Language Model (LLM). Each example consists of a failed trajectory and an ideal reflection that guides future changes in the agent's plan. These reflections are then added to the agent's working memory, up to a maximum of three, to be used as context for querying the LLM. This allows the agent to iteratively improve its reasoning skills by refining past action decisions and correcting previous mistakes."4.2 Going deeper: The documentation on ConversationalRetrievalChain offers a few extensions, such as streaming and source documents. | https://python.langchain.com/docs/use_cases/question_answering/ |
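One of those extensions, returning source documents from the conversational chain, can be sketched as follows. This is a hedged example, not the page's code: the `return_source_documents` flag and the `output_key` setting on the memory are parameters I believe the chain accepted at the time, so verify against your LangChain version.

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

# When the chain returns both an answer and sources, the memory needs to know
# which output key to record (assumed requirement; check your version).
memory = ConversationBufferMemory(
    memory_key="chat_history", return_messages=True, output_key="answer"
)
chat = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=vectorstore.as_retriever(),
    memory=memory,
    return_source_documents=True,
)
result = chat({"question": "What are some of the main ideas in self-reflection?"})
result["answer"], result["source_documents"][0]
```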
3e1fce09fdb1-0 | Question answering over group chat messages using Activeloop's DeepLake | 🦜️🔗 Langchain | https://python.langchain.com/docs/use_cases/question_answering/semantic-search-over-chat |
3e1fce09fdb1-1 | In this tutorial, we are going to use LangChain + Activeloop's Deep Lake with GPT-4 to semantically search and ask questions over a group chat. View a working demo here. 1. Install required packages: python3 -m pip install --upgrade langchain 'deeplake[enterprise]' openai tiktoken 2. Add API keys: import osimport getpassfrom langchain.document_loaders import PyPDFLoader, TextLoaderfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import ( RecursiveCharacterTextSplitter, CharacterTextSplitter,)from langchain.vectorstores import DeepLakefrom langchain.chains import ConversationalRetrievalChain, RetrievalQAfrom langchain.chat_models import ChatOpenAIfrom langchain.llms import OpenAIos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")activeloop_token = getpass.getpass("Activeloop Token:")os.environ["ACTIVELOOP_TOKEN"] = activeloop_tokenos.environ["ACTIVELOOP_ORG"] = getpass.getpass("Activeloop Org:")org_id = os.environ["ACTIVELOOP_ORG"]embeddings = OpenAIEmbeddings()dataset_path = "hub://" | https://python.langchain.com/docs/use_cases/question_answering/semantic-search-over-chat |
3e1fce09fdb1-2 | = OpenAIEmbeddings()dataset_path = "hub://" + org_id + "/data"3. Create sample data: You can generate a sample group chat conversation using ChatGPT with this prompt: Generate a group chat conversation with three friends talking about their day, referencing real places and fictional names. Make it funny and as detailed as possible. I've already generated such a chat in messages.txt. We can keep it simple and use this for our example. 4. Ingest chat embeddings: We load the messages from the text file, chunk them, and upload them to the Activeloop vector store. with open("messages.txt") as f: state_of_the_union = f.read()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)pages = text_splitter.split_text(state_of_the_union)text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)texts = text_splitter.create_documents(pages)print(texts)dataset_path = "hub://" + org_id + "/data"embeddings = OpenAIEmbeddings()db = DeepLake.from_documents( texts, embeddings, dataset_path=dataset_path, overwrite=True)Optional: You can also use Deep Lake's Managed Tensor Database as a hosting service and run queries there. In order to do so, it is necessary to specify the runtime parameter as {'tensor_db': True} during the creation of the vector store. This configuration enables the execution of queries on the Managed Tensor Database, rather than on the client side. It should be noted that this functionality is not applicable to datasets stored locally or in-memory. In the event that a vector store has already been created outside of the Managed Tensor Database, it is possible to transfer it to the Managed Tensor Database by following the prescribed steps.# with open("messages.txt") as f:# state_of_the_union = f.read()# text_splitter = | https://python.langchain.com/docs/use_cases/question_answering/semantic-search-over-chat |
3e1fce09fdb1-3 | as f:# state_of_the_union = f.read()# text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)# pages = text_splitter.split_text(state_of_the_union)# text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)# texts = text_splitter.create_documents(pages)# print(texts)# dataset_path = "hub://" + org_id + "/data"# embeddings = OpenAIEmbeddings()# db = DeepLake.from_documents(# texts, embeddings, dataset_path=dataset_path, overwrite=True, runtime="tensor_db"# )5. Ask questions: Now we can ask a question and get an answer back with a semantic search: db = DeepLake(dataset_path=dataset_path, read_only=True, embedding_function=embeddings)retriever = db.as_retriever()retriever.search_kwargs["distance_metric"] = "cos"retriever.search_kwargs["k"] = 4qa = RetrievalQA.from_chain_type( llm=OpenAI(), chain_type="stuff", retriever=retriever, return_source_documents=False)# What was the restaurant the group was talking about called?query = input("Enter query:")# The Hungry Lobster ans = qa({"query": query})print(ans) | https://python.langchain.com/docs/use_cases/question_answering/semantic-search-over-chat |
378e5d7b06d6-0 | Context aware text splitting and QA / Chat | 🦜️🔗 Langchain | https://python.langchain.com/docs/use_cases/question_answering/document-context-aware-QA |
378e5d7b06d6-1 | Text splitting for vector storage often uses sentences or other delimiters to keep related text together. But many documents (such as Markdown files) have structure (headers) that can be explicitly used in splitting. The MarkdownHeaderTextSplitter lets a user split Markdown files based on specified headers. This results in chunks that retain the header(s) they came from in the metadata. This works nicely with SelfQueryRetriever. First, tell the retriever about our splits. Then, query based on the doc structure (e.g., "summarize the doc introduction"). Chunks only from that section of the Document will be filtered and used in chat / Q+A. Let's test this out on an example Notion page! First, I download the page to Markdown as explained here. # Load Notion page as a markdown filefrom langchain.document_loaders import NotionDirectoryLoaderpath = "../Notion_DB/"loader = NotionDirectoryLoader(path)docs = loader.load()md_file = docs[0].page_content# Let's create groups based on the section headers in our pagefrom langchain.text_splitter import MarkdownHeaderTextSplitterheaders_to_split_on = [ ("###", "Section"),]markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)md_header_splits = markdown_splitter.split_text(md_file)Now, perform | https://python.langchain.com/docs/use_cases/question_answering/document-context-aware-QA |
378e5d7b06d6-2 | = markdown_splitter.split_text(md_file)Now, perform text splitting on the header grouped documents. # Define our text splitterfrom langchain.text_splitter import RecursiveCharacterTextSplitterchunk_size = 500chunk_overlap = 0text_splitter = RecursiveCharacterTextSplitter( chunk_size=chunk_size, chunk_overlap=chunk_overlap)all_splits = text_splitter.split_documents(md_header_splits)This sets us up well to perform metadata filtering based on the document structure. Let's bring this all together by building a vectorstore first. pip install chromadb# Build vectorstore and keep the metadatafrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.vectorstores import Chromavectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())Let's create a SelfQueryRetriever that can filter based upon metadata we defined. # Create retrieverfrom langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfo# Define our metadatametadata_field_info = [ AttributeInfo( name="Section", description="Part of the document that the text comes from", type="string or list[string]", ),]document_content_description = "Major sections of the document"# Define self query retrieverllm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)We can see that we can query only for texts in the Introduction of the document!# Testretriever.get_relevant_documents("Summarize the Introduction section of the document") query='Introduction' filter=Comparison(comparator=<Comparator.EQ: | https://python.langchain.com/docs/use_cases/question_answering/document-context-aware-QA |
378e5d7b06d6-3 | the document") query='Introduction' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='Section', value='Introduction') limit=None [Document(page_content='', metadata={'Section': 'Introduction'}), Document(page_content='Q+A systems often use a two-step approach: retrieve relevant text chunks and then synthesize them into an answer. There many ways to approach this. For example, we recently [discussed](https://blog.langchain.dev/auto-evaluation-of-anthropic-100k-context-window/) the Retriever-Less option (at bottom in the below diagram), highlighting the Anthropic 100k context window model. Metadata filtering is an alternative approach that pre-filters chunks based on a user-defined criteria in a VectorDB using', metadata={'Section': 'Introduction'}), Document(page_content='metadata tags prior to semantic search.', metadata={'Section': 'Introduction'})]# Testretriever.get_relevant_documents("Summarize the Introduction section of the document") query='Introduction' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='Section', value='Introduction') limit=None [Document(page_content='', metadata={'Section': 'Introduction'}), Document(page_content='Q+A systems often use a two-step approach: retrieve relevant text chunks and then synthesize them into an answer. There many ways to approach this. For example, we recently | https://python.langchain.com/docs/use_cases/question_answering/document-context-aware-QA |
378e5d7b06d6-4 | We can also look at other parts of the document.retriever.get_relevant_documents("Summarize the Testing section of the document") query='Testing' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='Section', value='Testing') limit=None [Document(page_content='', metadata={'Section': 'Testing'}), Document(page_content='`SelfQueryRetriever` works well in [many cases](https://twitter.com/hwchase17/status/1656791488569954304/photo/1). For example, given [this test case](https://twitter.com/hwchase17/status/1656791488569954304?s=20): \n \nThe query can be nicely broken up into semantic query and metadata filter: \n```python\nsemantic query: "prompt injection"', metadata={'Section': 'Testing'}), | https://python.langchain.com/docs/use_cases/question_answering/document-context-aware-QA |
378e5d7b06d6-5 | query: "prompt injection"', metadata={'Section': 'Testing'}), Document(page_content='Below, we can see detailed results from the app: \n- Kor extraction is above to perform the transformation between query and metadata format ✅\n- Self-querying attempts to filter using the episode ID (`252`) in the query and fails 🚫\n- Baseline returns docs from 3 different episodes (one from `252`), confusing the answer 🚫', metadata={'Section': 'Testing'}), Document(page_content='will use in retrieval [here](https://github.com/langchain-ai/auto-evaluator/blob/main/streamlit/kor_retriever_lex.py).', metadata={'Section': 'Testing'})]Now, we can create chat or Q+A apps that are aware of the explicit document structure. The ability to retain document structure for metadata filtering can be helpful for complicated or longer documents.from langchain.chains import RetrievalQAfrom langchain.chat_models import ChatOpenAIllm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)qa_chain = RetrievalQA.from_chain_type(llm, retriever=retriever)qa_chain.run("Summarize the Testing section of the document") query='Testing' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='Section', value='Testing') limit=None 'The Testing section of the document describes the evaluation of the `SelfQueryRetriever` component in comparison to a baseline model. The evaluation was performed on a test case where the query was broken down into a semantic query and a metadata filter. The results showed that the `SelfQueryRetriever` component was able to perform the transformation between query and metadata format, but failed to filter using the episode ID in the query. The baseline model returned documents | https://python.langchain.com/docs/use_cases/question_answering/document-context-aware-QA |
378e5d7b06d6-6 | metadata format, but failed to filter using the episode ID in the query. The baseline model returned documents from three different episodes, which confused the answer. The `SelfQueryRetriever` component was deemed to work well in many cases and will be used in retrieval.' | https://python.langchain.com/docs/use_cases/question_answering/document-context-aware-QA |
15100381c040-0 | Running LLMs locally | 🦜️🔗 Langchain | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-1 | The popularity of projects like PrivateGPT, llama.cpp, and GPT4All underscores the importance of running LLMs locally. LangChain has integrations with many open source LLMs that can be run locally. For example, here we show how to run GPT4All or Llama-v2 locally (e.g., on your laptop) using local embeddings and a local LLM. Document Loading: First, install packages needed for local embeddings and vector storage. pip install gpt4all pip install chromadbLoad and split an example document. We'll use a blog post on agents as an example.from langchain.document_loaders import WebBaseLoaderloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")data = loader.load()from langchain.text_splitter import RecursiveCharacterTextSplittertext_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)all_splits = text_splitter.split_documents(data)Next, the below steps will download the GPT4All embeddings locally (if you don't already have them).from langchain.vectorstores import Chromafrom langchain.embeddings import GPT4AllEmbeddingsvectorstore = Chroma.from_documents(documents=all_splits, embedding=GPT4AllEmbeddings()) Found model file at | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-2 | embedding=GPT4AllEmbeddings()) Found model file at /Users/rlm/.cache/gpt4all/ggml-all-MiniLM-L6-v2-f16.binTest similarity search is working with our local embeddings.question = "What are the approaches to Task Decomposition?"docs = vectorstore.similarity_search(question)len(docs) 4docs[0] Document(page_content='Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.', metadata={'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent's brain, complemented by several key components:', 'language': 'en', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': "LLM Powered Autonomous Agents | Lil'Log"})Model: Llama-v2: Download a GGML converted model (e.g., here).pip install llama-cpp-pythonTo enable use of GPU on Apple Silicon, follow the steps here to use the Python binding with Metal support. In particular, ensure that conda is using the correct virtual environment that you created (miniforge3). E.g., for me:conda activate | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-3 | enviorment that you created (miniforge3).E.g., for me:conda activate /Users/rlm/miniforge3/envs/llamaWith this confirmed:CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dirfrom langchain.llms import LlamaCppfrom langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerSetting model parameters as noted in the llama.cpp docs.n_gpu_layers = 1 # Metal set to 1 is enough.n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/llama-2-13b-chat.ggmlv3.q4_0.bin", n_gpu_layers=n_gpu_layers, n_batch=n_batch, n_ctx=2048, f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls callback_manager=callback_manager, verbose=True,) llama.cpp: loading model from /Users/rlm/Desktop/Code/llama.cpp/llama-2-13b-chat.ggmlv3.q4_0.bin llama_model_load_internal: format = ggjt v3 (latest) llama_model_load_internal: n_vocab = 32000 llama_model_load_internal: n_ctx = 2048 llama_model_load_internal: n_embd = 5120 | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-4 | llama_model_load_internal: n_embd = 5120 llama_model_load_internal: n_mult = 256 llama_model_load_internal: n_head = 40 llama_model_load_internal: n_layer = 40 llama_model_load_internal: n_rot = 128 llama_model_load_internal: freq_base = 10000.0 llama_model_load_internal: freq_scale = 1 llama_model_load_internal: ftype = 2 (mostly Q4_0) llama_model_load_internal: n_ff = 13824 llama_model_load_internal: model size = 13B llama_model_load_internal: ggml ctx size = 0.09 MB llama_model_load_internal: mem required = 8819.71 MB (+ 1608.00 MB per state) llama_new_context_with_model: kv self size = 1600.00 MB ggml_metal_init: allocating ggml_metal_init: using MPS ggml_metal_init: loading '/Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/ggml-metal.metal' ggml_metal_init: loaded kernel_add 0x76add7460 ggml_metal_init: loaded kernel_mul 0x76add5090 | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-5 | 0x76add5090 ggml_metal_init: loaded kernel_mul_row 0x76addae00 ggml_metal_init: loaded kernel_scale 0x76adb2940 ggml_metal_init: loaded kernel_silu 0x76adb8610 ggml_metal_init: loaded kernel_relu 0x76addb700 ggml_metal_init: loaded kernel_gelu 0x76addc100 ggml_metal_init: loaded kernel_soft_max 0x76addcb80 ggml_metal_init: loaded kernel_diag_mask_inf 0x76addd600 ggml_metal_init: loaded kernel_get_rows_f16 0x295f16380 ggml_metal_init: loaded kernel_get_rows_q4_0 | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-6 | 0x295f165e0 ggml_metal_init: loaded kernel_get_rows_q4_1 0x295f16840 ggml_metal_init: loaded kernel_get_rows_q2_K 0x295f16aa0 ggml_metal_init: loaded kernel_get_rows_q3_K 0x295f16d00 ggml_metal_init: loaded kernel_get_rows_q4_K 0x295f16f60 ggml_metal_init: loaded kernel_get_rows_q5_K 0x295f171c0 ggml_metal_init: loaded kernel_get_rows_q6_K 0x295f17420 ggml_metal_init: loaded kernel_rms_norm 0x295f17680 ggml_metal_init: loaded kernel_norm 0x295f178e0 ggml_metal_init: loaded kernel_mul_mat_f16_f32 0x295f17b40 | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-7 | 0x295f17b40 ggml_metal_init: loaded kernel_mul_mat_q4_0_f32 0x295f17da0 ggml_metal_init: loaded kernel_mul_mat_q4_1_f32 0x295f18000 ggml_metal_init: loaded kernel_mul_mat_q2_K_f32 0x7962b9900 ggml_metal_init: loaded kernel_mul_mat_q3_K_f32 0x7962bf5f0 ggml_metal_init: loaded kernel_mul_mat_q4_K_f32 0x7962bc630 ggml_metal_init: loaded kernel_mul_mat_q5_K_f32 0x142045960 ggml_metal_init: loaded kernel_mul_mat_q6_K_f32 0x7962ba2b0 ggml_metal_init: loaded kernel_rope 0x7962c35f0 ggml_metal_init: loaded kernel_alibi_f32 0x7962c30b0 | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-8 | 0x7962c30b0 ggml_metal_init: loaded kernel_cpy_f32_f16 0x7962c15b0 ggml_metal_init: loaded kernel_cpy_f32_f32 0x7962beb10 ggml_metal_init: loaded kernel_cpy_f16_f16 0x7962bf060 ggml_metal_init: recommendedMaxWorkingSetSize = 21845.34 MB ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: maxTransferRate = built-in GPU ggml_metal_add_buffer: allocated 'data ' buffer, size = 6984.06 MB, (35852.94 / 21845.34), warning: current allocated size is greater than the recommended max working set size ggml_metal_add_buffer: allocated 'eval ' buffer, size = 1026.00 MB, (36878.94 / 21845.34), warning: current allocated size is greater than the recommended max working set size ggml_metal_add_buffer: allocated 'kv ' buffer, size = 1602.00 MB, (38480.94 / | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-9 | ' buffer, size = 1602.00 MB, (38480.94 / 21845.34), warning: current allocated size is greater than the recommended max working set size ggml_metal_add_buffer: allocated 'scr0 ' buffer, size = 298.00 MB, (38778.94 / 21845.34), warning: current allocated size is greater than the recommended max working set size AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | ggml_metal_add_buffer: allocated 'scr1 ' buffer, size = 512.00 MB, (39290.94 / 21845.34), warning: current allocated size is greater than the recommended max working set sizeNote that these indicate that Metal was enabled properly:ggml_metal_init: allocatingggml_metal_init: using MPSprompt = """Question: A rap battle between Stephen Colbert and John Oliver"""llm(prompt) Llama.generate: prefix-match hit Setting: The Late Show with Stephen Colbert. The studio audience is filled with fans of both comedians, and the energy is electric. The two comedians are seated at a table, ready to begin their epic rap battle. Stephen Colbert: (smirking) Oh, you think you can take me | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-10 | Stephen Colbert: (smirking) Oh, you think you can take me down, John? You're just a Brit with a funny accent, and I'm the king of comedy! John Oliver: (grinning) Oh, you think you're tough, Stephen? You're just a has-been from South Carolina, and I'm the future of comedy! The battle begins, with each comedian delivering clever rhymes and witty insults. Here are a few lines that might be included: Stephen Colbert: (rapping) You may have a big brain, John, but you can't touch my charm / I've got the audience in stitches, while you're just a blemish on the screen / Your accent is so thick, it's like trying to hear a speech through a mouthful of marshmallows / You may have llama_print_timings: load time = 2201.54 ms llama_print_timings: sample time = 182.54 ms / 256 runs ( 0.71 ms per token, 1402.41 tokens per second) llama_print_timings: prompt eval time = 0.00 ms / 1 tokens ( 0.00 ms per token, inf tokens per second) llama_print_timings: eval time = 8484.62 ms / 256 runs ( 33.14 ms per token, 30.17 tokens per second) llama_print_timings: total time = 9000.62 ms | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-11 | total time = 9000.62 ms "\nSetting: The Late Show with Stephen Colbert. The studio audience is filled with fans of both comedians, and the energy is electric. The two comedians are seated at a table, ready to begin their epic rap battle.\n\nStephen Colbert: (smirking) Oh, you think you can take me down, John? You're just a Brit with a funny accent, and I'm the king of comedy!\nJohn Oliver: (grinning) Oh, you think you're tough, Stephen? You're just a has-been from South Carolina, and I'm the future of comedy!\nThe battle begins, with each comedian delivering clever rhymes and witty insults. Here are a few lines that might be included:\nStephen Colbert: (rapping) You may have a big brain, John, but you can't touch my charm / I've got the audience in stitches, while you're just a blemish on the screen / Your accent is so thick, it's like trying to hear a speech through a mouthful of marshmallows / You may have"GPT4All: Similarly, we can use GPT4All. Download the GPT4All model binary. The Model Explorer on the GPT4All website is a great way to choose and download a model. Then, specify the path that you downloaded it to. E.g., for me, the model lives here:/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.binfrom langchain.llms import GPT4Allllm = GPT4All( model="/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin", max_tokens=2048,) Found model | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-12 | max_tokens=2048,) Found model file at /Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin objc[47842]: Class GGMLMetalClass is implemented in both /Users/rlm/anaconda3/envs/lcn2/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libreplit-mainline-metal.dylib (0x29f48c208) and /Users/rlm/anaconda3/envs/lcn2/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libllamamodel-mainline-metal.dylib (0x29f970208). One of the two will be used. Which one is undefined. llama.cpp: using Metal llama.cpp: loading model from /Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin llama_model_load_internal: format = ggjt v3 (latest) llama_model_load_internal: n_vocab = 32001 llama_model_load_internal: n_ctx = 2048 llama_model_load_internal: n_embd = 5120 llama_model_load_internal: n_mult = 256 llama_model_load_internal: n_head = 40 llama_model_load_internal: n_layer = 40 llama_model_load_internal: n_rot = 128 llama_model_load_internal: ftype = 2 (mostly Q4_0) llama_model_load_internal: | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-13 | = 2 (mostly Q4_0) llama_model_load_internal: n_ff = 13824 llama_model_load_internal: n_parts = 1 llama_model_load_internal: model size = 13B llama_model_load_internal: ggml ctx size = 0.09 MB llama_model_load_internal: mem required = 9031.71 MB (+ 1608.00 MB per state) llama_new_context_with_model: kv self size = 1600.00 MB ggml_metal_init: allocating ggml_metal_init: using MPS ggml_metal_init: loading '/Users/rlm/anaconda3/envs/lcn2/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/ggml-metal.metal' ggml_metal_init: loaded kernel_add 0x115fcbfb0 ggml_metal_init: loaded kernel_mul 0x115fcd4a0 ggml_metal_init: loaded kernel_mul_row 0x115fce850 ggml_metal_init: loaded kernel_scale 0x115fcd700 ggml_metal_init: loaded kernel_silu | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-14 | ggml_metal_init: loaded kernel_silu 0x115fcd960 ggml_metal_init: loaded kernel_relu 0x115fcfd50 ggml_metal_init: loaded kernel_gelu 0x115fd03c0 ggml_metal_init: loaded kernel_soft_max 0x115fcf640 ggml_metal_init: loaded kernel_diag_mask_inf 0x115fd07f0 ggml_metal_init: loaded kernel_get_rows_f16 0x1147b2450 ggml_metal_init: loaded kernel_get_rows_q4_0 0x11479d1d0 ggml_metal_init: loaded kernel_get_rows_q4_1 0x1147ad1f0 ggml_metal_init: loaded kernel_get_rows_q2_k 0x1147aef50 ggml_metal_init: loaded | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-15 | 0x1147aef50 ggml_metal_init: loaded kernel_get_rows_q3_k 0x1147af1b0 ggml_metal_init: loaded kernel_get_rows_q4_k 0x1147af410 ggml_metal_init: loaded kernel_get_rows_q5_k 0x1147affa0 ggml_metal_init: loaded kernel_get_rows_q6_k 0x1147b0200 ggml_metal_init: loaded kernel_rms_norm 0x1147b0460 ggml_metal_init: loaded kernel_norm 0x1147bfc90 ggml_metal_init: loaded kernel_mul_mat_f16_f32 0x1147c0230 ggml_metal_init: loaded kernel_mul_mat_q4_0_f32 0x1147c0490 ggml_metal_init: loaded kernel_mul_mat_q4_1_f32 0x1147c06f0 ggml_metal_init: loaded | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-16 | 0x1147c06f0 ggml_metal_init: loaded kernel_mul_mat_q2_k_f32 0x1147c0950 ggml_metal_init: loaded kernel_mul_mat_q3_k_f32 0x1147c0bb0 ggml_metal_init: loaded kernel_mul_mat_q4_k_f32 0x1147c0e10 ggml_metal_init: loaded kernel_mul_mat_q5_k_f32 0x1147c1070 ggml_metal_init: loaded kernel_mul_mat_q6_k_f32 0x1147c13d0 ggml_metal_init: loaded kernel_rope 0x1147c1a00 ggml_metal_init: loaded kernel_alibi_f32 0x1147c2120 ggml_metal_init: loaded kernel_cpy_f32_f16 0x115fd1690 ggml_metal_init: loaded kernel_cpy_f32_f32 0x115fd1c60 | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-17 | 0x115fd1c60 ggml_metal_init: loaded kernel_cpy_f16_f16 0x115fd2d40 ggml_metal_init: recommendedMaxWorkingSetSize = 21845.34 MB ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: maxTransferRate = built-in GPU ggml_metal_add_buffer: allocated 'data ' buffer, size = 6984.06 MB, ( 6984.45 / 21845.34) ggml_metal_add_buffer: allocated 'eval ' buffer, size = 1024.00 MB, ( 8008.45 / 21845.34) ggml_metal_add_buffer: allocated 'kv ' buffer, size = 1602.00 MB, ( 9610.45 / 21845.34) ggml_metal_add_buffer: allocated 'scr0 ' buffer, size = 512.00 MB, (10122.45 / 21845.34) ggml_metal_add_buffer: allocated 'scr1 ' buffer, size = 512.00 MB, (10634.45 / 21845.34)LLMChain​Run an | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-18 | / 21845.34)LLMChain: Run an LLMChain (see here) with either model by passing in the retrieved docs and a simple prompt. It formats the prompt template using the input key values provided and passes the formatted string to GPT4All, Llama-v2, or another specified LLM. In this case, the list of retrieved documents (docs) above is passed in as {docs}.from langchain import PromptTemplate, LLMChain# Promptprompt = PromptTemplate.from_template( "Summarize the main themes in these retrieved docs: {docs}")# Chainllm_chain = LLMChain(llm=llm, prompt=prompt)# Runquestion = "What are the approaches to Task Decomposition?"docs = vectorstore.similarity_search(question)result = llm_chain(docs)# Outputresult["text"] Llama.generate: prefix-match hit Based on the retrieved documents, the main themes are: 1. Task decomposition: The ability to break down complex tasks into smaller subtasks, which can be handled by an LLM or other components of the agent system. 2. LLM as the core controller: The use of a large language model (LLM) as the primary controller of an autonomous agent system, complemented by other key components such as a knowledge graph and a planner. 3. Potentiality of LLM: The idea that LLMs have the potential to be used as powerful general problem solvers, not just for generating well-written copies but also for solving complex tasks and achieving human-like intelligence. 4. Challenges in long-term planning: The challenges in planning over a lengthy history and effectively exploring the solution space, which are important limitations of current LLM-based autonomous agent systems. llama_print_timings: | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-19 | llama_print_timings: load time = 1191.88 ms llama_print_timings: sample time = 134.47 ms / 193 runs ( 0.70 ms per token, 1435.25 tokens per second) llama_print_timings: prompt eval time = 39470.18 ms / 1055 tokens ( 37.41 ms per token, 26.73 tokens per second) llama_print_timings: eval time = 8090.85 ms / 192 runs ( 42.14 ms per token, 23.73 tokens per second) llama_print_timings: total time = 47943.12 ms '\nBased on the retrieved documents, the main themes are:\n1. Task decomposition: The ability to break down complex tasks into smaller subtasks, which can be handled by an LLM or other components of the agent system.\n2. LLM as the core controller: The use of a large language model (LLM) as the primary controller of an autonomous agent system, complemented by other key components such as a knowledge graph and a planner.\n3. Potentiality of LLM: The idea that LLMs have the potential to be used as powerful general problem solvers, not just for generating well-written copies but also for solving complex tasks and achieving human-like intelligence.\n4. Challenges in long-term planning: The challenges in planning over a lengthy history and effectively exploring the solution space, which are important limitations of current LLM-based autonomous agent systems.'QA Chain​We can use a QA chain to handle | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-20 | autonomous agent systems.'QA Chain​We can use a QA chain to handle our question above.chain_type="stuff" (see here) means that all the docs will be added (stuffed) into a prompt.from langchain.chains.question_answering import load_qa_chain# Prompttemplate = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. Use three sentences maximum and keep the answer as concise as possible. Always say "thanks for asking!" at the end of the answer. {context}Question: {question}Helpful Answer:"""QA_CHAIN_PROMPT = PromptTemplate( input_variables=["context", "question"], template=template,)# Chainchain = load_qa_chain(llm, chain_type="stuff", prompt=QA_CHAIN_PROMPT)# Runchain({"input_documents": docs, "question": question}, return_only_outputs=True) Llama.generate: prefix-match hit Hi there! There are three main approaches to task decomposition. One is using LLM with simple prompting like "Steps for XYZ." or "What are the subgoals for achieving XYZ?" Another approach is by using task-specific instructions, such as "Write a story outline" for writing a novel. Finally, task decomposition can also be done with human inputs. Thanks for asking! llama_print_timings: load time = 1191.88 ms llama_print_timings: sample time = 61.21 ms / 85 runs ( 0.72 ms per token, 1388.64 tokens per second) llama_print_timings: prompt eval time = | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-21 | tokens per second) llama_print_timings: prompt eval time = 8014.11 ms / 267 tokens ( 30.02 ms per token, 33.32 tokens per second) llama_print_timings: eval time = 2908.17 ms / 84 runs ( 34.62 ms per token, 28.88 tokens per second) llama_print_timings: total time = 11096.23 ms {'output_text': ' Hi there! There are three main approaches to task decomposition. One is using LLM with simple prompting like "Steps for XYZ." or "What are the subgoals for achieving XYZ?" Another approach is by using task-specific instructions, such as "Write a story outline" for writing a novel. Finally, task decomposition can also be done with human inputs. Thanks for asking!'}RetrievalQA​For an even simpler flow, use RetrievalQA.This will use a QA default prompt (shown here) and will retrieve from the vectorDB.But, you can still pass in a prompt, as before, if desired.from langchain.chains import RetrievalQAqa_chain = RetrievalQA.from_chain_type( llm, retriever=vectorstore.as_retriever(), chain_type_kwargs={"prompt": QA_CHAIN_PROMPT},)qa_chain({"query": question}) Llama.generate: prefix-match hit The three approaches to Task decomposition are LLMs with simple prompting, task-specific instructions, or human inputs. Thanks for asking! llama_print_timings: load time = 1191.88 ms | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa |
15100381c040-22 | load time = 1191.88 ms llama_print_timings: sample time = 22.78 ms / 31 runs ( 0.73 ms per token, 1360.66 tokens per second) llama_print_timings: prompt eval time = 0.00 ms / 1 tokens ( 0.00 ms per token, inf tokens per second) llama_print_timings: eval time = 1320.23 ms / 31 runs ( 42.59 ms per token, 23.48 tokens per second) llama_print_timings: total time = 1387.70 ms {'query': 'What are the approaches to Task Decomposition?', 'result': ' \nThe three approaches to Task decomposition are LLMs with simple prompting, task-specific instructions, or human inputs. Thanks for asking!'} | https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa
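The rows above move from a hand-rolled QA chain (load_qa_chain with chain_type="stuff") to the more compact RetrievalQA wrapper. As a consolidated, hedged sketch of that final flow, the snippet below assumes the local llm (e.g. the Llama 2 model loaded earlier on this page) and the populated vectorstore built during document loading already exist; the prompt text is illustrative rather than the library default.

```python
# Minimal sketch of the RetrievalQA flow shown above.
# Assumes `llm` and `vectorstore` were created in the earlier loading/embedding steps.
from langchain.prompts import PromptTemplate
from langchain.chains import RetrievalQA

template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know.
{context}
Question: {question}
Helpful Answer:"""
qa_prompt = PromptTemplate(input_variables=["context", "question"], template=template)

qa_chain = RetrievalQA.from_chain_type(
    llm,
    chain_type="stuff",  # all retrieved documents are stuffed into a single prompt
    retriever=vectorstore.as_retriever(),
    chain_type_kwargs={"prompt": qa_prompt},
)
result = qa_chain({"query": "What are the approaches to Task Decomposition?"})
print(result["result"])
```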
617da6df39a0-0 | Chatbots | 🦜🔗 Langchain
Since language models are good at producing text, they are ideal for creating chatbots.
Aside from the base prompts/LLMs, an important concept to know for Chatbots is memory.
Most chat-based applications rely on remembering what happened in previous interactions, which memory is designed to help with. The following resources exist: ChatGPT Clone: A notebook walking through how to recreate a ChatGPT-like experience with LangChain. Conversation Agent: A notebook walking through how to create an agent optimized for conversation. Additional related resources include: Memory concepts and examples: Explanation of key concepts related to memory along with how-to's and examples. More end-to-end examples include: Voice Assistant: A notebook walking through how to create a voice assistant using LangChain. | https://python.langchain.com/docs/use_cases/chatbots/
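To make the memory concept above concrete, here is a minimal, hedged sketch of a chatbot that replays earlier turns with ConversationBufferMemory; the model choice and example inputs are placeholders rather than a prescribed setup, and an OpenAI API key is assumed to be configured.

```python
# Minimal chatbot-with-memory sketch (assumes OPENAI_API_KEY is set in the environment).
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),  # keeps every previous human/AI exchange
)

conversation.predict(input="Hi, my name is Ada.")
# The stored first exchange lets the model answer a follow-up that depends on it.
print(conversation.predict(input="What is my name?"))
```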
12d65597711a-0 | Voice Assistant | 🦜🔗 Langchain | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant
12d65597711a-1 | Voice Assistant: This chain creates a clone of ChatGPT with a few modifications to make it a voice assistant. | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant
12d65597711a-2 | It uses the pyttsx3 and speech_recognition libraries to convert text to speech and speech to text respectively. The prompt template is also changed to make it more suitable for voice assistant use.from langchain import OpenAI, LLMChain, PromptTemplatefrom langchain.memory import ConversationBufferWindowMemorytemplate = """Assistant is a large language model trained by OpenAI.Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.{history}Human: {human_input}Assistant:"""prompt = PromptTemplate(input_variables=["history", "human_input"], template=template)chatgpt_chain = LLMChain( | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-3 | "human_input"], template=template)chatgpt_chain = LLMChain( llm=OpenAI(temperature=0), prompt=prompt, verbose=True, memory=ConversationBufferWindowMemory(k=2),)import speech_recognition as srimport pyttsx3engine = pyttsx3.init()def listen(): r = sr.Recognizer() with sr.Microphone() as source: print("Calibrating...") r.adjust_for_ambient_noise(source, duration=5) # optional parameters to adjust microphone sensitivity # r.energy_threshold = 200 # r.pause_threshold=0.5 print("Okay, go!") while 1: text = "" print("listening now...") try: audio = r.listen(source, timeout=5, phrase_time_limit=30) print("Recognizing...") # whisper model options are found here: https://github.com/openai/whisper#available-models-and-languages # other speech recognition models are also available. text = r.recognize_whisper( | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-4 | audio, model="medium.en", show_dict=True, )["text"] except Exception as e: unrecognized_speech_text = ( f"Sorry, I didn't catch that. Exception was: {e}s" ) text = unrecognized_speech_text print(text) response_text = chatgpt_chain.predict(human_input=text) print(response_text) engine.say(response_text) engine.runAndWait()listen(None) Calibrating... Okay, go! listening now... Recognizing... C:\Users\jaden\AppData\Roaming\Python\Python310\site-packages\tqdm\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html from .autonotebook import tqdm as notebook_tqdm Hello, Assistant. | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-5 | .autonotebook import tqdm as notebook_tqdm Hello, Assistant. What's going on? > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Hello, Assistant. What's going on? | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-6 | Human: Hello, Assistant. What's going on? Assistant: > Finished chain. Hi there! It's great to hear from you. I'm doing well. How can I help you today? listening now... Recognizing... That's cool. Isn't that neat? Yeah, I'm doing great. > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-7 | from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Hello, Assistant. What's going on? AI: Hi there! It's great to hear from you. I'm doing well. How can I help you today? Human: That's cool. Isn't that neat? Yeah, I'm doing great. Assistant: > Finished chain. That's great to hear! What can I do for you today? listening now... Recognizing... Thank you. > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-8 | in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Hello, Assistant. What's going on? AI: Hi there! It's great to hear from you. I'm doing well. How can I help you today? Human: That's cool. Isn't that neat? Yeah, I'm doing great. AI: That's great to hear! What can I do for you today? Human: Thank you. Assistant: > Finished chain. You're welcome! Is there anything else I can help you with? listening now... Recognizing... I'd like to learn more about neural networks. > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-9 | to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: That's cool. Isn't that neat? Yeah, I'm doing great. AI: That's great to hear! What can I do for you today? Human: Thank you. AI: You're welcome! Is there anything else I can help you with? Human: I'd like to learn more about neural networks. Assistant: > Finished chain. Sure! Neural networks are a | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-10 | > Finished chain. Sure! Neural networks are a type of artificial intelligence that use a network of interconnected nodes to process data and make decisions. They are used in a variety of applications, from image recognition to natural language processing. Neural networks are often used to solve complex problems that are too difficult for traditional algorithms. listening now... Recognizing... Tell me a fun fact about neural networks. > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-11 | human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Thank you. AI: You're welcome! Is there anything else I can help you with? Human: I'd like to learn more about neural networks. AI: Sure! Neural networks are a type of artificial intelligence that use a network of interconnected nodes to process data and make decisions. They are used in a variety of applications, from image recognition to natural language processing. Neural networks are often used to solve complex problems that are too difficult for traditional algorithms. Human: Tell me a fun fact about neural networks. Assistant: > Finished chain. Neural networks are inspired by the way the human brain works. They are composed of interconnected nodes that process data and make decisions, just like neurons in the brain. Neural networks can learn from their mistakes and improve their performance over time, just like humans do. listening now... Recognizing... Tell me about a brand new discovered bird species. > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-12 | to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: I'd like to learn more about neural networks. AI: Sure! Neural networks are a type of artificial intelligence that use a network of interconnected nodes to process data and make decisions. They are used in a variety of applications, from image recognition to natural language processing. Neural networks are often used to solve complex problems that are too difficult for traditional algorithms. Human: Tell me a fun fact about neural networks. AI: Neural networks are inspired by the way the human brain works. They are composed of interconnected nodes that process data and make decisions, just like neurons in the brain. Neural networks can | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-13 | of interconnected nodes that process data and make decisions, just like neurons in the brain. Neural networks can learn from their mistakes and improve their performance over time, just like humans do. Human: Tell me about a brand new discovered bird species. Assistant: > Finished chain. A new species of bird was recently discovered in the Amazon rainforest. The species, called the Spix's Macaw, is a small, blue parrot that is believed to be extinct in the wild. It is the first new species of bird to be discovered in the Amazon in over 100 years. listening now... Recognizing... Tell me a children's story about the importance of honesty and trust. > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-14 | Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Tell me a fun fact about neural networks. AI: Neural networks are inspired by the way the human brain works. They are composed of interconnected nodes that process data and make decisions, just like neurons in the brain. Neural networks can learn from their mistakes and improve their performance over time, just like humans do. Human: Tell me about a brand new discovered bird species. AI: A new species of bird was recently discovered in the Amazon rainforest. The species, called the Spix's Macaw, is a small, blue parrot that is believed to be extinct in the wild. It is the first new species of bird to be discovered in the Amazon in over 100 years. Human: Tell me a children's story about the importance of honesty and trust. Assistant: > Finished chain. Once upon a time, there was a young boy named Jack who lived in a small village. Jack was always honest and trustworthy, and his friends and family knew they could always count on him. One day, Jack was walking through the forest when he stumbled upon a magical tree. The tree told Jack that if he was honest and | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-15 | the forest when he stumbled upon a magical tree. The tree told Jack that if he was honest and trustworthy, he would be rewarded with a special gift. Jack was so excited, and he promised to always be honest and trustworthy. Sure enough, the tree rewarded Jack with a beautiful golden apple. From that day forward, Jack was always honest and trustworthy, and he was rewarded with many more magical gifts. The moral of the story is that honesty and trust are the most important things in life. listening now... Recognizing... Wow, Assistant, that was a really good story. Congratulations! > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-16 | specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Tell me about a brand new discovered bird species. AI: A new species of bird was recently discovered in the Amazon rainforest. The species, called the Spix's Macaw, is a small, blue parrot that is believed to be extinct in the wild. It is the first new species of bird to be discovered in the Amazon in over 100 years. Human: Tell me a children's story about the importance of honesty and trust. AI: Once upon a time, there was a young boy named Jack who lived in a small village. Jack was always honest and trustworthy, and his friends and family knew they could always count on him. One day, Jack was walking through the forest when he stumbled upon a magical tree. The tree told Jack that if he was honest and trustworthy, he would be rewarded with a special gift. Jack was so excited, and he promised to always be honest and trustworthy. Sure enough, the tree rewarded Jack with a beautiful golden apple. From that day forward, Jack was always honest and trustworthy, and he was rewarded with many more magical gifts. The moral of the story is that honesty and trust are the most important things in life. Human: Wow, Assistant, that was a really good story. Congratulations! Assistant: > Finished chain. Thank you! I'm | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-17 | > Finished chain. Thank you! I'm glad you enjoyed it. listening now... Recognizing... Thank you. > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-18 | the audio channel since it takes time to listen to a response. Human: Tell me a children's story about the importance of honesty and trust. AI: Once upon a time, there was a young boy named Jack who lived in a small village. Jack was always honest and trustworthy, and his friends and family knew they could always count on him. One day, Jack was walking through the forest when he stumbled upon a magical tree. The tree told Jack that if he was honest and trustworthy, he would be rewarded with a special gift. Jack was so excited, and he promised to always be honest and trustworthy. Sure enough, the tree rewarded Jack with a beautiful golden apple. From that day forward, Jack was always honest and trustworthy, and he was rewarded with many more magical gifts. The moral of the story is that honesty and trust are the most important things in life. Human: Wow, Assistant, that was a really good story. Congratulations! AI: Thank you! I'm glad you enjoyed it. Human: Thank you. Assistant: > Finished chain. You're welcome! listening now... Recognizing... Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-19 | range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Wow, Assistant, that was a really good story. Congratulations! AI: Thank you! I'm glad you enjoyed it. Human: Thank you. AI: You're welcome! Human: Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-20 | know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Assistant: > Finished chain. Yes, there are several online brands that offer photo editing and other creative tools without the need to download any software. Adobe Photoshop Express, Pixlr, and Fotor are some of the most popular online photo editing tools. Freq is an online music production platform that allows users to create and share music without downloading any software. listening now... Recognizing... Our whole process of awesome is free. > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-21 | help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Thank you. AI: You're welcome! Human: Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? AI: Yes, there are several online brands that offer photo editing and other creative tools without the need to download any software. Adobe Photoshop Express, Pixlr, and Fotor are some of the most popular online photo editing tools. Freq is an online music production platform that allows users to create and share music without downloading any software. Human: Our whole process of awesome is free. Assistant: > Finished chain. That's great! It's always nice to have access to free tools and resources. listening now... Recognizing... No, I meant to ask, are those options that you mentioned free? No, I meant to ask, are those options that you mentioned free? > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-22 | model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? AI: Yes, there are several online brands that offer photo editing and other creative tools without the need to download any | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-23 | there are several online brands that offer photo editing and other creative tools without the need to download any software. Adobe Photoshop Express, Pixlr, and Fotor are some of the most popular online photo editing tools. Freq is an online music production platform that allows users to create and share music without downloading any software. Human: Our whole process of awesome is free. AI: That's great! It's always nice to have access to free tools and resources. Human: No, I meant to ask, are those options that you mentioned free? No, I meant to ask, are those options that you mentioned free? Assistant: > Finished chain. Yes, the online brands I mentioned are all free to use. Adobe Photoshop Express, Pixlr, and Fotor are all free to use, and Freq is a free music production platform. listening now... --------------------------------------------------------------------------- KeyboardInterrupt Traceback (most recent call last) Cell In[6], line 1 ----> 1 listen(None) Cell In[5], line 20, in listen(command_queue) 18 print('listening now...') 19 try: ---> 20 audio = r.listen(source, timeout=5, phrase_time_limit=30) 21 # audio = r.record(source,duration = 5) 22 print('Recognizing...') File | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-24 | 22 print('Recognizing...') File c:\ProgramData\miniconda3\envs\lang\lib\site-packages\speech_recognition\__init__.py:523, in Recognizer.listen(self, source, timeout, phrase_time_limit, snowboy_configuration) 520 if phrase_time_limit and elapsed_time - phrase_start_time > phrase_time_limit: 521 break --> 523 buffer = source.stream.read(source.CHUNK) 524 if len(buffer) == 0: break # reached end of the stream 525 frames.append(buffer) File c:\ProgramData\miniconda3\envs\lang\lib\site-packages\speech_recognition\__init__.py:199, in Microphone.MicrophoneStream.read(self, size) 198 def read(self, size): --> 199 return self.pyaudio_stream.read(size, exception_on_overflow=False) File c:\ProgramData\miniconda3\envs\lang\lib\site-packages\pyaudio\__init__.py:570, in PyAudio.Stream.read(self, num_frames, exception_on_overflow) 567 if not self._is_input: 568 raise IOError("Not input stream", 569 paCanNotReadFromAnOutputOnlyStream) --> 570 return pa.read_stream(self._stream, num_frames, 571 | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant |
12d65597711a-25 | num_frames, 571 exception_on_overflow) KeyboardInterrupt: | https://python.langchain.com/docs/use_cases/chatbots/voice_assistant
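The voice assistant above wires its LLMChain to ConversationBufferWindowMemory(k=2), which is why each formatted prompt in the transcript only carries the last two exchanges. Below is a small, hedged sketch of that windowing behaviour in isolation; the example turns are invented purely for illustration.

```python
# Sketch of the windowed memory used by the voice assistant above.
from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(k=2)  # remember only the last 2 exchanges
memory.save_context({"input": "turn 1"}, {"output": "reply 1"})
memory.save_context({"input": "turn 2"}, {"output": "reply 2"})
memory.save_context({"input": "turn 3"}, {"output": "reply 3"})

# Only turns 2 and 3 remain; turn 1 has been dropped from the prompt history.
print(memory.load_memory_variables({})["history"])
```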
34aab0164e16-0 | Code Understanding | 🦜🔗 Langchain | https://python.langchain.com/docs/use_cases/code/
34aab0164e16-1 | Code Understanding
Overview: LangChain is a useful tool designed to parse GitHub code repositories. By leveraging VectorStores, Conversational RetrieverChain, and GPT-4, it can answer questions in the context of an entire GitHub repository or generate new code. This documentation page outlines the essential components of the system and guides you through using LangChain for better code comprehension, contextual question answering, and code generation in GitHub repositories.
Conversational Retriever Chain: Conversational RetrieverChain is a retrieval-focused system that interacts with the data stored in a VectorStore. Utilizing advanced techniques like context-aware filtering and ranking, it retrieves the most relevant code snippets and information for a given user query. Conversational RetrieverChain is engineered to deliver high-quality, pertinent results while considering conversation history and context.
LangChain Workflow for Code Understanding and Generation: Index the code base: Clone the target repository, load all files within, chunk the files, and execute the indexing process. Optionally, you can skip this step and use an already indexed dataset. Embedding and Code Store: Code snippets are embedded using a code-aware embedding model and stored in a VectorStore. | https://python.langchain.com/docs/use_cases/code/
34aab0164e16-2 | Query Understanding: GPT-4 processes user queries, grasping the context and extracting relevant details. Construct the Retriever: Conversational RetrieverChain searches the VectorStore to identify the most relevant code snippets for a given query. Build the Conversational Chain: Customize the retriever settings and define any user-defined filters as needed. Ask questions: Define a list of questions to ask about the codebase, and then use the ConversationalRetrievalChain to generate context-aware answers. The LLM (GPT-4) generates comprehensive, context-aware answers based on retrieved code snippets and conversation history. The full tutorial is available below. Twitter the-algorithm codebase analysis with Deep Lake: A notebook walking through how to parse GitHub source code and run conversational queries over it. LangChain codebase analysis with Deep Lake: A notebook walking through how to analyze and do question answering over THIS code base. | https://python.langchain.com/docs/use_cases/code/
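The workflow listed above (index the code base, embed and store, retrieve, converse) can be compressed into a short sketch. The rows that follow give the full Deep Lake tutorial; this version is only a hedged outline that substitutes a local FAISS index for the hosted vector store and assumes the target repository has already been cloned to ./my-repo, a hypothetical path.

```python
# Hedged outline of the code-understanding workflow described above.
# FAISS stands in for the vector store; the tutorial below uses Deep Lake instead.
import os
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

# 1. Index the code base: load every file, chunk it, embed it, store it.
docs = []
for dirpath, _, filenames in os.walk("./my-repo"):  # hypothetical local clone
    for name in filenames:
        try:
            loader = TextLoader(os.path.join(dirpath, name), encoding="utf-8")
            docs.extend(loader.load_and_split())
        except Exception:
            pass  # skip binary or unreadable files
texts = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(docs)
db = FAISS.from_documents(texts, OpenAIEmbeddings(disallowed_special=()))

# 2. Build the conversational chain over the retriever and ask questions.
qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(model_name="gpt-4"), retriever=db.as_retriever()
)
chat_history = []
result = qa({"question": "What does the ranking module do?", "chat_history": chat_history})
print(result["answer"])
```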
5c30ab1c9a73-0 | Analysis of Twitter the-algorithm source code with LangChain, GPT4 and Activeloop's Deep Lake | 🦜🔗 Langchain | https://python.langchain.com/docs/use_cases/code/twitter-the-algorithm-analysis-deeplake
5c30ab1c9a73-1 | Analysis of Twitter the-algorithm source code with LangChain, GPT4 and Activeloop's Deep Lake
In this tutorial, we are going to use Langchain + Activeloop's Deep Lake with GPT4 to analyze the code base of the twitter algorithm. python3 -m pip install --upgrade langchain 'deeplake[enterprise]' openai tiktokenDefine OpenAI embeddings, Deep Lake multi-modal vector store api and authenticate. For full documentation of Deep Lake please follow docs and API reference.Authenticate into Deep Lake if you want to create your own dataset and publish it. You can get an API key from the platformimport osimport getpassfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import DeepLakeos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")activeloop_token = getpass.getpass("Activeloop Token:")os.environ["ACTIVELOOP_TOKEN"] = activeloop_tokenembeddings = OpenAIEmbeddings(disallowed_special=())disallowed_special=() is required to avoid Exception: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte from tiktoken for some repositories1. Index the code base | https://python.langchain.com/docs/use_cases/code/twitter-the-algorithm-analysis-deeplake
5c30ab1c9a73-2 | in position 0: invalid start byte from tiktoken for some repositories1. Index the code base (optional)​You can directly skip this part and directly jump into using already indexed dataset. To begin with, first we will clone the repository, then parse and chunk the code base and use OpenAI indexing.git clone https://github.com/twitter/the-algorithm # replace any repository of your choiceLoad all files inside the repositoryimport osfrom langchain.document_loaders import TextLoaderroot_dir = "./the-algorithm"docs = []for dirpath, dirnames, filenames in os.walk(root_dir): for file in filenames: try: loader = TextLoader(os.path.join(dirpath, file), encoding="utf-8") docs.extend(loader.load_and_split()) except Exception as e: passThen, chunk the filesfrom langchain.text_splitter import CharacterTextSplittertext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_documents(docs)Execute the indexing. This will take about ~4 mins to compute embeddings and upload to Activeloop. You can then publish the dataset to be public.username = "davitbun" # replace with your username from app.activeloop.aidb = DeepLake( dataset_path=f"hub://{username}/twitter-algorithm", embedding_function=embeddings,)db.add_documents(texts)Optional: You can also use Deep Lake's Managed Tensor Database as a hosting service and run queries there. In order to do so, it is necessary to specify the runtime parameter as {'tensor_db': True} during the creation of the vector store. This configuration enables the execution of queries on | https://python.langchain.com/docs/use_cases/code/twitter-the-algorithm-analysis-deeplake |
5c30ab1c9a73-3 | True} during the creation of the vector store. This configuration enables the execution of queries on the Managed Tensor Database, rather than on the client side. It should be noted that this functionality is not applicable to datasets stored locally or in-memory. In the event that a vector store has already been created outside of the Managed Tensor Database, it is possible to transfer it to the Managed Tensor Database by following the prescribed steps.# username = "davitbun" # replace with your username from app.activeloop.ai# db = DeepLake(# dataset_path=f"hub://{username}/twitter-algorithm",# embedding_function=embeddings,# runtime={"tensor_db": True}# )# db.add_documents(texts)2. Question Answering on Twitter algorithm codebase​First load the dataset, construct the retriever, then construct the Conversational Chaindb = DeepLake( dataset_path="hub://davitbun/twitter-algorithm", read_only=True, embedding_function=embeddings,) Deep Lake Dataset in hub://davitbun/twitter-algorithm already exists, loading from the storageretriever = db.as_retriever()retriever.search_kwargs["distance_metric"] = "cos"retriever.search_kwargs["fetch_k"] = 100retriever.search_kwargs["maximal_marginal_relevance"] = Trueretriever.search_kwargs["k"] = 10You can also specify user defined functions using Deep Lake filtersdef filter(x): # filter based on source code if "com.google" in x["text"].data()["value"]: return False # filter based on path e.g. extension metadata = x["metadata"].data()["value"] return "scala" in metadata["source"] or "py" in | https://python.langchain.com/docs/use_cases/code/twitter-the-algorithm-analysis-deeplake |
5c30ab1c9a73-4 | return "scala" in metadata["source"] or "py" in metadata["source"]### turn on below for custom filtering# retriever.search_kwargs['filter'] = filterfrom langchain.chat_models import ChatOpenAIfrom langchain.chains import ConversationalRetrievalChainmodel = ChatOpenAI(model_name="gpt-3.5-turbo") # switch to 'gpt-4'qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)questions = [ "What does favCountParams do?", "is it Likes + Bookmarks, or not clear from the code?", "What are the major negative modifiers that lower your linear ranking parameters?", "How do you get assigned to SimClusters?", "What is needed to migrate from one SimClusters to another SimClusters?", "How much do I get boosted within my cluster?", "How does Heavy ranker work. what are it’s main inputs?", "How can one influence Heavy ranker?", "why threads and long tweets do so well on the platform?", "Are thread and long tweet creators building a following that reacts to only threads?", "Do you need to follow different strategies to get most followers vs to get most likes and bookmarks per tweet?", "Content meta data and how it impacts virality (e.g. ALT in images).", "What are some unexpected fingerprints for spam factors?", "Is there any difference between company verified checkmarks and blue verified individual checkmarks?",]chat_history = []for question in questions: result = qa({"question": question, "chat_history": chat_history}) chat_history.append((question, result["answer"])) print(f"-> **Question**: | https://python.langchain.com/docs/use_cases/code/twitter-the-algorithm-analysis-deeplake |
5c30ab1c9a73-5 | chat_history.append((question, result["answer"])) print(f"-> **Question**: {question} \n") print(f"**Answer**: {result['answer']} \n")-> Question: What does favCountParams do? Answer: favCountParams is an optional ThriftLinearFeatureRankingParams instance that represents the parameters related to the "favorite count" feature in the ranking process. It is used to control the weight of the favorite count feature while ranking tweets. The favorite count is the number of times a tweet has been marked as a favorite by users, and it is considered an important signal in the ranking of tweets. By using favCountParams, the system can adjust the importance of the favorite count while calculating the final ranking score of a tweet. -> Question: is it Likes + Bookmarks, or not clear from the code?Answer: From the provided code, it is not clear if the favorite count metric is determined by the sum of likes and bookmarks. The favorite count is mentioned in the code, but there is no explicit reference to how it is calculated in terms of likes and bookmarks. -> Question: What are the major negative modifiers that lower your linear ranking parameters?Answer: In the given code, major negative modifiers that lower the linear ranking parameters are:scoringData.querySpecificScore: This score adjustment is based on the query-specific information. If its value is negative, it will lower the linear ranking parameters.scoringData.authorSpecificScore: This score adjustment is based on the author-specific information. If its value is negative, it will also lower the linear ranking parameters.Please note that I cannot provide more information on the exact calculations of these negative modifiers, as the code for their determination is not provided. -> Question: How do you get assigned to SimClusters?Answer: The assignment to SimClusters occurs through a Metropolis-Hastings sampling-based community detection algorithm that is run on the Producer-Producer similarity graph. This graph is created | https://python.langchain.com/docs/use_cases/code/twitter-the-algorithm-analysis-deeplake |
5c30ab1c9a73-6 | sampling-based community detection algorithm that is run on the Producer-Producer similarity graph. This graph is created by computing the cosine similarity scores between the users who follow each producer. The algorithm identifies communities or clusters of Producers with similar followers, and takes a parameter k for specifying the number of communities to be detected.After the community detection, different users and content are represented as sparse, interpretable vectors within these identified communities (SimClusters). The resulting SimClusters embeddings can be used for various recommendation tasks. -> Question: What is needed to migrate from one SimClusters to another SimClusters?Answer: To migrate from one SimClusters representation to another, you can follow these general steps:Prepare the new representation: Create the new SimClusters representation using any necessary updates or changes in the clustering algorithm, similarity measures, or other model parameters. Ensure that this new representation is properly stored and indexed as needed.Update the relevant code and configurations: Modify the relevant code and configuration files to reference the new SimClusters representation. This may involve updating paths or dataset names to point to the new representation, as well as changing code to use the new clustering method or similarity functions if applicable.Test the new representation: Before deploying the changes to production, thoroughly test the new SimClusters representation to ensure its effectiveness and stability. This may involve running offline jobs like candidate generation and label candidates, validating the output, as well as testing the new representation in the evaluation environment using evaluation tools like TweetSimilarityEvaluationAdhocApp.Deploy the changes: Once the new representation has been tested and validated, deploy the changes to production. This may involve creating a zip file, uploading it to the packer, and then scheduling it with Aurora. Be sure to monitor the system to ensure a smooth transition between representations and verify that the new representation is being used in recommendations as expected.Monitor and assess the new representation: After the new representation has been deployed, continue to monitor its performance and impact on recommendations. Take note of any improvements or issues that arise and | https://python.langchain.com/docs/use_cases/code/twitter-the-algorithm-analysis-deeplake |
5c30ab1c9a73-7 | continue to monitor its performance and impact on recommendations. Take note of any improvements or issues that arise and be prepared to iterate on the new representation if needed. Always ensure that the results and performance metrics align with the system's goals and objectives. -> Question: How much do I get boosted within my cluster?Answer: It's not possible to determine the exact amount your content is boosted within your cluster in the SimClusters representation without specific data about your content and its engagement metrics. However, a combination of factors, such as the favorite score and follow score, alongside other engagement signals and SimCluster calculations, influence the boosting of content. -> Question: How does Heavy ranker work. what are it’s main inputs?Answer: The Heavy Ranker is a machine learning model that plays a crucial role in ranking and scoring candidates within the recommendation algorithm. Its primary purpose is to predict the likelihood of a user engaging with a tweet or connecting with another user on the platform.Main inputs to the Heavy Ranker consist of:Static Features: These are features that can be computed directly from a tweet at the time it's created, such as whether it has a URL, has cards, has quotes, etc. These features are produced by the Index Ingester as the tweets are generated and stored in the index.Real-time Features: These per-tweet features can change after the tweet has been indexed. They mostly consist of social engagements like retweet count, favorite count, reply count, and some spam signals that are computed with later activities. The Signal Ingester, which is part of a Heron topology, processes multiple event streams to collect and compute these real-time features.User Table Features: These per-user features are obtained from the User Table Updater that processes a stream written by the user service. This input is used to store sparse real-time user information, which is later propagated to the tweet being scored by looking up the author of the tweet.Search Context Features: These features represent the context of the current | https://python.langchain.com/docs/use_cases/code/twitter-the-algorithm-analysis-deeplake |
5c30ab1c9a73-8 | by looking up the author of the tweet.Search Context Features: These features represent the context of the current searcher, like their UI language, their content consumption, and the current time (implied). They are combined with Tweet Data to compute some of the features used in scoring.These inputs are then processed by the Heavy Ranker to score and rank candidates based on their relevance and likelihood of engagement by the user. -> Question: How can one influence Heavy ranker?Answer: To influence the Heavy Ranker's output or ranking of content, consider the following actions:Improve content quality: Create high-quality and engaging content that is relevant, informative, and valuable to users. High-quality content is more likely to receive positive user engagement, which the Heavy Ranker considers when ranking content.Increase user engagement: Encourage users to interact with content through likes, retweets, replies, and comments. Higher engagement levels can lead to better ranking in the Heavy Ranker's output.Optimize your user profile: A user's reputation, based on factors such as their follower count and follower-to-following ratio, may impact the ranking of their content. Maintain a good reputation by following relevant users, keeping a reasonable follower-to-following ratio and engaging with your followers.Enhance content discoverability: Use relevant keywords, hashtags, and mentions in your tweets, making it easier for users to find and engage with your content. This increased discoverability may help improve the ranking of your content by the Heavy Ranker.Leverage multimedia content: Experiment with different content formats, such as videos, images, and GIFs, which may capture users' attention and increase engagement, resulting in better ranking by the Heavy Ranker.User feedback: Monitor and respond to feedback for your content. Positive feedback may improve your ranking, while negative feedback provides an opportunity to learn and improve.Note that the Heavy Ranker uses a combination of machine learning models and various features to rank the content. While the above actions may help influence the | https://python.langchain.com/docs/use_cases/code/twitter-the-algorithm-analysis-deeplake |
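The Heavy Ranker answer above lists four groups of inputs: static tweet features, real-time engagement features, user table features, and search context features. As a rough illustration only (the field names below are invented for readability and are not taken from the actual feature schema), the per-candidate input could be grouped like this:

```python
from dataclasses import dataclass, field

@dataclass
class StaticTweetFeatures:
    # Computed once when the tweet is indexed.
    has_url: bool = False
    has_card: bool = False
    has_quote: bool = False

@dataclass
class RealTimeTweetFeatures:
    # Updated after indexing as engagement arrives.
    retweet_count: int = 0
    favorite_count: int = 0
    reply_count: int = 0
    spam_signals: dict = field(default_factory=dict)

@dataclass
class UserTableFeatures:
    # Sparse per-author information looked up at scoring time.
    author_follower_count: int = 0
    author_reputation: float = 0.0

@dataclass
class SearchContextFeatures:
    # Context of the searcher issuing the request.
    ui_language: str = "en"
    request_time_ms: int = 0

@dataclass
class HeavyRankerInput:
    # One scoring candidate, assembled from the four feature groups.
    static: StaticTweetFeatures
    real_time: RealTimeTweetFeatures
    user: UserTableFeatures
    context: SearchContextFeatures
```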
5c30ab1c9a73-9 | of machine learning models and various features to rank the content. While the above actions may help influence the ranking, there are no guarantees as the ranking process is determined by a complex algorithm, which evolves over time. -> Question: why threads and long tweets do so well on the platform?Answer: Threads and long tweets perform well on the platform for several reasons:More content and context: Threads and long tweets provide more information and context about a topic, which can make the content more engaging and informative for users. People tend to appreciate a well-structured and detailed explanation of a subject or a story, and threads and long tweets can do that effectively.Increased user engagement: As threads and long tweets provide more content, they also encourage users to engage with the tweets through replies, retweets, and likes. This increased engagement can lead to better visibility of the content, as the Twitter algorithm considers user engagement when ranking and surfacing tweets.Narrative structure: Threads enable users to tell stories or present arguments in a step-by-step manner, making the information more accessible and easier to follow. This narrative structure can capture users' attention and encourage them to read through the entire thread and interact with the content.Expanded reach: When users engage with a thread, their interactions can bring the content to the attention of their followers, helping to expand the reach of the thread. This increased visibility can lead to more interactions and higher performance for the threaded tweets.Higher content quality: Generally, threads and long tweets require more thought and effort to create, which may lead to higher quality content. Users are more likely to appreciate and interact with high-quality, well-reasoned content, further improving the performance of these tweets within the platform.Overall, threads and long tweets perform well on Twitter because they encourage user engagement and provide a richer, more informative experience that users find valuable. -> Question: Are thread and long tweet creators building a following that reacts to only threads?Answer: Based on the provided code and context, | https://python.langchain.com/docs/use_cases/code/twitter-the-algorithm-analysis-deeplake |
5c30ab1c9a73-10 | creators building a following that reacts to only threads?Answer: Based on the provided code and context, there isn't enough information to conclude if the creators of threads and long tweets primarily build a following that engages with only thread-based content. The code provided is focused on Twitter's recommendation and ranking algorithms, as well as infrastructure components like Kafka, partitions, and the Follow Recommendations Service (FRS). To answer your question, data analysis of user engagement and results of specific edge cases would be required. -> Question: Do you need to follow different strategies to get most followers vs to get most likes and bookmarks per tweet?Answer: Yes, different strategies need to be followed to maximize the number of followers compared to maximizing likes and bookmarks per tweet. While there may be some overlap in the approaches, they target different aspects of user engagement.Maximizing followers: The primary focus is on growing your audience on the platform. Strategies include:Consistently sharing high-quality content related to your niche or industry.Engaging with others on the platform by replying, retweeting, and mentioning other users.Using relevant hashtags and participating in trending conversations.Collaborating with influencers and other users with a large following.Posting at optimal times when your target audience is most active.Optimizing your profile by using a clear profile picture, catchy bio, and relevant links.Maximizing likes and bookmarks per tweet: The focus is on creating content that resonates with your existing audience and encourages engagement. Strategies include:Crafting engaging and well-written tweets that encourage users to like or save them.Incorporating visually appealing elements, such as images, GIFs, or videos, that capture attention.Asking questions, sharing opinions, or sparking conversations that encourage users to engage with your tweets.Using analytics to understand the type of content that resonates with your audience and tailoring your tweets accordingly.Posting a mix of educational, entertaining, and promotional content to maintain variety and interest.Timing your tweets strategically to maximize | https://python.langchain.com/docs/use_cases/code/twitter-the-algorithm-analysis-deeplake |
5c30ab1c9a73-11 | of educational, entertaining, and promotional content to maintain variety and interest.Timing your tweets strategically to maximize engagement, likes, and bookmarks per tweet.Both strategies can overlap, and you may need to adapt your approach by understanding your target audience's preferences and analyzing your account's performance. However, it's essential to recognize that maximizing followers and maximizing likes and bookmarks per tweet have different focuses and require specific strategies. -> Question: Content meta data and how it impacts virality (e.g. ALT in images).Answer: There is no direct information in the provided context about how content metadata, such as ALT text in images, impacts the virality of a tweet or post. However, it's worth noting that including ALT text can improve the accessibility of your content for users who rely on screen readers, which may lead to increased engagement for a broader audience. Additionally, metadata can be used in search engine optimization, which might improve the visibility of the content, but the context provided does not mention any specific correlation with virality. -> Question: What are some unexpected fingerprints for spam factors?Answer: In the provided context, an unusual indicator of spam factors is when a tweet contains a non-media, non-news link. If the tweet has a link but does not have an image URL, video URL, or news URL, it is considered a potential spam vector, and a threshold for user reputation (tweepCredThreshold) is set to MIN_TWEEPCRED_WITH_LINK.While this rule may not cover all possible unusual spam indicators, it is derived from the specific codebase and logic shared in the context. -> Question: Is there any difference between company verified checkmarks and blue verified individual checkmarks?Answer: Yes, there is a distinction between the verified checkmarks for companies and blue verified checkmarks for individuals. The code snippet provided mentions "Blue-verified account boost" which indicates that there is a separate category for blue verified accounts. Typically, blue verified checkmarks are used to indicate notable | https://python.langchain.com/docs/use_cases/code/twitter-the-algorithm-analysis-deeplake |
5c30ab1c9a73-12 | is a separate category for blue verified accounts. Typically, blue verified checkmarks are used to indicate notable individuals, while verified checkmarks are for companies or organizations.PreviousUse LangChain, GPT and Activeloop's Deep Lake to work with code baseNextAgent simulations1. Index the code base (optional)2. Question Answering on Twitter algorithm codebaseCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/use_cases/code/twitter-the-algorithm-analysis-deeplake |
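The spam-fingerprint answer a few rows above describes one concrete rule: a tweet whose only link is neither media nor news raises the author-reputation threshold to MIN_TWEEPCRED_WITH_LINK. A hedged sketch of that rule in isolation follows; the constant value and parameter names are invented, and only the branching logic mirrors the description, so do not read this as the real Scala implementation.

```python
MIN_TWEEPCRED_WITH_LINK = 50  # placeholder value; the real constant lives in the Twitter codebase

def tweep_cred_threshold(has_link, has_image_url, has_video_url, has_news_url,
                         default_threshold=0):
    """Raise the author-reputation threshold for tweets whose only link
    is neither media nor news, as described in the answer above."""
    if has_link and not (has_image_url or has_video_url or has_news_url):
        return MIN_TWEEPCRED_WITH_LINK
    return default_threshold
```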
4fc703b077e5-0 | Use LangChain, GPT and Activeloop's Deep Lake to work with code base | 🦜️🔗 Langchain | https://python.langchain.com/docs/use_cases/code/code-analysis-deeplake
4fc703b077e5-1 | Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKUse casesQA and Chat over DocumentsAnalyzing structured dataExtractionInteracting with APIsChatbotsSummarizationCode UnderstandingUse LangChain, GPT and Activeloop's Deep Lake to work with code baseAnalysis of Twitter the-algorithm source code with LangChain, GPT4 and Activeloop's Deep LakeAgent simulationsAgentsAutonomous (long-running) agentsMulti-modalUse casesCode UnderstandingUse LangChain, GPT and Activeloop's Deep Lake to work with code baseOn this pageUse LangChain, GPT and Activeloop's Deep Lake to work with code baseIn this tutorial, we are going to use Langchain + Activeloop's Deep Lake with GPT to analyze the code base of the LangChain itself. Design​Prepare data:Upload all python project files using the langchain.document_loaders.TextLoader. We will call these files the documents.Split all documents to chunks using the langchain.text_splitter.CharacterTextSplitter.Embed chunks and upload them into the DeepLake using langchain.embeddings.openai.OpenAIEmbeddings and langchain.vectorstores.DeepLakeQuestion-Answering:Build a chain from langchain.chat_models.ChatOpenAI and langchain.chains.ConversationalRetrievalChainPrepare questions.Get answers running the chain.Implementation​Integration preparations​We need to set up keys for external services and install necessary python libraries.#!python3 -m pip install --upgrade langchain deeplake openaiSet up OpenAI embeddings, Deep Lake multi-modal vector store api and authenticate. For full documentation of Deep Lake please follow https://docs.activeloop.ai/ and API reference https://docs.deeplake.ai/en/latest/import osfrom getpass import getpassos.environ["OPENAI_API_KEY"] = | https://python.langchain.com/docs/use_cases/code/code-analysis-deeplake |
4fc703b077e5-2 | osfrom getpass import getpassos.environ["OPENAI_API_KEY"] = getpass()# Please manually enter OpenAI KeyAuthenticate into Deep Lake if you want to create your own dataset and publish it. You can get an API key from the platform at app.activeloop.aiactiveloop_token = getpass("Activeloop Token:")os.environ["ACTIVELOOP_TOKEN"] = activeloop_tokenPrepare data​Load all repository files. Here we assume this notebook is downloaded as the part of the langchain fork and we work with the python files of the langchain repo.If you want to use files from different repo, change root_dir to the root dir of your repo.ls "../../../.."from langchain.document_loaders import TextLoaderroot_dir = "../../../.."docs = []for dirpath, dirnames, filenames in os.walk(root_dir): for file in filenames: if file.endswith(".py") and "/.venv/" not in dirpath: try: loader = TextLoader(os.path.join(dirpath, file), encoding="utf-8") docs.extend(loader.load_and_split()) except Exception as e: passprint(f"{len(docs)}")Then, chunk the filesfrom langchain.text_splitter import CharacterTextSplittertext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_documents(docs)print(f"{len(texts)}")Then embed chunks and upload them to the DeepLake.This can take several minutes. from langchain.embeddings.openai import OpenAIEmbeddingsembeddings = | https://python.langchain.com/docs/use_cases/code/code-analysis-deeplake |
4fc703b077e5-3 | take several minutes. from langchain.embeddings.openai import OpenAIEmbeddingsembeddings = OpenAIEmbeddings()embeddingsfrom langchain.vectorstores import DeepLakedb = DeepLake.from_documents( texts, embeddings, dataset_path=f"hub://{<org_id>}/langchain-code")dbOptional: You can also use Deep Lake's Managed Tensor Database as a hosting service and run queries there. In order to do so, it is necessary to specify the runtime parameter as {'tensor_db': True} during the creation of the vector store. This configuration enables the execution of queries on the Managed Tensor Database, rather than on the client side. It should be noted that this functionality is not applicable to datasets stored locally or in-memory. In the event that a vector store has already been created outside of the Managed Tensor Database, it is possible to transfer it to the Managed Tensor Database by following the prescribed steps.# from langchain.vectorstores import DeepLake# db = DeepLake.from_documents(# texts, embeddings, dataset_path=f"hub://{<org_id>}/langchain-code", runtime={"tensor_db": True}# )# dbQuestion Answering​First load the dataset, construct the retriever, then construct the Conversational Chaindb = DeepLake( dataset_path=f"hub://{<org_id>}/langchain-code", read_only=True, embedding_function=embeddings,)retriever = db.as_retriever()retriever.search_kwargs["distance_metric"] = "cos"retriever.search_kwargs["fetch_k"] = 20retriever.search_kwargs["maximal_marginal_relevance"] = Trueretriever.search_kwargs["k"] = 20You can also specify user defined functions using Deep Lake filtersdef filter(x): # filter based on source code if "something" in | https://python.langchain.com/docs/use_cases/code/code-analysis-deeplake |
4fc703b077e5-4 | filter(x): # filter based on source code if "something" in x["text"].data()["value"]: return False # filter based on path e.g. extension metadata = x["metadata"].data()["value"] return "only_this" in metadata["source"] or "also_that" in metadata["source"]### turn on below for custom filtering# retriever.search_kwargs['filter'] = filterfrom langchain.chat_models import ChatOpenAIfrom langchain.chains import ConversationalRetrievalChainmodel = ChatOpenAI(model_name="gpt-3.5-turbo") # 'ada' 'gpt-3.5-turbo' 'gpt-4',qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)questions = [ "What is the class hierarchy?", # "What classes are derived from the Chain class?", # "What classes and functions in the ./langchain/utilities/ forlder are not covered by unit tests?", # "What one improvement do you propose in code in relation to the class herarchy for the Chain class?",]chat_history = []for question in questions: result = qa({"question": question, "chat_history": chat_history}) chat_history.append((question, result["answer"])) print(f"-> **Question**: {question} \n") print(f"**Answer**: {result['answer']} \n")-> Question: What is the class hierarchy? Answer: There are several class hierarchies in the provided code, so I'll list a few:BaseModel -> ConstitutionalPrinciple: ConstitutionalPrinciple is a subclass of BaseModel.BasePromptTemplate -> StringPromptTemplate, AIMessagePromptTemplate, BaseChatPromptTemplate, | https://python.langchain.com/docs/use_cases/code/code-analysis-deeplake |
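The custom-filter example embedded in the row above lost its indentation when the page was flattened into chunks. Re-indented, with the original placeholder strings "something", "only_this", and "also_that" left as-is, it reads:

```python
def filter(x):
    # filter based on source code
    if "something" in x["text"].data()["value"]:
        return False
    # filter based on path e.g. extension
    metadata = x["metadata"].data()["value"]
    return "only_this" in metadata["source"] or "also_that" in metadata["source"]

# turn on below for custom filtering
# retriever.search_kwargs['filter'] = filter
```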
Downloads last month: 56