diff --git "a/train.jsonl" "b/train.jsonl"
new file mode 100644
--- /dev/null
+++ "b/train.jsonl"
@@ -0,0 +1,4511 @@
+{"id": "7fb9f2a39073-0", "text": ".rst\n.pdf\nWelcome to LangChain\n Contents \nGetting Started\nModules\nUse Cases\nReference Docs\nEcosystem\nAdditional Resources\nWelcome to LangChain#\nLangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model, but will also be:\nData-aware: connect a language model to other sources of data\nAgentic: allow a language model to interact with its environment\nThe LangChain framework is designed around these principles.\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\nGetting Started#\nHow to get started using LangChain to create an Language Model application.\nQuickstart Guide\nConcepts and terminology.\nConcepts and terminology\nTutorials created by community experts and presented on YouTube.\nTutorials\nModules#\nThese modules are the core abstractions which we view as the building blocks of any LLM-powered application.\nFor each module LangChain provides standard, extendable interfaces. LangChain also provides external integrations and even end-to-end implementations for off-the-shelf use.\nThe docs for each module contain quickstart examples, how-to guides, reference docs, and conceptual guides.\nThe modules are (from least to most complex):\nModels: Supported model types and integrations.\nPrompts: Prompt management, optimization, and serialization.\nMemory: Memory refers to state that is persisted between calls of a chain/agent.\nIndexes: Language models become much more powerful when combined with application-specific data - this module contains interfaces and integrations for loading, querying and updating external data.\nChains: Chains are structured sequences of calls (to an LLM or to a different utility).", "source": "https://langchain.readthedocs.io/en/latest/index.html"}
+{"id": "7fb9f2a39073-1", "text": "Agents: An agent is a Chain in which an LLM, given a high-level directive and a set of tools, repeatedly decides an action, executes the action and observes the outcome until the high-level directive is complete.\nCallbacks: Callbacks let you log and stream the intermediate steps of any chain, making it easy to observe, debug, and evaluate the internals of an application.\nUse Cases#\nBest practices and built-in implementations for common LangChain use cases:\nAutonomous Agents: Autonomous agents are long-running agents that take many steps in an attempt to accomplish an objective. Examples include AutoGPT and BabyAGI.\nAgent Simulations: Putting agents in a sandbox and observing how they interact with each other and react to events can be an effective way to evaluate their long-range reasoning and planning abilities.\nPersonal Assistants: One of the primary LangChain use cases. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\nQuestion Answering: Another common LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\nChatbots: Language models love to chat, making this a very natural use of them.\nQuerying Tabular Data: Recommended reading if you want to use language models to query structured data (CSVs, SQL, dataframes, etc).\nCode Understanding: Recommended reading if you want to use language models to analyze code.\nInteracting with APIs: Enabling language models to interact with APIs is extremely powerful. It gives them access to up-to-date information and allows them to take actions.\nExtraction: Extract structured information from text.\nSummarization: Compressing longer documents. A type of Data-Augmented Generation.\nEvaluation: Generative models are hard to evaluate with traditional metrics. One promising approach is to use language models themselves to do the evaluation.\nReference Docs#", "source": "https://langchain.readthedocs.io/en/latest/index.html"}
+{"id": "7fb9f2a39073-2", "text": "Reference Docs#\nFull documentation on all methods, classes, installation methods, and integration setups for LangChain.\nLangChain Installation\nReference Documentation\nEcosystem#\nLangChain integrates a lot of different LLMs, systems, and products.\nFrom the other side, many systems and products depend on LangChain.\nIt creates a vibrant and thriving ecosystem.\nIntegrations: Guides for how other products can be used with LangChain.\nDependents: List of repositories that use LangChain.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nAdditional Resources#\nAdditional resources we think may be useful as you develop your application!\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGallery: A collection of great projects that use Langchain, compiled by the folks at Kyrolabs. Useful for finding inspiration and example implementations.\nDeploying LLMs in Production: A collection of best practices and tutorials for deploying LLMs in production.\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\nDiscord: Join us on our Discord to discuss all things LangChain!\nYouTube: A collection of the LangChain tutorials and videos.\nProduction Support: As you move your LangChains into production, we\u2019d love to offer more comprehensive support. Please fill out this form and we\u2019ll set up a dedicated support Slack channel.\nnext\nQuickstart Guide\n Contents\n \nGetting Started\nModules\nUse Cases\nReference Docs\nEcosystem\nAdditional Resources\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://langchain.readthedocs.io/en/latest/index.html"}
+{"id": "7fb9f2a39073-3", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://langchain.readthedocs.io/en/latest/index.html"}
+{"id": "c43b5579b62a-0", "text": ".rst\n.pdf\nAPI References\nAPI References#\nFull documentation on all methods, classes, and APIs in LangChain.\nModels\nPrompts\nIndexes\nMemory\nChains\nAgents\nUtilities\nExperimental Modules\nprevious\nInstallation\nnext\nModels\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference.html"}
+{"id": "7a97a6ba13fd-0", "text": "Search\nError\nPlease activate JavaScript to enable the search functionality.\nCtrl+K\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/search.html"}
+{"id": "961c25351698-0", "text": "Index\n_\n | A\n | B\n | C\n | D\n | E\n | F\n | G\n | H\n | I\n | J\n | K\n | L\n | M\n | N\n | O\n | P\n | Q\n | R\n | S\n | T\n | U\n | V\n | W\n | Y\n | Z\n_\n__call__() (langchain.llms.AI21 method)\n(langchain.llms.AlephAlpha method)\n(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)\n(langchain.llms.Aviary method)\n(langchain.llms.AzureOpenAI method)\n(langchain.llms.Banana method)\n(langchain.llms.Baseten method)\n(langchain.llms.Beam method)\n(langchain.llms.Bedrock method)\n(langchain.llms.CerebriumAI method)\n(langchain.llms.Cohere method)\n(langchain.llms.CTransformers method)\n(langchain.llms.Databricks method)\n(langchain.llms.DeepInfra method)\n(langchain.llms.FakeListLLM method)\n(langchain.llms.ForefrontAI method)\n(langchain.llms.GooglePalm method)\n(langchain.llms.GooseAI method)\n(langchain.llms.GPT4All method)\n(langchain.llms.HuggingFaceEndpoint method)\n(langchain.llms.HuggingFaceHub method)\n(langchain.llms.HuggingFacePipeline method)\n(langchain.llms.HuggingFaceTextGenInference method)\n(langchain.llms.HumanInputLLM method)\n(langchain.llms.LlamaCpp method)\n(langchain.llms.Modal method)\n(langchain.llms.MosaicML method)\n(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-1", "text": "(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenAIChat method)\n(langchain.llms.OpenLM method)\n(langchain.llms.Petals method)\n(langchain.llms.PipelineAI method)\n(langchain.llms.PredictionGuard method)\n(langchain.llms.PromptLayerOpenAI method)\n(langchain.llms.PromptLayerOpenAIChat method)\n(langchain.llms.Replicate method)\n(langchain.llms.RWKV method)\n(langchain.llms.SagemakerEndpoint method)\n(langchain.llms.SelfHostedHuggingFaceLLM method)\n(langchain.llms.SelfHostedPipeline method)\n(langchain.llms.StochasticAI method)\n(langchain.llms.VertexAI method)\n(langchain.llms.Writer method)\nA\naadd_documents() (langchain.retrievers.TimeWeightedVectorStoreRetriever method)\n(langchain.vectorstores.VectorStore method)\naadd_texts() (langchain.vectorstores.VectorStore method)\naapply() (langchain.chains.LLMChain method)\naapply_and_parse() (langchain.chains.LLMChain method)\nacall_actor() (langchain.utilities.ApifyWrapper method)\naccess_token (langchain.document_loaders.DocugamiLoader attribute)\naccount_sid (langchain.utilities.TwilioAPIWrapper attribute)\nacompress_documents() (langchain.retrievers.document_compressors.CohereRerank method)\n(langchain.retrievers.document_compressors.DocumentCompressorPipeline method)\n(langchain.retrievers.document_compressors.EmbeddingsFilter method)\n(langchain.retrievers.document_compressors.LLMChainExtractor method)\n(langchain.retrievers.document_compressors.LLMChainFilter method)\naction_id (langchain.tools.ZapierNLARunAction attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-2", "text": "action_id (langchain.tools.ZapierNLARunAction attribute)\nadd() (langchain.docstore.InMemoryDocstore method)\nadd_documents() (langchain.retrievers.TimeWeightedVectorStoreRetriever method)\n(langchain.retrievers.WeaviateHybridSearchRetriever method)\n(langchain.vectorstores.VectorStore method)\nadd_embeddings() (langchain.vectorstores.FAISS method)\nadd_example() (langchain.prompts.example_selector.LengthBasedExampleSelector method)\n(langchain.prompts.example_selector.SemanticSimilarityExampleSelector method)\nadd_memories() (langchain.experimental.GenerativeAgentMemory method)\nadd_memory() (langchain.experimental.GenerativeAgentMemory method)\nadd_message() (langchain.memory.CassandraChatMessageHistory method)\n(langchain.memory.ChatMessageHistory method)\n(langchain.memory.CosmosDBChatMessageHistory method)\n(langchain.memory.DynamoDBChatMessageHistory method)\n(langchain.memory.FileChatMessageHistory method)\n(langchain.memory.MomentoChatMessageHistory method)\n(langchain.memory.MongoDBChatMessageHistory method)\n(langchain.memory.PostgresChatMessageHistory method)\n(langchain.memory.RedisChatMessageHistory method)\nadd_texts() (langchain.retrievers.ElasticSearchBM25Retriever method)\n(langchain.retrievers.PineconeHybridSearchRetriever method)\n(langchain.vectorstores.AnalyticDB method)\n(langchain.vectorstores.Annoy method)\n(langchain.vectorstores.AtlasDB method)\n(langchain.vectorstores.AwaDB method)\n(langchain.vectorstores.Chroma method)\n(langchain.vectorstores.Clickhouse method)\n(langchain.vectorstores.DeepLake method)\n(langchain.vectorstores.ElasticVectorSearch method)\n(langchain.vectorstores.FAISS method)\n(langchain.vectorstores.LanceDB method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-3", "text": "(langchain.vectorstores.FAISS method)\n(langchain.vectorstores.LanceDB method)\n(langchain.vectorstores.MatchingEngine method)\n(langchain.vectorstores.Milvus method)\n(langchain.vectorstores.MongoDBAtlasVectorSearch method)\n(langchain.vectorstores.MyScale method)\n(langchain.vectorstores.OpenSearchVectorSearch method)\n(langchain.vectorstores.Pinecone method)\n(langchain.vectorstores.Qdrant method)\n(langchain.vectorstores.Redis method)\n(langchain.vectorstores.SingleStoreDB method)\n(langchain.vectorstores.SKLearnVectorStore method)\n(langchain.vectorstores.SupabaseVectorStore method)\n(langchain.vectorstores.Tair method)\n(langchain.vectorstores.Tigris method)\n(langchain.vectorstores.Typesense method)\n(langchain.vectorstores.Vectara method)\n(langchain.vectorstores.VectorStore method)\n(langchain.vectorstores.Weaviate method)\nadd_vectors() (langchain.vectorstores.SupabaseVectorStore method)\nadd_video_info (langchain.document_loaders.GoogleApiYoutubeLoader attribute)\nadelete() (langchain.utilities.TextRequestsWrapper method)\nafrom_documents() (langchain.vectorstores.VectorStore class method)\nafrom_texts() (langchain.vectorstores.VectorStore class method)\nage (langchain.experimental.GenerativeAgent attribute)\nagenerate() (langchain.chains.LLMChain method)\n(langchain.llms.AI21 method)\n(langchain.llms.AlephAlpha method)\n(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)\n(langchain.llms.Aviary method)\n(langchain.llms.AzureOpenAI method)\n(langchain.llms.Banana method)\n(langchain.llms.Baseten method)\n(langchain.llms.Beam method)\n(langchain.llms.Bedrock method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-4", "text": "(langchain.llms.Beam method)\n(langchain.llms.Bedrock method)\n(langchain.llms.CerebriumAI method)\n(langchain.llms.Cohere method)\n(langchain.llms.CTransformers method)\n(langchain.llms.Databricks method)\n(langchain.llms.DeepInfra method)\n(langchain.llms.FakeListLLM method)\n(langchain.llms.ForefrontAI method)\n(langchain.llms.GooglePalm method)\n(langchain.llms.GooseAI method)\n(langchain.llms.GPT4All method)\n(langchain.llms.HuggingFaceEndpoint method)\n(langchain.llms.HuggingFaceHub method)\n(langchain.llms.HuggingFacePipeline method)\n(langchain.llms.HuggingFaceTextGenInference method)\n(langchain.llms.HumanInputLLM method)\n(langchain.llms.LlamaCpp method)\n(langchain.llms.Modal method)\n(langchain.llms.MosaicML method)\n(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenAIChat method)\n(langchain.llms.OpenLM method)\n(langchain.llms.Petals method)\n(langchain.llms.PipelineAI method)\n(langchain.llms.PredictionGuard method)\n(langchain.llms.PromptLayerOpenAI method)\n(langchain.llms.PromptLayerOpenAIChat method)\n(langchain.llms.Replicate method)\n(langchain.llms.RWKV method)\n(langchain.llms.SagemakerEndpoint method)\n(langchain.llms.SelfHostedHuggingFaceLLM method)\n(langchain.llms.SelfHostedPipeline method)\n(langchain.llms.StochasticAI method)\n(langchain.llms.VertexAI method)\n(langchain.llms.Writer method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-5", "text": "(langchain.llms.VertexAI method)\n(langchain.llms.Writer method)\nagenerate_prompt() (langchain.llms.AI21 method)\n(langchain.llms.AlephAlpha method)\n(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)\n(langchain.llms.Aviary method)\n(langchain.llms.AzureOpenAI method)\n(langchain.llms.Banana method)\n(langchain.llms.Baseten method)\n(langchain.llms.Beam method)\n(langchain.llms.Bedrock method)\n(langchain.llms.CerebriumAI method)\n(langchain.llms.Cohere method)\n(langchain.llms.CTransformers method)\n(langchain.llms.Databricks method)\n(langchain.llms.DeepInfra method)\n(langchain.llms.FakeListLLM method)\n(langchain.llms.ForefrontAI method)\n(langchain.llms.GooglePalm method)\n(langchain.llms.GooseAI method)\n(langchain.llms.GPT4All method)\n(langchain.llms.HuggingFaceEndpoint method)\n(langchain.llms.HuggingFaceHub method)\n(langchain.llms.HuggingFacePipeline method)\n(langchain.llms.HuggingFaceTextGenInference method)\n(langchain.llms.HumanInputLLM method)\n(langchain.llms.LlamaCpp method)\n(langchain.llms.Modal method)\n(langchain.llms.MosaicML method)\n(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenAIChat method)\n(langchain.llms.OpenLM method)\n(langchain.llms.Petals method)\n(langchain.llms.PipelineAI method)\n(langchain.llms.PredictionGuard method)\n(langchain.llms.PromptLayerOpenAI method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-6", "text": "(langchain.llms.PromptLayerOpenAI method)\n(langchain.llms.PromptLayerOpenAIChat method)\n(langchain.llms.Replicate method)\n(langchain.llms.RWKV method)\n(langchain.llms.SagemakerEndpoint method)\n(langchain.llms.SelfHostedHuggingFaceLLM method)\n(langchain.llms.SelfHostedPipeline method)\n(langchain.llms.StochasticAI method)\n(langchain.llms.VertexAI method)\n(langchain.llms.Writer method)\nagent (langchain.agents.AgentExecutor attribute)\nAgentType (class in langchain.agents)\naget() (langchain.utilities.TextRequestsWrapper method)\naget_relevant_documents() (langchain.retrievers.ArxivRetriever method)\n(langchain.retrievers.AwsKendraIndexRetriever method)\n(langchain.retrievers.AzureCognitiveSearchRetriever method)\n(langchain.retrievers.ChatGPTPluginRetriever method)\n(langchain.retrievers.ContextualCompressionRetriever method)\n(langchain.retrievers.DataberryRetriever method)\n(langchain.retrievers.ElasticSearchBM25Retriever method)\n(langchain.retrievers.KNNRetriever method)\n(langchain.retrievers.MergerRetriever method)\n(langchain.retrievers.MetalRetriever method)\n(langchain.retrievers.PineconeHybridSearchRetriever method)\n(langchain.retrievers.PubMedRetriever method)\n(langchain.retrievers.RemoteLangChainRetriever method)\n(langchain.retrievers.SelfQueryRetriever method)\n(langchain.retrievers.SVMRetriever method)\n(langchain.retrievers.TFIDFRetriever method)\n(langchain.retrievers.TimeWeightedVectorStoreRetriever method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-7", "text": "(langchain.retrievers.TimeWeightedVectorStoreRetriever method)\n(langchain.retrievers.VespaRetriever method)\n(langchain.retrievers.WeaviateHybridSearchRetriever method)\n(langchain.retrievers.WikipediaRetriever method)\n(langchain.retrievers.ZepRetriever method)\naget_table_info() (langchain.utilities.PowerBIDataset method)\naggregate_importance (langchain.experimental.GenerativeAgentMemory attribute)\nai_prefix (langchain.agents.ConversationalAgent attribute)\n(langchain.memory.ConversationBufferMemory attribute)\n(langchain.memory.ConversationBufferWindowMemory attribute)\n(langchain.memory.ConversationEntityMemory attribute)\n(langchain.memory.ConversationKGMemory attribute)\n(langchain.memory.ConversationStringBufferMemory attribute)\n(langchain.memory.ConversationTokenBufferMemory attribute)\naiosession (langchain.retrievers.AzureCognitiveSearchRetriever attribute)\n(langchain.retrievers.ChatGPTPluginRetriever attribute)\n(langchain.serpapi.SerpAPIWrapper attribute)\n(langchain.utilities.GoogleSerperAPIWrapper attribute)\n(langchain.utilities.PowerBIDataset attribute)\n(langchain.utilities.searx_search.SearxSearchWrapper attribute)\n(langchain.utilities.SearxSearchWrapper attribute)\n(langchain.utilities.SerpAPIWrapper attribute)\n(langchain.utilities.TextRequestsWrapper attribute)\nAirbyteJSONLoader (class in langchain.document_loaders)\nAirtableLoader (class in langchain.document_loaders)\naleph_alpha_api_key (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute)\n(langchain.llms.AlephAlpha attribute)\nallow_download (langchain.llms.GPT4All attribute)\nallowed_special (langchain.llms.AzureOpenAI attribute)\n(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenAIChat attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-8", "text": "(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenAIChat attribute)\n(langchain.llms.OpenLM attribute)\n(langchain.llms.PromptLayerOpenAIChat attribute)\nallowed_tools (langchain.agents.Agent attribute)\naload() (langchain.document_loaders.WebBaseLoader method)\nalpha (langchain.retrievers.PineconeHybridSearchRetriever attribute)\namax_marginal_relevance_search() (langchain.vectorstores.VectorStore method)\namax_marginal_relevance_search_by_vector() (langchain.vectorstores.VectorStore method)\namerge_documents() (langchain.retrievers.MergerRetriever method)\nAnalyticDB (class in langchain.vectorstores)\nAnnoy (class in langchain.vectorstores)\nanswers (langchain.utilities.searx_search.SearxResults property)\napatch() (langchain.utilities.TextRequestsWrapper method)\napi (langchain.document_loaders.DocugamiLoader attribute)\napi_answer_chain (langchain.chains.APIChain attribute)\napi_docs (langchain.chains.APIChain attribute)\napi_key (langchain.retrievers.AzureCognitiveSearchRetriever attribute)\n(langchain.retrievers.DataberryRetriever attribute)\napi_operation (langchain.chains.OpenAPIEndpointChain attribute)\napi_request_chain (langchain.chains.APIChain attribute)\n(langchain.chains.OpenAPIEndpointChain attribute)\napi_resource (langchain.agents.agent_toolkits.GmailToolkit attribute)\napi_response_chain (langchain.chains.OpenAPIEndpointChain attribute)\napi_spec (langchain.tools.AIPluginTool attribute)\napi_token (langchain.llms.Databricks attribute)\napi_url (langchain.llms.StochasticAI attribute)\napi_version (langchain.retrievers.AzureCognitiveSearchRetriever attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-9", "text": "api_version (langchain.retrievers.AzureCognitiveSearchRetriever attribute)\napi_wrapper (langchain.tools.BingSearchResults attribute)\n(langchain.tools.BingSearchRun attribute)\n(langchain.tools.DuckDuckGoSearchResults attribute)\n(langchain.tools.DuckDuckGoSearchRun attribute)\n(langchain.tools.GooglePlacesTool attribute)\n(langchain.tools.GoogleSearchResults attribute)\n(langchain.tools.GoogleSearchRun attribute)\n(langchain.tools.GoogleSerperResults attribute)\n(langchain.tools.GoogleSerperRun attribute)\n(langchain.tools.MetaphorSearchResults attribute)\n(langchain.tools.OpenWeatherMapQueryRun attribute)\n(langchain.tools.PubmedQueryRun attribute)\n(langchain.tools.SceneXplainTool attribute)\n(langchain.tools.WikipediaQueryRun attribute)\n(langchain.tools.WolframAlphaQueryRun attribute)\n(langchain.tools.ZapierNLAListActions attribute)\n(langchain.tools.ZapierNLARunAction attribute)\napify_client (langchain.document_loaders.ApifyDatasetLoader attribute)\n(langchain.utilities.ApifyWrapper attribute)\napify_client_async (langchain.utilities.ApifyWrapper attribute)\naplan() (langchain.agents.Agent method)\n(langchain.agents.BaseMultiActionAgent method)\n(langchain.agents.BaseSingleActionAgent method)\n(langchain.agents.LLMSingleActionAgent method)\napost() (langchain.utilities.TextRequestsWrapper method)\napp_creation() (langchain.llms.Beam method)\napply() (langchain.chains.LLMChain method)\napply_and_parse() (langchain.chains.LLMChain method)\napredict() (langchain.chains.LLMChain method)\n(langchain.llms.AI21 method)\n(langchain.llms.AlephAlpha method)\n(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-10", "text": "(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)\n(langchain.llms.Aviary method)\n(langchain.llms.AzureOpenAI method)\n(langchain.llms.Banana method)\n(langchain.llms.Baseten method)\n(langchain.llms.Beam method)\n(langchain.llms.Bedrock method)\n(langchain.llms.CerebriumAI method)\n(langchain.llms.Cohere method)\n(langchain.llms.CTransformers method)\n(langchain.llms.Databricks method)\n(langchain.llms.DeepInfra method)\n(langchain.llms.FakeListLLM method)\n(langchain.llms.ForefrontAI method)\n(langchain.llms.GooglePalm method)\n(langchain.llms.GooseAI method)\n(langchain.llms.GPT4All method)\n(langchain.llms.HuggingFaceEndpoint method)\n(langchain.llms.HuggingFaceHub method)\n(langchain.llms.HuggingFacePipeline method)\n(langchain.llms.HuggingFaceTextGenInference method)\n(langchain.llms.HumanInputLLM method)\n(langchain.llms.LlamaCpp method)\n(langchain.llms.Modal method)\n(langchain.llms.MosaicML method)\n(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenAIChat method)\n(langchain.llms.OpenLM method)\n(langchain.llms.Petals method)\n(langchain.llms.PipelineAI method)\n(langchain.llms.PredictionGuard method)\n(langchain.llms.PromptLayerOpenAI method)\n(langchain.llms.PromptLayerOpenAIChat method)\n(langchain.llms.Replicate method)\n(langchain.llms.RWKV method)\n(langchain.llms.SagemakerEndpoint method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-11", "text": "(langchain.llms.SagemakerEndpoint method)\n(langchain.llms.SelfHostedHuggingFaceLLM method)\n(langchain.llms.SelfHostedPipeline method)\n(langchain.llms.StochasticAI method)\n(langchain.llms.VertexAI method)\n(langchain.llms.Writer method)\napredict_and_parse() (langchain.chains.LLMChain method)\napredict_messages() (langchain.llms.AI21 method)\n(langchain.llms.AlephAlpha method)\n(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)\n(langchain.llms.Aviary method)\n(langchain.llms.AzureOpenAI method)\n(langchain.llms.Banana method)\n(langchain.llms.Baseten method)\n(langchain.llms.Beam method)\n(langchain.llms.Bedrock method)\n(langchain.llms.CerebriumAI method)\n(langchain.llms.Cohere method)\n(langchain.llms.CTransformers method)\n(langchain.llms.Databricks method)\n(langchain.llms.DeepInfra method)\n(langchain.llms.FakeListLLM method)\n(langchain.llms.ForefrontAI method)\n(langchain.llms.GooglePalm method)\n(langchain.llms.GooseAI method)\n(langchain.llms.GPT4All method)\n(langchain.llms.HuggingFaceEndpoint method)\n(langchain.llms.HuggingFaceHub method)\n(langchain.llms.HuggingFacePipeline method)\n(langchain.llms.HuggingFaceTextGenInference method)\n(langchain.llms.HumanInputLLM method)\n(langchain.llms.LlamaCpp method)\n(langchain.llms.Modal method)\n(langchain.llms.MosaicML method)\n(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-12", "text": "(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenAIChat method)\n(langchain.llms.OpenLM method)\n(langchain.llms.Petals method)\n(langchain.llms.PipelineAI method)\n(langchain.llms.PredictionGuard method)\n(langchain.llms.PromptLayerOpenAI method)\n(langchain.llms.PromptLayerOpenAIChat method)\n(langchain.llms.Replicate method)\n(langchain.llms.RWKV method)\n(langchain.llms.SagemakerEndpoint method)\n(langchain.llms.SelfHostedHuggingFaceLLM method)\n(langchain.llms.SelfHostedPipeline method)\n(langchain.llms.StochasticAI method)\n(langchain.llms.VertexAI method)\n(langchain.llms.Writer method)\naprep_prompts() (langchain.chains.LLMChain method)\naput() (langchain.utilities.TextRequestsWrapper method)\narbitrary_types_allowed (langchain.experimental.BabyAGI.Config attribute)\n(langchain.experimental.GenerativeAgent.Config attribute)\n(langchain.retrievers.WeaviateHybridSearchRetriever.Config attribute)\nare_all_true_prompt (langchain.chains.LLMSummarizationCheckerChain attribute)\naresults() (langchain.serpapi.SerpAPIWrapper method)\n(langchain.utilities.GoogleSerperAPIWrapper method)\n(langchain.utilities.searx_search.SearxSearchWrapper method)\n(langchain.utilities.SearxSearchWrapper method)\n(langchain.utilities.SerpAPIWrapper method)\nargs (langchain.agents.Tool property)\n(langchain.tools.BaseTool property)\n(langchain.tools.StructuredTool property)\n(langchain.tools.Tool property)\nargs_schema (langchain.tools.AIPluginTool attribute)\n(langchain.tools.BaseTool attribute)\n(langchain.tools.ClickTool attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-13", "text": "(langchain.tools.BaseTool attribute)\n(langchain.tools.ClickTool attribute)\n(langchain.tools.CopyFileTool attribute)\n(langchain.tools.CurrentWebPageTool attribute)\n(langchain.tools.DeleteFileTool attribute)\n(langchain.tools.ExtractHyperlinksTool attribute)\n(langchain.tools.ExtractTextTool attribute)\n(langchain.tools.FileSearchTool attribute)\n(langchain.tools.GetElementsTool attribute)\n(langchain.tools.GmailCreateDraft attribute)\n(langchain.tools.GmailGetMessage attribute)\n(langchain.tools.GmailGetThread attribute)\n(langchain.tools.GmailSearch attribute)\n(langchain.tools.GooglePlacesTool attribute)\n(langchain.tools.ListDirectoryTool attribute)\n(langchain.tools.MoveFileTool attribute)\n(langchain.tools.NavigateBackTool attribute)\n(langchain.tools.NavigateTool attribute)\n(langchain.tools.ReadFileTool attribute)\n(langchain.tools.ShellTool attribute)\n(langchain.tools.StructuredTool attribute)\n(langchain.tools.Tool attribute)\n(langchain.tools.WriteFileTool attribute)\narun() (langchain.serpapi.SerpAPIWrapper method)\n(langchain.tools.BaseTool method)\n(langchain.utilities.GoogleSerperAPIWrapper method)\n(langchain.utilities.PowerBIDataset method)\n(langchain.utilities.searx_search.SearxSearchWrapper method)\n(langchain.utilities.SearxSearchWrapper method)\n(langchain.utilities.SerpAPIWrapper method)\narxiv_exceptions (langchain.utilities.ArxivAPIWrapper attribute)\nArxivLoader (class in langchain.document_loaders)\nas_retriever() (langchain.vectorstores.Redis method)\n(langchain.vectorstores.SingleStoreDB method)\n(langchain.vectorstores.Vectara method)\n(langchain.vectorstores.VectorStore method)\nasearch() (langchain.vectorstores.VectorStore method)\nasimilarity_search() (langchain.vectorstores.VectorStore method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-14", "text": "asimilarity_search() (langchain.vectorstores.VectorStore method)\nasimilarity_search_by_vector() (langchain.vectorstores.VectorStore method)\nasimilarity_search_with_relevance_scores() (langchain.vectorstores.VectorStore method)\nassignee (langchain.document_loaders.GitHubIssuesLoader attribute)\nasync_browser (langchain.agents.agent_toolkits.PlayWrightBrowserToolkit attribute)\nAtlasDB (class in langchain.vectorstores)\natransform_documents() (langchain.document_transformers.EmbeddingsRedundantFilter method)\n(langchain.text_splitter.TextSplitter method)\nauth_token (langchain.utilities.TwilioAPIWrapper attribute)\nauth_with_token (langchain.document_loaders.OneDriveLoader attribute)\nAutoGPT (class in langchain.experimental)\nAwaDB (class in langchain.vectorstores)\nAwsKendraIndexRetriever (class in langchain.retrievers)\nawslambda_tool_description (langchain.utilities.LambdaWrapper attribute)\nawslambda_tool_name (langchain.utilities.LambdaWrapper attribute)\nAZLyricsLoader (class in langchain.document_loaders)\nAzureBlobStorageContainerLoader (class in langchain.document_loaders)\nAzureBlobStorageFileLoader (class in langchain.document_loaders)\nB\nBabyAGI (class in langchain.experimental)\nbad_words (langchain.llms.NLPCloud attribute)\nbase_compressor (langchain.retrievers.ContextualCompressionRetriever attribute)\nbase_embeddings (langchain.chains.HypotheticalDocumentEmbedder attribute)\nbase_prompt (langchain.tools.ZapierNLARunAction attribute)\nbase_retriever (langchain.retrievers.ContextualCompressionRetriever attribute)\nbase_url (langchain.document_loaders.BlackboardLoader attribute)\n(langchain.llms.AI21 attribute)\n(langchain.llms.ForefrontAI attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-15", "text": "(langchain.llms.AI21 attribute)\n(langchain.llms.ForefrontAI attribute)\n(langchain.llms.Writer attribute)\n(langchain.tools.APIOperation attribute)\n(langchain.tools.OpenAPISpec property)\nBashProcess (class in langchain.utilities)\nbatch_size (langchain.llms.AzureOpenAI attribute)\n(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenLM attribute)\nbearer_token (langchain.retrievers.ChatGPTPluginRetriever attribute)\nbest_of (langchain.llms.AlephAlpha attribute)\n(langchain.llms.AzureOpenAI attribute)\n(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenLM attribute)\n(langchain.llms.Writer attribute)\nBibtexLoader (class in langchain.document_loaders)\nBigQueryLoader (class in langchain.document_loaders)\nBiliBiliLoader (class in langchain.document_loaders)\nbinary_location (langchain.document_loaders.SeleniumURLLoader attribute)\nbing_search_url (langchain.utilities.BingSearchAPIWrapper attribute)\nbing_subscription_key (langchain.utilities.BingSearchAPIWrapper attribute)\nBlackboardLoader (class in langchain.document_loaders)\nBlockchainDocumentLoader (class in langchain.document_loaders)\nbody_params (langchain.tools.APIOperation property)\nbrowser (langchain.document_loaders.SeleniumURLLoader attribute)\nBSHTMLLoader (class in langchain.document_loaders)\nbuffer (langchain.memory.ConversationBufferMemory property)\n(langchain.memory.ConversationBufferWindowMemory property)\n(langchain.memory.ConversationEntityMemory property)\n(langchain.memory.ConversationStringBufferMemory attribute)\n(langchain.memory.ConversationSummaryBufferMemory property)\n(langchain.memory.ConversationSummaryMemory attribute)\n(langchain.memory.ConversationTokenBufferMemory property)\nC\ncache_folder (langchain.embeddings.HuggingFaceEmbeddings attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-16", "text": "C\ncache_folder (langchain.embeddings.HuggingFaceEmbeddings attribute)\n(langchain.embeddings.HuggingFaceInstructEmbeddings attribute)\ncall_actor() (langchain.utilities.ApifyWrapper method)\ncallback_manager (langchain.agents.agent_toolkits.PowerBIToolkit attribute)\n(langchain.tools.BaseTool attribute)\n(langchain.tools.Tool attribute)\ncallbacks (langchain.tools.BaseTool attribute)\n(langchain.tools.Tool attribute)\ncaptions_language (langchain.document_loaders.GoogleApiYoutubeLoader attribute)\nCassandraChatMessageHistory (class in langchain.memory)\ncategories (langchain.utilities.searx_search.SearxSearchWrapper attribute)\n(langchain.utilities.SearxSearchWrapper attribute)\nchain (langchain.chains.ConstitutionalChain attribute)\nchains (langchain.chains.SequentialChain attribute)\n(langchain.chains.SimpleSequentialChain attribute)\nchannel_name (langchain.document_loaders.GoogleApiYoutubeLoader attribute)\nCharacterTextSplitter (class in langchain.text_splitter)\nCHAT_CONVERSATIONAL_REACT_DESCRIPTION (langchain.agents.AgentType attribute)\nchat_history_key (langchain.memory.ConversationEntityMemory attribute)\nCHAT_ZERO_SHOT_REACT_DESCRIPTION (langchain.agents.AgentType attribute)\nChatGPTLoader (class in langchain.document_loaders)\ncheck_assertions_prompt (langchain.chains.LLMCheckerChain attribute)\n(langchain.chains.LLMSummarizationCheckerChain attribute)\ncheck_bs4() (langchain.document_loaders.BlackboardLoader method)\nChroma (class in langchain.vectorstores)\nCHUNK_LEN (langchain.llms.RWKV attribute)\nchunk_overlap (langchain.text_splitter.Tokenizer attribute)\nchunk_size (langchain.embeddings.OpenAIEmbeddings attribute)\nclean_pdf() (langchain.document_loaders.MathpixPDFLoader method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-17", "text": "clean_pdf() (langchain.document_loaders.MathpixPDFLoader method)\nclear() (langchain.experimental.GenerativeAgentMemory method)\n(langchain.memory.CassandraChatMessageHistory method)\n(langchain.memory.ChatMessageHistory method)\n(langchain.memory.CombinedMemory method)\n(langchain.memory.ConversationEntityMemory method)\n(langchain.memory.ConversationKGMemory method)\n(langchain.memory.ConversationStringBufferMemory method)\n(langchain.memory.ConversationSummaryBufferMemory method)\n(langchain.memory.ConversationSummaryMemory method)\n(langchain.memory.CosmosDBChatMessageHistory method)\n(langchain.memory.DynamoDBChatMessageHistory method)\n(langchain.memory.FileChatMessageHistory method)\n(langchain.memory.InMemoryEntityStore method)\n(langchain.memory.MomentoChatMessageHistory method)\n(langchain.memory.MongoDBChatMessageHistory method)\n(langchain.memory.PostgresChatMessageHistory method)\n(langchain.memory.ReadOnlySharedMemory method)\n(langchain.memory.RedisChatMessageHistory method)\n(langchain.memory.RedisEntityStore method)\n(langchain.memory.SimpleMemory method)\n(langchain.memory.SQLiteEntityStore method)\n(langchain.memory.VectorStoreRetrieverMemory method)\nClickhouse (class in langchain.vectorstores)\nclient (langchain.llms.Petals attribute)\n(langchain.retrievers.document_compressors.CohereRerank attribute)\nclient_search() (langchain.vectorstores.ElasticVectorSearch method)\ncluster_driver_port (langchain.llms.Databricks attribute)\ncluster_id (langchain.llms.Databricks attribute)\nCollegeConfidentialLoader (class in langchain.document_loaders)\ncolumn_map (langchain.vectorstores.ClickhouseSettings attribute)\n(langchain.vectorstores.MyScaleSettings attribute)\ncombine_docs_chain (langchain.chains.AnalyzeDocumentChain attribute)\ncombine_documents_chain (langchain.chains.MapReduceChain attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-18", "text": "combine_documents_chain (langchain.chains.MapReduceChain attribute)\ncombine_embeddings() (langchain.chains.HypotheticalDocumentEmbedder method)\ncompletion_bias_exclusion_first_token_only (langchain.llms.AlephAlpha attribute)\ncompletion_with_retry() (langchain.chat_models.ChatOpenAI method)\ncompress_documents() (langchain.retrievers.document_compressors.CohereRerank method)\n(langchain.retrievers.document_compressors.DocumentCompressorPipeline method)\n(langchain.retrievers.document_compressors.EmbeddingsFilter method)\n(langchain.retrievers.document_compressors.LLMChainExtractor method)\n(langchain.retrievers.document_compressors.LLMChainFilter method)\ncompress_to_size (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute)\nconfig (langchain.llms.CTransformers attribute)\nConfluenceLoader (class in langchain.document_loaders)\nCoNLLULoader (class in langchain.document_loaders)\nconnect() (langchain.vectorstores.AnalyticDB method)\nconnection_kwargs (langchain.vectorstores.SingleStoreDB attribute)\nconnection_string_from_db_params() (langchain.vectorstores.AnalyticDB class method)\nconstitutional_principles (langchain.chains.ConstitutionalChain attribute)\nconstruct() (langchain.llms.AI21 class method)\n(langchain.llms.AlephAlpha class method)\n(langchain.llms.Anthropic class method)\n(langchain.llms.Anyscale class method)\n(langchain.llms.Aviary class method)\n(langchain.llms.AzureOpenAI class method)\n(langchain.llms.Banana class method)\n(langchain.llms.Baseten class method)\n(langchain.llms.Beam class method)\n(langchain.llms.Bedrock class method)\n(langchain.llms.CerebriumAI class method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-19", "text": "(langchain.llms.CerebriumAI class method)\n(langchain.llms.Cohere class method)\n(langchain.llms.CTransformers class method)\n(langchain.llms.Databricks class method)\n(langchain.llms.DeepInfra class method)\n(langchain.llms.FakeListLLM class method)\n(langchain.llms.ForefrontAI class method)\n(langchain.llms.GooglePalm class method)\n(langchain.llms.GooseAI class method)\n(langchain.llms.GPT4All class method)\n(langchain.llms.HuggingFaceEndpoint class method)\n(langchain.llms.HuggingFaceHub class method)\n(langchain.llms.HuggingFacePipeline class method)\n(langchain.llms.HuggingFaceTextGenInference class method)\n(langchain.llms.HumanInputLLM class method)\n(langchain.llms.LlamaCpp class method)\n(langchain.llms.Modal class method)\n(langchain.llms.MosaicML class method)\n(langchain.llms.NLPCloud class method)\n(langchain.llms.OpenAI class method)\n(langchain.llms.OpenAIChat class method)\n(langchain.llms.OpenLM class method)\n(langchain.llms.Petals class method)\n(langchain.llms.PipelineAI class method)\n(langchain.llms.PredictionGuard class method)\n(langchain.llms.PromptLayerOpenAI class method)\n(langchain.llms.PromptLayerOpenAIChat class method)\n(langchain.llms.Replicate class method)\n(langchain.llms.RWKV class method)\n(langchain.llms.SagemakerEndpoint class method)\n(langchain.llms.SelfHostedHuggingFaceLLM class method)\n(langchain.llms.SelfHostedPipeline class method)\n(langchain.llms.StochasticAI class method)\n(langchain.llms.VertexAI class method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-20", "text": "(langchain.llms.VertexAI class method)\n(langchain.llms.Writer class method)\ncontent_handler (langchain.embeddings.SagemakerEndpointEmbeddings attribute)\n(langchain.llms.SagemakerEndpoint attribute)\ncontent_key (langchain.retrievers.AzureCognitiveSearchRetriever attribute)\nCONTENT_KEY (langchain.vectorstores.Qdrant attribute)\ncontext_erase (langchain.llms.GPT4All attribute)\ncontextual_control_threshold (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute)\n(langchain.llms.AlephAlpha attribute)\ncontinue_on_failure (langchain.document_loaders.GoogleApiYoutubeLoader attribute)\n(langchain.document_loaders.PlaywrightURLLoader attribute)\n(langchain.document_loaders.SeleniumURLLoader attribute)\ncontrol_log_additive (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute)\n(langchain.llms.AlephAlpha attribute)\nCONVERSATIONAL_REACT_DESCRIPTION (langchain.agents.AgentType attribute)\ncopy() (langchain.llms.AI21 method)\n(langchain.llms.AlephAlpha method)\n(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)\n(langchain.llms.Aviary method)\n(langchain.llms.AzureOpenAI method)\n(langchain.llms.Banana method)\n(langchain.llms.Baseten method)\n(langchain.llms.Beam method)\n(langchain.llms.Bedrock method)\n(langchain.llms.CerebriumAI method)\n(langchain.llms.Cohere method)\n(langchain.llms.CTransformers method)\n(langchain.llms.Databricks method)\n(langchain.llms.DeepInfra method)\n(langchain.llms.FakeListLLM method)\n(langchain.llms.ForefrontAI method)\n(langchain.llms.GooglePalm method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-21", "text": "(langchain.llms.ForefrontAI method)\n(langchain.llms.GooglePalm method)\n(langchain.llms.GooseAI method)\n(langchain.llms.GPT4All method)\n(langchain.llms.HuggingFaceEndpoint method)\n(langchain.llms.HuggingFaceHub method)\n(langchain.llms.HuggingFacePipeline method)\n(langchain.llms.HuggingFaceTextGenInference method)\n(langchain.llms.HumanInputLLM method)\n(langchain.llms.LlamaCpp method)\n(langchain.llms.Modal method)\n(langchain.llms.MosaicML method)\n(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenAIChat method)\n(langchain.llms.OpenLM method)\n(langchain.llms.Petals method)\n(langchain.llms.PipelineAI method)\n(langchain.llms.PredictionGuard method)\n(langchain.llms.PromptLayerOpenAI method)\n(langchain.llms.PromptLayerOpenAIChat method)\n(langchain.llms.Replicate method)\n(langchain.llms.RWKV method)\n(langchain.llms.SagemakerEndpoint method)\n(langchain.llms.SelfHostedHuggingFaceLLM method)\n(langchain.llms.SelfHostedPipeline method)\n(langchain.llms.StochasticAI method)\n(langchain.llms.VertexAI method)\n(langchain.llms.Writer method)\ncoroutine (langchain.agents.Tool attribute)\n(langchain.tools.StructuredTool attribute)\n(langchain.tools.Tool attribute)\nCosmosDBChatMessageHistory (class in langchain.memory)\ncount_tokens() (langchain.text_splitter.SentenceTransformersTokenTextSplitter method)\ncountPenalty (langchain.llms.AI21 attribute)\nCPP (langchain.text_splitter.Language attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-22", "text": "CPP (langchain.text_splitter.Language attribute)\ncreate() (langchain.retrievers.ElasticSearchBM25Retriever class method)\ncreate_assertions_prompt (langchain.chains.LLMSummarizationCheckerChain attribute)\ncreate_collection() (langchain.vectorstores.AnalyticDB method)\ncreate_csv_agent() (in module langchain.agents)\n(in module langchain.agents.agent_toolkits)\ncreate_documents() (langchain.text_splitter.TextSplitter method)\ncreate_draft_answer_prompt (langchain.chains.LLMCheckerChain attribute)\ncreate_index() (langchain.vectorstores.AtlasDB method)\n(langchain.vectorstores.ElasticVectorSearch method)\ncreate_index_if_not_exist() (langchain.vectorstores.Tair method)\ncreate_json_agent() (in module langchain.agents)\n(in module langchain.agents.agent_toolkits)\ncreate_llm_result() (langchain.llms.AzureOpenAI method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenLM method)\n(langchain.llms.PromptLayerOpenAI method)\ncreate_openapi_agent() (in module langchain.agents)\n(in module langchain.agents.agent_toolkits)\ncreate_outputs() (langchain.chains.LLMChain method)\ncreate_pandas_dataframe_agent() (in module langchain.agents)\n(in module langchain.agents.agent_toolkits)\ncreate_pbi_agent() (in module langchain.agents)\n(in module langchain.agents.agent_toolkits)\ncreate_pbi_chat_agent() (in module langchain.agents)\n(in module langchain.agents.agent_toolkits)\ncreate_prompt() (langchain.agents.Agent class method)\n(langchain.agents.ConversationalAgent class method)\n(langchain.agents.ConversationalChatAgent class method)\n(langchain.agents.ReActTextWorldAgent class method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-23", "text": "(langchain.agents.ReActTextWorldAgent class method)\n(langchain.agents.StructuredChatAgent class method)\n(langchain.agents.ZeroShotAgent class method)\ncreate_python_agent() (in module langchain.agents.agent_toolkits)\ncreate_spark_dataframe_agent() (in module langchain.agents)\n(in module langchain.agents.agent_toolkits)\ncreate_spark_sql_agent() (in module langchain.agents)\n(in module langchain.agents.agent_toolkits)\ncreate_sql_agent() (in module langchain.agents)\n(in module langchain.agents.agent_toolkits)\ncreate_tables_if_not_exists() (langchain.vectorstores.AnalyticDB method)\ncreate_vectorstore_agent() (in module langchain.agents)\n(in module langchain.agents.agent_toolkits)\ncreate_vectorstore_router_agent() (in module langchain.agents)\n(in module langchain.agents.agent_toolkits)\ncreator (langchain.document_loaders.GitHubIssuesLoader attribute)\ncredential (langchain.utilities.PowerBIDataset attribute)\ncredentials (langchain.llms.VertexAI attribute)\ncredentials_path (langchain.document_loaders.GoogleApiClient attribute)\n(langchain.document_loaders.GoogleDriveLoader attribute)\ncredentials_profile_name (langchain.embeddings.BedrockEmbeddings attribute)\n(langchain.embeddings.SagemakerEndpointEmbeddings attribute)\n(langchain.llms.Bedrock attribute)\n(langchain.llms.SagemakerEndpoint attribute)\ncritique_chain (langchain.chains.ConstitutionalChain attribute)\nCSVLoader (class in langchain.document_loaders)\ncurrent_plan (langchain.experimental.GenerativeAgentMemory attribute)\ncustom_headers (langchain.utilities.GraphQLAPIWrapper attribute)\ncypher_generation_chain (langchain.chains.GraphCypherQAChain attribute)\nD\ndaily_summaries (langchain.experimental.GenerativeAgent attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-24", "text": "D\ndaily_summaries (langchain.experimental.GenerativeAgent attribute)\ndata (langchain.document_loaders.MathpixPDFLoader property)\ndatabase (langchain.chains.SQLDatabaseChain attribute)\n(langchain.vectorstores.ClickhouseSettings attribute)\n(langchain.vectorstores.MyScaleSettings attribute)\nDataberryRetriever (class in langchain.retrievers)\nDataFrameLoader (class in langchain.document_loaders)\ndataset_id (langchain.document_loaders.ApifyDatasetLoader attribute)\n(langchain.utilities.PowerBIDataset attribute)\ndataset_mapping_function (langchain.document_loaders.ApifyDatasetLoader attribute)\ndatastore_url (langchain.retrievers.DataberryRetriever attribute)\ndb (langchain.agents.agent_toolkits.SparkSQLToolkit attribute)\n(langchain.agents.agent_toolkits.SQLDatabaseToolkit attribute)\ndecay_rate (langchain.retrievers.TimeWeightedVectorStoreRetriever attribute)\ndecider_chain (langchain.chains.SQLDatabaseSequentialChain attribute)\ndecode (langchain.text_splitter.Tokenizer attribute)\nDeepLake (class in langchain.vectorstores)\ndefault_output_key (langchain.output_parsers.RegexParser attribute)\ndefault_parser (langchain.document_loaders.WebBaseLoader attribute)\ndefault_request_timeout (langchain.llms.Anthropic attribute)\ndefault_salience (langchain.retrievers.TimeWeightedVectorStoreRetriever attribute)\ndelete() (langchain.memory.InMemoryEntityStore method)\n(langchain.memory.RedisEntityStore method)\n(langchain.memory.SQLiteEntityStore method)\n(langchain.utilities.TextRequestsWrapper method)\n(langchain.vectorstores.DeepLake method)\ndelete_collection() (langchain.vectorstores.AnalyticDB method)\n(langchain.vectorstores.Chroma method)\ndelete_dataset() (langchain.vectorstores.DeepLake method)\ndeployment_name (langchain.chat_models.AzureChatOpenAI attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-25", "text": "deployment_name (langchain.chat_models.AzureChatOpenAI attribute)\n(langchain.llms.AzureOpenAI attribute)\ndescription (langchain.agents.agent_toolkits.VectorStoreInfo attribute)\n(langchain.agents.Tool attribute)\n(langchain.output_parsers.ResponseSchema attribute)\n(langchain.tools.APIOperation attribute)\n(langchain.tools.BaseTool attribute)\n(langchain.tools.ClickTool attribute)\n(langchain.tools.CopyFileTool attribute)\n(langchain.tools.CurrentWebPageTool attribute)\n(langchain.tools.DeleteFileTool attribute)\n(langchain.tools.ExtractHyperlinksTool attribute)\n(langchain.tools.ExtractTextTool attribute)\n(langchain.tools.FileSearchTool attribute)\n(langchain.tools.GetElementsTool attribute)\n(langchain.tools.GmailCreateDraft attribute)\n(langchain.tools.GmailGetMessage attribute)\n(langchain.tools.GmailGetThread attribute)\n(langchain.tools.GmailSearch attribute)\n(langchain.tools.GmailSendMessage attribute)\n(langchain.tools.ListDirectoryTool attribute)\n(langchain.tools.MoveFileTool attribute)\n(langchain.tools.NavigateBackTool attribute)\n(langchain.tools.NavigateTool attribute)\n(langchain.tools.ReadFileTool attribute)\n(langchain.tools.ShellTool attribute)\n(langchain.tools.StructuredTool attribute)\n(langchain.tools.Tool attribute)\n(langchain.tools.WriteFileTool attribute)\ndeserialize_json_input() (langchain.chains.OpenAPIEndpointChain method)\ndevice (langchain.llms.SelfHostedHuggingFaceLLM attribute)\ndialect (langchain.agents.agent_toolkits.SQLDatabaseToolkit property)\ndict() (langchain.agents.Agent method)\n(langchain.agents.BaseMultiActionAgent method)\n(langchain.agents.BaseSingleActionAgent method)\n(langchain.agents.LLMSingleActionAgent method)\n(langchain.llms.AI21 method)\n(langchain.llms.AlephAlpha method)\n(langchain.llms.Anthropic method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-26", "text": "(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)\n(langchain.llms.Aviary method)\n(langchain.llms.AzureOpenAI method)\n(langchain.llms.Banana method)\n(langchain.llms.Baseten method)\n(langchain.llms.Beam method)\n(langchain.llms.Bedrock method)\n(langchain.llms.CerebriumAI method)\n(langchain.llms.Cohere method)\n(langchain.llms.CTransformers method)\n(langchain.llms.Databricks method)\n(langchain.llms.DeepInfra method)\n(langchain.llms.FakeListLLM method)\n(langchain.llms.ForefrontAI method)\n(langchain.llms.GooglePalm method)\n(langchain.llms.GooseAI method)\n(langchain.llms.GPT4All method)\n(langchain.llms.HuggingFaceEndpoint method)\n(langchain.llms.HuggingFaceHub method)\n(langchain.llms.HuggingFacePipeline method)\n(langchain.llms.HuggingFaceTextGenInference method)\n(langchain.llms.HumanInputLLM method)\n(langchain.llms.LlamaCpp method)\n(langchain.llms.Modal method)\n(langchain.llms.MosaicML method)\n(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenAIChat method)\n(langchain.llms.OpenLM method)\n(langchain.llms.Petals method)\n(langchain.llms.PipelineAI method)\n(langchain.llms.PredictionGuard method)\n(langchain.llms.PromptLayerOpenAI method)\n(langchain.llms.PromptLayerOpenAIChat method)\n(langchain.llms.Replicate method)\n(langchain.llms.RWKV method)\n(langchain.llms.SagemakerEndpoint method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-27", "text": "(langchain.llms.SagemakerEndpoint method)\n(langchain.llms.SelfHostedHuggingFaceLLM method)\n(langchain.llms.SelfHostedPipeline method)\n(langchain.llms.StochasticAI method)\n(langchain.llms.VertexAI method)\n(langchain.llms.Writer method)\n(langchain.prompts.BasePromptTemplate method)\n(langchain.prompts.FewShotPromptTemplate method)\n(langchain.prompts.FewShotPromptWithTemplates method)\nDiffbotLoader (class in langchain.document_loaders)\ndirection (langchain.document_loaders.GitHubIssuesLoader attribute)\nDirectoryLoader (class in langchain.document_loaders)\ndisallowed_special (langchain.llms.AzureOpenAI attribute)\n(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenAIChat attribute)\n(langchain.llms.OpenLM attribute)\n(langchain.llms.PromptLayerOpenAIChat attribute)\nDiscordChatLoader (class in langchain.document_loaders)\ndo_sample (langchain.llms.NLPCloud attribute)\n(langchain.llms.Petals attribute)\ndoc_content_chars_max (langchain.utilities.ArxivAPIWrapper attribute)\n(langchain.utilities.PubMedAPIWrapper attribute)\n(langchain.utilities.WikipediaAPIWrapper attribute)\nDocArrayHnswSearch (class in langchain.vectorstores)\nDocArrayInMemorySearch (class in langchain.vectorstores)\ndocs (langchain.retrievers.TFIDFRetriever attribute)\ndocset_id (langchain.document_loaders.DocugamiLoader attribute)\ndocument_ids (langchain.document_loaders.DocugamiLoader attribute)\n(langchain.document_loaders.GoogleDriveLoader attribute)\nDocx2txtLoader (class in langchain.document_loaders)\ndownload() (langchain.document_loaders.BlackboardLoader method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-28", "text": "download() (langchain.document_loaders.BlackboardLoader method)\ndrive_id (langchain.document_loaders.OneDriveLoader attribute)\ndrop() (langchain.vectorstores.Clickhouse method)\n(langchain.vectorstores.MyScale method)\ndrop_index() (langchain.vectorstores.Redis static method)\n(langchain.vectorstores.Tair static method)\ndrop_tables() (langchain.vectorstores.AnalyticDB method)\nDuckDBLoader (class in langchain.document_loaders)\nDynamoDBChatMessageHistory (class in langchain.memory)\nE\nearly_stopping (langchain.llms.NLPCloud attribute)\nearly_stopping_method (langchain.agents.AgentExecutor attribute)\necho (langchain.llms.AlephAlpha attribute)\n(langchain.llms.GPT4All attribute)\n(langchain.llms.LlamaCpp attribute)\nElasticSearchBM25Retriever (class in langchain.retrievers)\nElasticsearchEmbeddings (class in langchain.embeddings)\nElasticVectorSearch (class in langchain.vectorstores)\nemail (langchain.utilities.PubMedAPIWrapper attribute)\nembed_documents() (langchain.chains.HypotheticalDocumentEmbedder method)\n(langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding method)\n(langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding method)\n(langchain.embeddings.BedrockEmbeddings method)\n(langchain.embeddings.CohereEmbeddings method)\n(langchain.embeddings.DeepInfraEmbeddings method)\n(langchain.embeddings.ElasticsearchEmbeddings method)\n(langchain.embeddings.FakeEmbeddings method)\n(langchain.embeddings.HuggingFaceEmbeddings method)\n(langchain.embeddings.HuggingFaceHubEmbeddings method)\n(langchain.embeddings.HuggingFaceInstructEmbeddings method)\n(langchain.embeddings.LlamaCppEmbeddings method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-29", "text": "(langchain.embeddings.LlamaCppEmbeddings method)\n(langchain.embeddings.MiniMaxEmbeddings method)\n(langchain.embeddings.ModelScopeEmbeddings method)\n(langchain.embeddings.MosaicMLInstructorEmbeddings method)\n(langchain.embeddings.OpenAIEmbeddings method)\n(langchain.embeddings.SagemakerEndpointEmbeddings method)\n(langchain.embeddings.SelfHostedEmbeddings method)\n(langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings method)\n(langchain.embeddings.TensorflowHubEmbeddings method)\nembed_instruction (langchain.embeddings.DeepInfraEmbeddings attribute)\n(langchain.embeddings.HuggingFaceInstructEmbeddings attribute)\n(langchain.embeddings.MosaicMLInstructorEmbeddings attribute)\n(langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings attribute)\nembed_query() (langchain.chains.HypotheticalDocumentEmbedder method)\n(langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding method)\n(langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding method)\n(langchain.embeddings.BedrockEmbeddings method)\n(langchain.embeddings.CohereEmbeddings method)\n(langchain.embeddings.DeepInfraEmbeddings method)\n(langchain.embeddings.ElasticsearchEmbeddings method)\n(langchain.embeddings.FakeEmbeddings method)\n(langchain.embeddings.HuggingFaceEmbeddings method)\n(langchain.embeddings.HuggingFaceHubEmbeddings method)\n(langchain.embeddings.HuggingFaceInstructEmbeddings method)\n(langchain.embeddings.LlamaCppEmbeddings method)\n(langchain.embeddings.MiniMaxEmbeddings method)\n(langchain.embeddings.ModelScopeEmbeddings method)\n(langchain.embeddings.MosaicMLInstructorEmbeddings method)\n(langchain.embeddings.OpenAIEmbeddings method)\n(langchain.embeddings.SagemakerEndpointEmbeddings method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-30", "text": "(langchain.embeddings.SagemakerEndpointEmbeddings method)\n(langchain.embeddings.SelfHostedEmbeddings method)\n(langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings method)\n(langchain.embeddings.TensorflowHubEmbeddings method)\nembed_type_db (langchain.embeddings.MiniMaxEmbeddings attribute)\nembed_type_query (langchain.embeddings.MiniMaxEmbeddings attribute)\nembedding (langchain.llms.GPT4All attribute)\nembeddings (langchain.document_transformers.EmbeddingsRedundantFilter attribute)\n(langchain.retrievers.document_compressors.EmbeddingsFilter attribute)\n(langchain.retrievers.KNNRetriever attribute)\n(langchain.retrievers.PineconeHybridSearchRetriever attribute)\n(langchain.retrievers.SVMRetriever attribute)\nencode (langchain.text_splitter.Tokenizer attribute)\nencode_kwargs (langchain.embeddings.HuggingFaceEmbeddings attribute)\n(langchain.embeddings.HuggingFaceInstructEmbeddings attribute)\nendpoint_kwargs (langchain.embeddings.SagemakerEndpointEmbeddings attribute)\n(langchain.llms.SagemakerEndpoint attribute)\nendpoint_name (langchain.embeddings.SagemakerEndpointEmbeddings attribute)\n(langchain.llms.Databricks attribute)\n(langchain.llms.SagemakerEndpoint attribute)\nendpoint_url (langchain.embeddings.MiniMaxEmbeddings attribute)\n(langchain.embeddings.MosaicMLInstructorEmbeddings attribute)\n(langchain.llms.CerebriumAI attribute)\n(langchain.llms.ForefrontAI attribute)\n(langchain.llms.HuggingFaceEndpoint attribute)\n(langchain.llms.Modal attribute)\n(langchain.llms.MosaicML attribute)\nengines (langchain.utilities.searx_search.SearxSearchWrapper attribute)\n(langchain.utilities.SearxSearchWrapper attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-31", "text": "(langchain.utilities.SearxSearchWrapper attribute)\nentity_cache (langchain.memory.ConversationEntityMemory attribute)\nentity_extraction_chain (langchain.chains.GraphQAChain attribute)\nentity_extraction_prompt (langchain.memory.ConversationEntityMemory attribute)\n(langchain.memory.ConversationKGMemory attribute)\nentity_store (langchain.memory.ConversationEntityMemory attribute)\nentity_summarization_prompt (langchain.memory.ConversationEntityMemory attribute)\nerror (langchain.chains.OpenAIModerationChain attribute)\nescape_str() (langchain.vectorstores.Clickhouse method)\n(langchain.vectorstores.MyScale method)\nEverNoteLoader (class in langchain.document_loaders)\nexample_keys (langchain.prompts.example_selector.SemanticSimilarityExampleSelector attribute)\nexample_prompt (langchain.prompts.example_selector.LengthBasedExampleSelector attribute)\n(langchain.prompts.FewShotPromptTemplate attribute)\n(langchain.prompts.FewShotPromptWithTemplates attribute)\nexample_selector (langchain.prompts.FewShotPromptTemplate attribute)\n(langchain.prompts.FewShotPromptWithTemplates attribute)\nexample_separator (langchain.prompts.FewShotPromptTemplate attribute)\n(langchain.prompts.FewShotPromptWithTemplates attribute)\nexamples (langchain.agents.agent_toolkits.PowerBIToolkit attribute)\n(langchain.prompts.example_selector.LengthBasedExampleSelector attribute)\n(langchain.prompts.FewShotPromptTemplate attribute)\n(langchain.prompts.FewShotPromptWithTemplates attribute)\n(langchain.tools.QueryPowerBITool attribute)\nexecutable_path (langchain.document_loaders.SeleniumURLLoader attribute)\nexecute_task() (langchain.experimental.BabyAGI method)\nexists() (langchain.memory.InMemoryEntityStore method)\n(langchain.memory.RedisEntityStore method)\n(langchain.memory.SQLiteEntityStore method)\nextra (langchain.retrievers.WeaviateHybridSearchRetriever.Config attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-32", "text": "extra (langchain.retrievers.WeaviateHybridSearchRetriever.Config attribute)\nextract_video_id() (langchain.document_loaders.YoutubeLoader static method)\nF\nf16_kv (langchain.embeddings.LlamaCppEmbeddings attribute)\n(langchain.llms.GPT4All attribute)\n(langchain.llms.LlamaCpp attribute)\nFacebookChatLoader (class in langchain.document_loaders)\nFAISS (class in langchain.vectorstores)\nFaunaLoader (class in langchain.document_loaders)\nfetch_all() (langchain.document_loaders.WebBaseLoader method)\nfetch_data_from_telegram() (langchain.document_loaders.TelegramChatApiLoader method)\nfetch_k (langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector attribute)\nfetch_memories() (langchain.experimental.GenerativeAgentMemory method)\nfetch_place_details() (langchain.utilities.GooglePlacesAPIWrapper method)\nFigmaFileLoader (class in langchain.document_loaders)\nfile (langchain.document_loaders.OneDriveFileLoader attribute)\nfile_ids (langchain.document_loaders.GoogleDriveLoader attribute)\nfile_paths (langchain.document_loaders.DocugamiLoader attribute)\nfile_types (langchain.document_loaders.GoogleDriveLoader attribute)\nFileChatMessageHistory (class in langchain.memory)\nfilter (langchain.retrievers.ChatGPTPluginRetriever attribute)\nfolder_id (langchain.document_loaders.GoogleDriveLoader attribute)\nfolder_path (langchain.document_loaders.BlackboardLoader attribute)\n(langchain.document_loaders.OneDriveLoader attribute)\nforce_delete_by_path() (langchain.vectorstores.DeepLake class method)\nformat (langchain.output_parsers.DatetimeOutputParser attribute)\nformat() (langchain.prompts.BaseChatPromptTemplate method)\n(langchain.prompts.BasePromptTemplate method)\n(langchain.prompts.ChatPromptTemplate method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-33", "text": "(langchain.prompts.BasePromptTemplate method)\n(langchain.prompts.ChatPromptTemplate method)\n(langchain.prompts.FewShotPromptTemplate method)\n(langchain.prompts.FewShotPromptWithTemplates method)\n(langchain.prompts.PromptTemplate method)\nformat_messages() (langchain.prompts.BaseChatPromptTemplate method)\n(langchain.prompts.ChatPromptTemplate method)\n(langchain.prompts.MessagesPlaceholder method)\nformat_place_details() (langchain.utilities.GooglePlacesAPIWrapper method)\nformat_prompt() (langchain.prompts.BaseChatPromptTemplate method)\n(langchain.prompts.BasePromptTemplate method)\n(langchain.prompts.StringPromptTemplate method)\nfrequency_penalty (langchain.llms.AlephAlpha attribute)\n(langchain.llms.AzureOpenAI attribute)\n(langchain.llms.Cohere attribute)\n(langchain.llms.GooseAI attribute)\n(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenLM attribute)\nfrequencyPenalty (langchain.llms.AI21 attribute)\nfrom_agent_and_tools() (langchain.agents.AgentExecutor class method)\nfrom_api_key() (langchain.tools.BraveSearch class method)\nfrom_api_operation() (langchain.chains.OpenAPIEndpointChain class method)\nfrom_bearer_token() (langchain.document_loaders.TwitterTweetLoader class method)\nfrom_browser() (langchain.agents.agent_toolkits.PlayWrightBrowserToolkit class method)\nfrom_chains() (langchain.agents.MRKLChain class method)\nfrom_client_params() (langchain.memory.MomentoChatMessageHistory class method)\n(langchain.vectorstores.Typesense class method)\nfrom_colored_object_prompt() (langchain.chains.PALChain class method)\nfrom_components() (langchain.vectorstores.MatchingEngine class method)\nfrom_connection_string() (langchain.vectorstores.MongoDBAtlasVectorSearch class method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-34", "text": "from_connection_string() (langchain.vectorstores.MongoDBAtlasVectorSearch class method)\nfrom_credentials() (langchain.document_loaders.TrelloLoader class method)\n(langchain.embeddings.ElasticsearchEmbeddings class method)\nfrom_documents() (langchain.retrievers.TFIDFRetriever class method)\n(langchain.vectorstores.AnalyticDB class method)\n(langchain.vectorstores.AtlasDB class method)\n(langchain.vectorstores.Chroma class method)\n(langchain.vectorstores.Tair class method)\n(langchain.vectorstores.VectorStore class method)\nfrom_embeddings() (langchain.vectorstores.Annoy class method)\n(langchain.vectorstores.FAISS class method)\nfrom_es_connection() (langchain.embeddings.ElasticsearchEmbeddings class method)\nfrom_examples() (langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector class method)\n(langchain.prompts.example_selector.SemanticSimilarityExampleSelector class method)\n(langchain.prompts.PromptTemplate class method)\nfrom_existing_index() (langchain.vectorstores.Pinecone class method)\n(langchain.vectorstores.Redis class method)\n(langchain.vectorstores.Tair class method)\nfrom_file() (langchain.prompts.PromptTemplate class method)\n(langchain.tools.OpenAPISpec class method)\nfrom_function() (langchain.agents.Tool class method)\n(langchain.tools.StructuredTool class method)\n(langchain.tools.Tool class method)\nfrom_huggingface_tokenizer() (langchain.text_splitter.TextSplitter class method)\nfrom_jira_api_wrapper() (langchain.agents.agent_toolkits.JiraToolkit class method)\nfrom_language() (langchain.text_splitter.RecursiveCharacterTextSplitter class method)\nfrom_llm() (langchain.agents.agent_toolkits.OpenAPIToolkit class method)\n(langchain.chains.ChatVectorDBChain class method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-35", "text": "(langchain.chains.ChatVectorDBChain class method)\n(langchain.chains.ConstitutionalChain class method)\n(langchain.chains.ConversationalRetrievalChain class method)\n(langchain.chains.FlareChain class method)\n(langchain.chains.GraphCypherQAChain class method)\n(langchain.chains.GraphQAChain class method)\n(langchain.chains.HypotheticalDocumentEmbedder class method)\n(langchain.chains.LLMBashChain class method)\n(langchain.chains.LLMCheckerChain class method)\n(langchain.chains.LLMMathChain class method)\n(langchain.chains.LLMSummarizationCheckerChain class method)\n(langchain.chains.NebulaGraphQAChain class method)\n(langchain.chains.QAGenerationChain class method)\n(langchain.chains.SQLDatabaseChain class method)\n(langchain.chains.SQLDatabaseSequentialChain class method)\n(langchain.experimental.BabyAGI class method)\n(langchain.output_parsers.OutputFixingParser class method)\n(langchain.output_parsers.RetryOutputParser class method)\n(langchain.output_parsers.RetryWithErrorOutputParser class method)\n(langchain.retrievers.document_compressors.LLMChainExtractor class method)\n(langchain.retrievers.document_compressors.LLMChainFilter class method)\n(langchain.retrievers.SelfQueryRetriever class method)\nfrom_llm_and_ai_plugin() (langchain.agents.agent_toolkits.NLAToolkit class method)\nfrom_llm_and_ai_plugin_url() (langchain.agents.agent_toolkits.NLAToolkit class method)\nfrom_llm_and_api_docs() (langchain.chains.APIChain class method)\nfrom_llm_and_spec() (langchain.agents.agent_toolkits.NLAToolkit class method)\nfrom_llm_and_tools() (langchain.agents.Agent class method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-36", "text": "from_llm_and_tools() (langchain.agents.Agent class method)\n(langchain.agents.BaseSingleActionAgent class method)\n(langchain.agents.ConversationalAgent class method)\n(langchain.agents.ConversationalChatAgent class method)\n(langchain.agents.StructuredChatAgent class method)\n(langchain.agents.ZeroShotAgent class method)\nfrom_llm_and_url() (langchain.agents.agent_toolkits.NLAToolkit class method)\nfrom_math_prompt() (langchain.chains.PALChain class method)\nfrom_messages() (langchain.memory.ConversationSummaryMemory class method)\nfrom_model_id() (langchain.llms.HuggingFacePipeline class method)\nfrom_number (langchain.utilities.TwilioAPIWrapper attribute)\nfrom_openapi_spec() (langchain.tools.APIOperation class method)\nfrom_openapi_url() (langchain.tools.APIOperation class method)\nfrom_params() (langchain.chains.MapReduceChain class method)\n(langchain.document_loaders.MaxComputeLoader class method)\n(langchain.document_loaders.WeatherDataLoader class method)\n(langchain.retrievers.VespaRetriever class method)\n(langchain.vectorstores.DocArrayHnswSearch class method)\n(langchain.vectorstores.DocArrayInMemorySearch class method)\nfrom_pipeline() (langchain.llms.SelfHostedHuggingFaceLLM class method)\n(langchain.llms.SelfHostedPipeline class method)\nfrom_plugin_url() (langchain.tools.AIPluginTool class method)\nfrom_rail() (langchain.output_parsers.GuardrailsOutputParser class method)\nfrom_rail_string() (langchain.output_parsers.GuardrailsOutputParser class method)\nfrom_response_schemas() (langchain.output_parsers.StructuredOutputParser class method)\nfrom_secrets() (langchain.document_loaders.TwitterTweetLoader class method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-37", "text": "from_secrets() (langchain.document_loaders.TwitterTweetLoader class method)\nfrom_spec_dict() (langchain.tools.OpenAPISpec class method)\nfrom_string() (langchain.chains.LLMChain class method)\nfrom_template() (langchain.prompts.PromptTemplate class method)\nfrom_text() (langchain.tools.OpenAPISpec class method)\nfrom_texts() (langchain.retrievers.KNNRetriever class method)\n(langchain.retrievers.SVMRetriever class method)\n(langchain.retrievers.TFIDFRetriever class method)\n(langchain.vectorstores.AnalyticDB class method)\n(langchain.vectorstores.Annoy class method)\n(langchain.vectorstores.AtlasDB class method)\n(langchain.vectorstores.AwaDB class method)\n(langchain.vectorstores.Chroma class method)\n(langchain.vectorstores.Clickhouse class method)\n(langchain.vectorstores.DeepLake class method)\n(langchain.vectorstores.DocArrayHnswSearch class method)\n(langchain.vectorstores.DocArrayInMemorySearch class method)\n(langchain.vectorstores.ElasticVectorSearch class method)\n(langchain.vectorstores.FAISS class method)\n(langchain.vectorstores.LanceDB class method)\n(langchain.vectorstores.MatchingEngine class method)\n(langchain.vectorstores.Milvus class method)\n(langchain.vectorstores.MongoDBAtlasVectorSearch class method)\n(langchain.vectorstores.MyScale class method)\n(langchain.vectorstores.OpenSearchVectorSearch class method)\n(langchain.vectorstores.Pinecone class method)\n(langchain.vectorstores.Qdrant class method)\n(langchain.vectorstores.Redis class method)\n(langchain.vectorstores.SingleStoreDB class method)\n(langchain.vectorstores.SKLearnVectorStore class method)\n(langchain.vectorstores.SupabaseVectorStore class method)\n(langchain.vectorstores.Tair class method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-38", "text": "(langchain.vectorstores.Tair class method)\n(langchain.vectorstores.Tigris class method)\n(langchain.vectorstores.Typesense class method)\n(langchain.vectorstores.Vectara class method)\n(langchain.vectorstores.VectorStore class method)\n(langchain.vectorstores.Weaviate class method)\n(langchain.vectorstores.Zilliz class method)\nfrom_texts_return_keys() (langchain.vectorstores.Redis class method)\nfrom_tiktoken_encoder() (langchain.text_splitter.TextSplitter class method)\nfrom_uri() (langchain.utilities.SparkSQL class method)\nfrom_url() (langchain.tools.OpenAPISpec class method)\nfrom_url_and_method() (langchain.chains.OpenAPIEndpointChain class method)\nfrom_youtube_url() (langchain.document_loaders.YoutubeLoader class method)\nfrom_zapier_nla_wrapper() (langchain.agents.agent_toolkits.ZapierToolkit class method)\nFRONT_MATTER_REGEX (langchain.document_loaders.ObsidianLoader attribute)\nfull_key_prefix (langchain.memory.RedisEntityStore property)\nfull_table_name (langchain.memory.SQLiteEntityStore property)\nfunc (langchain.agents.Tool attribute)\n(langchain.tools.StructuredTool attribute)\n(langchain.tools.Tool attribute)\nfunction_name (langchain.utilities.LambdaWrapper attribute)\nG\nGCSDirectoryLoader (class in langchain.document_loaders)\nGCSFileLoader (class in langchain.document_loaders)\ngenerate() (langchain.chains.LLMChain method)\n(langchain.llms.AI21 method)\n(langchain.llms.AlephAlpha method)\n(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)\n(langchain.llms.Aviary method)\n(langchain.llms.AzureOpenAI method)\n(langchain.llms.Banana method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-39", "text": "(langchain.llms.AzureOpenAI method)\n(langchain.llms.Banana method)\n(langchain.llms.Baseten method)\n(langchain.llms.Beam method)\n(langchain.llms.Bedrock method)\n(langchain.llms.CerebriumAI method)\n(langchain.llms.Cohere method)\n(langchain.llms.CTransformers method)\n(langchain.llms.Databricks method)\n(langchain.llms.DeepInfra method)\n(langchain.llms.FakeListLLM method)\n(langchain.llms.ForefrontAI method)\n(langchain.llms.GooglePalm method)\n(langchain.llms.GooseAI method)\n(langchain.llms.GPT4All method)\n(langchain.llms.HuggingFaceEndpoint method)\n(langchain.llms.HuggingFaceHub method)\n(langchain.llms.HuggingFacePipeline method)\n(langchain.llms.HuggingFaceTextGenInference method)\n(langchain.llms.HumanInputLLM method)\n(langchain.llms.LlamaCpp method)\n(langchain.llms.Modal method)\n(langchain.llms.MosaicML method)\n(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenAIChat method)\n(langchain.llms.OpenLM method)\n(langchain.llms.Petals method)\n(langchain.llms.PipelineAI method)\n(langchain.llms.PredictionGuard method)\n(langchain.llms.PromptLayerOpenAI method)\n(langchain.llms.PromptLayerOpenAIChat method)\n(langchain.llms.Replicate method)\n(langchain.llms.RWKV method)\n(langchain.llms.SagemakerEndpoint method)\n(langchain.llms.SelfHostedHuggingFaceLLM method)\n(langchain.llms.SelfHostedPipeline method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-40", "text": "(langchain.llms.SelfHostedPipeline method)\n(langchain.llms.StochasticAI method)\n(langchain.llms.VertexAI method)\n(langchain.llms.Writer method)\ngenerate_dialogue_response() (langchain.experimental.GenerativeAgent method)\ngenerate_prompt() (langchain.llms.AI21 method)\n(langchain.llms.AlephAlpha method)\n(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)\n(langchain.llms.Aviary method)\n(langchain.llms.AzureOpenAI method)\n(langchain.llms.Banana method)\n(langchain.llms.Baseten method)\n(langchain.llms.Beam method)\n(langchain.llms.Bedrock method)\n(langchain.llms.CerebriumAI method)\n(langchain.llms.Cohere method)\n(langchain.llms.CTransformers method)\n(langchain.llms.Databricks method)\n(langchain.llms.DeepInfra method)\n(langchain.llms.FakeListLLM method)\n(langchain.llms.ForefrontAI method)\n(langchain.llms.GooglePalm method)\n(langchain.llms.GooseAI method)\n(langchain.llms.GPT4All method)\n(langchain.llms.HuggingFaceEndpoint method)\n(langchain.llms.HuggingFaceHub method)\n(langchain.llms.HuggingFacePipeline method)\n(langchain.llms.HuggingFaceTextGenInference method)\n(langchain.llms.HumanInputLLM method)\n(langchain.llms.LlamaCpp method)\n(langchain.llms.Modal method)\n(langchain.llms.MosaicML method)\n(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenAIChat method)\n(langchain.llms.OpenLM method)\n(langchain.llms.Petals method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-41", "text": "(langchain.llms.OpenLM method)\n(langchain.llms.Petals method)\n(langchain.llms.PipelineAI method)\n(langchain.llms.PredictionGuard method)\n(langchain.llms.PromptLayerOpenAI method)\n(langchain.llms.PromptLayerOpenAIChat method)\n(langchain.llms.Replicate method)\n(langchain.llms.RWKV method)\n(langchain.llms.SagemakerEndpoint method)\n(langchain.llms.SelfHostedHuggingFaceLLM method)\n(langchain.llms.SelfHostedPipeline method)\n(langchain.llms.StochasticAI method)\n(langchain.llms.VertexAI method)\n(langchain.llms.Writer method)\ngenerate_reaction() (langchain.experimental.GenerativeAgent method)\nGenerativeAgent (class in langchain.experimental)\nGenerativeAgentMemory (class in langchain.experimental)\nget() (langchain.memory.InMemoryEntityStore method)\n(langchain.memory.RedisEntityStore method)\n(langchain.memory.SQLiteEntityStore method)\n(langchain.utilities.TextRequestsWrapper method)\n(langchain.vectorstores.Chroma method)\nget_all_tool_names() (in module langchain.agents)\nget_allowed_tools() (langchain.agents.Agent method)\n(langchain.agents.BaseMultiActionAgent method)\n(langchain.agents.BaseSingleActionAgent method)\nget_answer_expr (langchain.chains.PALChain attribute)\nget_cleaned_operation_id() (langchain.tools.OpenAPISpec static method)\nget_collection() (langchain.vectorstores.AnalyticDB method)\nget_connection_string() (langchain.vectorstores.AnalyticDB class method)\nget_current_entities() (langchain.memory.ConversationKGMemory method)\nget_description() (langchain.tools.VectorStoreQATool static method)\n(langchain.tools.VectorStoreQAWithSourcesTool static method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-42", "text": "(langchain.tools.VectorStoreQAWithSourcesTool static method)\nget_format_instructions() (langchain.output_parsers.CommaSeparatedListOutputParser method)\n(langchain.output_parsers.DatetimeOutputParser method)\n(langchain.output_parsers.GuardrailsOutputParser method)\n(langchain.output_parsers.OutputFixingParser method)\n(langchain.output_parsers.PydanticOutputParser method)\n(langchain.output_parsers.RetryOutputParser method)\n(langchain.output_parsers.RetryWithErrorOutputParser method)\n(langchain.output_parsers.StructuredOutputParser method)\nget_full_header() (langchain.experimental.GenerativeAgent method)\nget_full_inputs() (langchain.agents.Agent method)\nget_input (langchain.retrievers.document_compressors.LLMChainExtractor attribute)\n(langchain.retrievers.document_compressors.LLMChainFilter attribute)\nget_knowledge_triplets() (langchain.memory.ConversationKGMemory method)\nget_methods_for_path() (langchain.tools.OpenAPISpec method)\nget_next_task() (langchain.experimental.BabyAGI method)\nget_num_rows() (langchain.document_loaders.PySparkDataFrameLoader method)\nget_num_tokens() (langchain.chat_models.ChatAnthropic method)\n(langchain.llms.AI21 method)\n(langchain.llms.AlephAlpha method)\n(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)\n(langchain.llms.Aviary method)\n(langchain.llms.AzureOpenAI method)\n(langchain.llms.Banana method)\n(langchain.llms.Baseten method)\n(langchain.llms.Beam method)\n(langchain.llms.Bedrock method)\n(langchain.llms.CerebriumAI method)\n(langchain.llms.Cohere method)\n(langchain.llms.CTransformers method)\n(langchain.llms.Databricks method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-43", "text": "(langchain.llms.CTransformers method)\n(langchain.llms.Databricks method)\n(langchain.llms.DeepInfra method)\n(langchain.llms.FakeListLLM method)\n(langchain.llms.ForefrontAI method)\n(langchain.llms.GooglePalm method)\n(langchain.llms.GooseAI method)\n(langchain.llms.GPT4All method)\n(langchain.llms.HuggingFaceEndpoint method)\n(langchain.llms.HuggingFaceHub method)\n(langchain.llms.HuggingFacePipeline method)\n(langchain.llms.HuggingFaceTextGenInference method)\n(langchain.llms.HumanInputLLM method)\n(langchain.llms.LlamaCpp method)\n(langchain.llms.Modal method)\n(langchain.llms.MosaicML method)\n(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenAIChat method)\n(langchain.llms.OpenLM method)\n(langchain.llms.Petals method)\n(langchain.llms.PipelineAI method)\n(langchain.llms.PredictionGuard method)\n(langchain.llms.PromptLayerOpenAI method)\n(langchain.llms.PromptLayerOpenAIChat method)\n(langchain.llms.Replicate method)\n(langchain.llms.RWKV method)\n(langchain.llms.SagemakerEndpoint method)\n(langchain.llms.SelfHostedHuggingFaceLLM method)\n(langchain.llms.SelfHostedPipeline method)\n(langchain.llms.StochasticAI method)\n(langchain.llms.VertexAI method)\n(langchain.llms.Writer method)\nget_num_tokens_from_messages() (langchain.chat_models.ChatOpenAI method)\n(langchain.llms.AI21 method)\n(langchain.llms.AlephAlpha method)\n(langchain.llms.Anthropic method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-44", "text": "(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)\n(langchain.llms.Aviary method)\n(langchain.llms.AzureOpenAI method)\n(langchain.llms.Banana method)\n(langchain.llms.Baseten method)\n(langchain.llms.Beam method)\n(langchain.llms.Bedrock method)\n(langchain.llms.CerebriumAI method)\n(langchain.llms.Cohere method)\n(langchain.llms.CTransformers method)\n(langchain.llms.Databricks method)\n(langchain.llms.DeepInfra method)\n(langchain.llms.FakeListLLM method)\n(langchain.llms.ForefrontAI method)\n(langchain.llms.GooglePalm method)\n(langchain.llms.GooseAI method)\n(langchain.llms.GPT4All method)\n(langchain.llms.HuggingFaceEndpoint method)\n(langchain.llms.HuggingFaceHub method)\n(langchain.llms.HuggingFacePipeline method)\n(langchain.llms.HuggingFaceTextGenInference method)\n(langchain.llms.HumanInputLLM method)\n(langchain.llms.LlamaCpp method)\n(langchain.llms.Modal method)\n(langchain.llms.MosaicML method)\n(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenAIChat method)\n(langchain.llms.OpenLM method)\n(langchain.llms.Petals method)\n(langchain.llms.PipelineAI method)\n(langchain.llms.PredictionGuard method)\n(langchain.llms.PromptLayerOpenAI method)\n(langchain.llms.PromptLayerOpenAIChat method)\n(langchain.llms.Replicate method)\n(langchain.llms.RWKV method)\n(langchain.llms.SagemakerEndpoint method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-45", "text": "(langchain.llms.SagemakerEndpoint method)\n(langchain.llms.SelfHostedHuggingFaceLLM method)\n(langchain.llms.SelfHostedPipeline method)\n(langchain.llms.StochasticAI method)\n(langchain.llms.VertexAI method)\n(langchain.llms.Writer method)\nget_operation() (langchain.tools.OpenAPISpec method)\nget_parameters_for_operation() (langchain.tools.OpenAPISpec method)\nget_params() (langchain.serpapi.SerpAPIWrapper method)\n(langchain.utilities.SerpAPIWrapper method)\nget_principles() (langchain.chains.ConstitutionalChain class method)\nget_processed_pdf() (langchain.document_loaders.MathpixPDFLoader method)\nget_referenced_schema() (langchain.tools.OpenAPISpec method)\nget_relevant_documents() (langchain.retrievers.ArxivRetriever method)\n(langchain.retrievers.AwsKendraIndexRetriever method)\n(langchain.retrievers.AzureCognitiveSearchRetriever method)\n(langchain.retrievers.ChatGPTPluginRetriever method)\n(langchain.retrievers.ContextualCompressionRetriever method)\n(langchain.retrievers.DataberryRetriever method)\n(langchain.retrievers.ElasticSearchBM25Retriever method)\n(langchain.retrievers.KNNRetriever method)\n(langchain.retrievers.MergerRetriever method)\n(langchain.retrievers.MetalRetriever method)\n(langchain.retrievers.PineconeHybridSearchRetriever method)\n(langchain.retrievers.PubMedRetriever method)\n(langchain.retrievers.RemoteLangChainRetriever method)\n(langchain.retrievers.SelfQueryRetriever method)\n(langchain.retrievers.SVMRetriever method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-46", "text": "(langchain.retrievers.SVMRetriever method)\n(langchain.retrievers.TFIDFRetriever method)\n(langchain.retrievers.TimeWeightedVectorStoreRetriever method)\n(langchain.retrievers.VespaRetriever method)\n(langchain.retrievers.WeaviateHybridSearchRetriever method)\n(langchain.retrievers.WikipediaRetriever method)\n(langchain.retrievers.ZepRetriever method)\nget_relevant_documents_with_filter() (langchain.retrievers.VespaRetriever method)\nget_request_body_for_operation() (langchain.tools.OpenAPISpec method)\nget_salient_docs() (langchain.retrievers.TimeWeightedVectorStoreRetriever method)\nget_schemas() (langchain.utilities.PowerBIDataset method)\nget_separators_for_language() (langchain.text_splitter.RecursiveCharacterTextSplitter static method)\nget_snippets() (langchain.utilities.DuckDuckGoSearchAPIWrapper method)\nget_stateful_documents() (in module langchain.document_transformers)\nget_sub_prompts() (langchain.llms.AzureOpenAI method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenLM method)\n(langchain.llms.PromptLayerOpenAI method)\nget_summary() (langchain.experimental.GenerativeAgent method)\nget_table_info() (langchain.utilities.PowerBIDataset method)\n(langchain.utilities.SparkSQL method)\nget_table_info_no_throw() (langchain.utilities.SparkSQL method)\nget_table_names() (langchain.utilities.PowerBIDataset method)\nget_text_length (langchain.prompts.example_selector.LengthBasedExampleSelector attribute)\nget_token_ids() (langchain.chat_models.ChatOpenAI method)\n(langchain.llms.AI21 method)\n(langchain.llms.AlephAlpha method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-47", "text": "(langchain.llms.AI21 method)\n(langchain.llms.AlephAlpha method)\n(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)\n(langchain.llms.Aviary method)\n(langchain.llms.AzureOpenAI method)\n(langchain.llms.Banana method)\n(langchain.llms.Baseten method)\n(langchain.llms.Beam method)\n(langchain.llms.Bedrock method)\n(langchain.llms.CerebriumAI method)\n(langchain.llms.Cohere method)\n(langchain.llms.CTransformers method)\n(langchain.llms.Databricks method)\n(langchain.llms.DeepInfra method)\n(langchain.llms.FakeListLLM method)\n(langchain.llms.ForefrontAI method)\n(langchain.llms.GooglePalm method)\n(langchain.llms.GooseAI method)\n(langchain.llms.GPT4All method)\n(langchain.llms.HuggingFaceEndpoint method)\n(langchain.llms.HuggingFaceHub method)\n(langchain.llms.HuggingFacePipeline method)\n(langchain.llms.HuggingFaceTextGenInference method)\n(langchain.llms.HumanInputLLM method)\n(langchain.llms.LlamaCpp method)\n(langchain.llms.Modal method)\n(langchain.llms.MosaicML method)\n(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenAIChat method)\n(langchain.llms.OpenLM method)\n(langchain.llms.Petals method)\n(langchain.llms.PipelineAI method)\n(langchain.llms.PredictionGuard method)\n(langchain.llms.PromptLayerOpenAI method)\n(langchain.llms.PromptLayerOpenAIChat method)\n(langchain.llms.Replicate method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-48", "text": "(langchain.llms.Replicate method)\n(langchain.llms.RWKV method)\n(langchain.llms.SagemakerEndpoint method)\n(langchain.llms.SelfHostedHuggingFaceLLM method)\n(langchain.llms.SelfHostedPipeline method)\n(langchain.llms.StochasticAI method)\n(langchain.llms.VertexAI method)\n(langchain.llms.Writer method)\nget_tools() (langchain.agents.agent_toolkits.AzureCognitiveServicesToolkit method)\n(langchain.agents.agent_toolkits.FileManagementToolkit method)\n(langchain.agents.agent_toolkits.GmailToolkit method)\n(langchain.agents.agent_toolkits.JiraToolkit method)\n(langchain.agents.agent_toolkits.JsonToolkit method)\n(langchain.agents.agent_toolkits.NLAToolkit method)\n(langchain.agents.agent_toolkits.OpenAPIToolkit method)\n(langchain.agents.agent_toolkits.PlayWrightBrowserToolkit method)\n(langchain.agents.agent_toolkits.PowerBIToolkit method)\n(langchain.agents.agent_toolkits.SparkSQLToolkit method)\n(langchain.agents.agent_toolkits.SQLDatabaseToolkit method)\n(langchain.agents.agent_toolkits.VectorStoreRouterToolkit method)\n(langchain.agents.agent_toolkits.VectorStoreToolkit method)\n(langchain.agents.agent_toolkits.ZapierToolkit method)\nget_usable_table_names() (langchain.utilities.SparkSQL method)\nGitbookLoader (class in langchain.document_loaders)\nGitLoader (class in langchain.document_loaders)\ngl (langchain.utilities.GoogleSerperAPIWrapper attribute)\nglobals (langchain.python.PythonREPL attribute)\n(langchain.utilities.PythonREPL attribute)\nGO (langchain.text_splitter.Language attribute)\ngoogle_api_client (langchain.document_loaders.GoogleApiYoutubeLoader attribute)\ngoogle_api_key (langchain.chat_models.ChatGooglePalm attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-49", "text": "google_api_key (langchain.chat_models.ChatGooglePalm attribute)\n(langchain.utilities.GoogleSearchAPIWrapper attribute)\ngoogle_cse_id (langchain.utilities.GoogleSearchAPIWrapper attribute)\nGoogleApiClient (class in langchain.document_loaders)\nGoogleApiYoutubeLoader (class in langchain.document_loaders)\ngplaces_api_key (langchain.utilities.GooglePlacesAPIWrapper attribute)\ngraph (langchain.chains.GraphCypherQAChain attribute)\n(langchain.chains.GraphQAChain attribute)\n(langchain.chains.NebulaGraphQAChain attribute)\ngraphql_endpoint (langchain.utilities.GraphQLAPIWrapper attribute)\ngroup_id (langchain.utilities.PowerBIDataset attribute)\nguard (langchain.output_parsers.GuardrailsOutputParser attribute)\nGutenbergLoader (class in langchain.document_loaders)\nH\nhandle_parsing_errors (langchain.agents.AgentExecutor attribute)\nhandle_tool_error (langchain.tools.BaseTool attribute)\n(langchain.tools.Tool attribute)\nhardware (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute)\n(langchain.llms.SelfHostedHuggingFaceLLM attribute)\n(langchain.llms.SelfHostedPipeline attribute)\nheaders (langchain.document_loaders.MathpixPDFLoader property)\n(langchain.retrievers.RemoteLangChainRetriever attribute)\n(langchain.utilities.PowerBIDataset property)\n(langchain.utilities.searx_search.SearxSearchWrapper attribute)\n(langchain.utilities.SearxSearchWrapper attribute)\n(langchain.utilities.TextRequestsWrapper attribute)\nheadless (langchain.document_loaders.PlaywrightURLLoader attribute)\n(langchain.document_loaders.SeleniumURLLoader attribute)\nhl (langchain.utilities.GoogleSerperAPIWrapper attribute)\nHNLoader (class in langchain.document_loaders)\nhost (langchain.llms.Databricks attribute)\n(langchain.vectorstores.ClickhouseSettings attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-50", "text": "(langchain.vectorstores.ClickhouseSettings attribute)\n(langchain.vectorstores.MyScaleSettings attribute)\nhosting (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute)\nHTML (langchain.text_splitter.Language attribute)\nHuggingFaceDatasetLoader (class in langchain.document_loaders)\nhuman_prefix (langchain.memory.ConversationBufferMemory attribute)\n(langchain.memory.ConversationBufferWindowMemory attribute)\n(langchain.memory.ConversationEntityMemory attribute)\n(langchain.memory.ConversationKGMemory attribute)\n(langchain.memory.ConversationStringBufferMemory attribute)\n(langchain.memory.ConversationTokenBufferMemory attribute)\nI\nIFixitLoader (class in langchain.document_loaders)\nImageCaptionLoader (class in langchain.document_loaders)\nimpersonated_user_name (langchain.utilities.PowerBIDataset attribute)\nimportance_weight (langchain.experimental.GenerativeAgentMemory attribute)\nIMSDbLoader (class in langchain.document_loaders)\ninclude_prs (langchain.document_loaders.GitHubIssuesLoader attribute)\nindex (langchain.retrievers.KNNRetriever attribute)\n(langchain.retrievers.PineconeHybridSearchRetriever attribute)\n(langchain.retrievers.SVMRetriever attribute)\nindex_name (langchain.retrievers.AzureCognitiveSearchRetriever attribute)\nindex_param (langchain.vectorstores.ClickhouseSettings attribute)\n(langchain.vectorstores.MyScaleSettings attribute)\nindex_query_params (langchain.vectorstores.ClickhouseSettings attribute)\nindex_type (langchain.vectorstores.ClickhouseSettings attribute)\n(langchain.vectorstores.MyScaleSettings attribute)\ninference_fn (langchain.embeddings.SelfHostedEmbeddings attribute)\n(langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute)\n(langchain.llms.SelfHostedHuggingFaceLLM attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-51", "text": "(langchain.llms.SelfHostedHuggingFaceLLM attribute)\n(langchain.llms.SelfHostedPipeline attribute)\ninference_kwargs (langchain.embeddings.SelfHostedEmbeddings attribute)\ninitialize_agent() (in module langchain.agents)\ninject_instruction_format (langchain.llms.MosaicML attribute)\nInMemoryDocstore (class in langchain.docstore)\ninput_func (langchain.tools.HumanInputRun attribute)\ninput_key (langchain.chains.QAGenerationChain attribute)\n(langchain.memory.ConversationStringBufferMemory attribute)\n(langchain.memory.VectorStoreRetrieverMemory attribute)\n(langchain.retrievers.RemoteLangChainRetriever attribute)\ninput_keys (langchain.chains.ConstitutionalChain property)\n(langchain.chains.ConversationChain property)\n(langchain.chains.FlareChain property)\n(langchain.chains.HypotheticalDocumentEmbedder property)\n(langchain.chains.QAGenerationChain property)\n(langchain.experimental.BabyAGI property)\n(langchain.prompts.example_selector.SemanticSimilarityExampleSelector attribute)\ninput_variables (langchain.chains.SequentialChain attribute)\n(langchain.chains.TransformChain attribute)\n(langchain.prompts.BasePromptTemplate attribute)\n(langchain.prompts.FewShotPromptTemplate attribute)\n(langchain.prompts.FewShotPromptWithTemplates attribute)\n(langchain.prompts.MessagesPlaceholder property)\n(langchain.prompts.PromptTemplate attribute)\nis_public_page() (langchain.document_loaders.ConfluenceLoader method)\nis_single_input (langchain.tools.BaseTool property)\nIuguLoader (class in langchain.document_loaders)\nJ\nJAVA (langchain.text_splitter.Language attribute)\nJoplinLoader (class in langchain.document_loaders)\nJS (langchain.text_splitter.Language attribute)\njson() (langchain.llms.AI21 method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-52", "text": "json() (langchain.llms.AI21 method)\n(langchain.llms.AlephAlpha method)\n(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)\n(langchain.llms.Aviary method)\n(langchain.llms.AzureOpenAI method)\n(langchain.llms.Banana method)\n(langchain.llms.Baseten method)\n(langchain.llms.Beam method)\n(langchain.llms.Bedrock method)\n(langchain.llms.CerebriumAI method)\n(langchain.llms.Cohere method)\n(langchain.llms.CTransformers method)\n(langchain.llms.Databricks method)\n(langchain.llms.DeepInfra method)\n(langchain.llms.FakeListLLM method)\n(langchain.llms.ForefrontAI method)\n(langchain.llms.GooglePalm method)\n(langchain.llms.GooseAI method)\n(langchain.llms.GPT4All method)\n(langchain.llms.HuggingFaceEndpoint method)\n(langchain.llms.HuggingFaceHub method)\n(langchain.llms.HuggingFacePipeline method)\n(langchain.llms.HuggingFaceTextGenInference method)\n(langchain.llms.HumanInputLLM method)\n(langchain.llms.LlamaCpp method)\n(langchain.llms.Modal method)\n(langchain.llms.MosaicML method)\n(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenAIChat method)\n(langchain.llms.OpenLM method)\n(langchain.llms.Petals method)\n(langchain.llms.PipelineAI method)\n(langchain.llms.PredictionGuard method)\n(langchain.llms.PromptLayerOpenAI method)\n(langchain.llms.PromptLayerOpenAIChat method)\n(langchain.llms.Replicate method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-53", "text": "(langchain.llms.Replicate method)\n(langchain.llms.RWKV method)\n(langchain.llms.SagemakerEndpoint method)\n(langchain.llms.SelfHostedHuggingFaceLLM method)\n(langchain.llms.SelfHostedPipeline method)\n(langchain.llms.StochasticAI method)\n(langchain.llms.VertexAI method)\n(langchain.llms.Writer method)\njson_agent (langchain.agents.agent_toolkits.OpenAPIToolkit attribute)\nJSONLoader (class in langchain.document_loaders)\nK\nk (langchain.chains.QAGenerationChain attribute)\n(langchain.chains.VectorDBQA attribute)\n(langchain.chains.VectorDBQAWithSourcesChain attribute)\n(langchain.llms.Cohere attribute)\n(langchain.memory.ConversationBufferWindowMemory attribute)\n(langchain.memory.ConversationEntityMemory attribute)\n(langchain.memory.ConversationKGMemory attribute)\n(langchain.prompts.example_selector.SemanticSimilarityExampleSelector attribute)\n(langchain.retrievers.AwsKendraIndexRetriever attribute)\n(langchain.retrievers.document_compressors.EmbeddingsFilter attribute)\n(langchain.retrievers.KNNRetriever attribute)\n(langchain.retrievers.SVMRetriever attribute)\n(langchain.retrievers.TFIDFRetriever attribute)\n(langchain.retrievers.TimeWeightedVectorStoreRetriever attribute)\n(langchain.utilities.BingSearchAPIWrapper attribute)\n(langchain.utilities.DuckDuckGoSearchAPIWrapper attribute)\n(langchain.utilities.GoogleSearchAPIWrapper attribute)\n(langchain.utilities.GoogleSerperAPIWrapper attribute)\n(langchain.utilities.MetaphorSearchAPIWrapper attribute)\n(langchain.utilities.searx_search.SearxSearchWrapper attribute)\n(langchain.utilities.SearxSearchWrapper attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-54", "text": "(langchain.utilities.SearxSearchWrapper attribute)\nkclient (langchain.retrievers.AwsKendraIndexRetriever attribute)\nkendraindex (langchain.retrievers.AwsKendraIndexRetriever attribute)\nkey (langchain.memory.RedisChatMessageHistory property)\nkey_prefix (langchain.memory.RedisEntityStore attribute)\nkg (langchain.memory.ConversationKGMemory attribute)\nknowledge_extraction_prompt (langchain.memory.ConversationKGMemory attribute)\nL\nlabels (langchain.document_loaders.GitHubIssuesLoader attribute)\nLanceDB (class in langchain.vectorstores)\nlang (langchain.utilities.WikipediaAPIWrapper attribute)\n langchain.agents\n \nmodule\n langchain.agents.agent_toolkits\n \nmodule\n langchain.chains\n \nmodule\n langchain.chat_models\n \nmodule\n langchain.docstore\n \nmodule\n langchain.document_loaders\n \nmodule\n langchain.document_transformers\n \nmodule\n langchain.embeddings\n \nmodule\n langchain.llms\n \nmodule\n langchain.memory\n \nmodule\n langchain.output_parsers\n \nmodule\n langchain.prompts\n \nmodule\n langchain.prompts.example_selector\n \nmodule\n langchain.python\n \nmodule\n langchain.retrievers\n \nmodule\n langchain.retrievers.document_compressors\n \nmodule\n langchain.serpapi\n \nmodule\n langchain.text_splitter\n \nmodule\n langchain.tools\n \nmodule\n langchain.utilities\n \nmodule\n langchain.utilities.searx_search\n \nmodule\n langchain.vectorstores\n \nmodule\nLanguage (class in langchain.text_splitter)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-55", "text": "module\nLanguage (class in langchain.text_splitter)\nlanguagecode (langchain.retrievers.AwsKendraIndexRetriever attribute)\nlast_n_tokens_size (langchain.llms.LlamaCpp attribute)\nlast_refreshed (langchain.experimental.GenerativeAgent attribute)\nLATEX (langchain.text_splitter.Language attribute)\nLatexTextSplitter (class in langchain.text_splitter)\nlazy_load() (langchain.document_loaders.AirtableLoader method)\n(langchain.document_loaders.BibtexLoader method)\n(langchain.document_loaders.FaunaLoader method)\n(langchain.document_loaders.GitHubIssuesLoader method)\n(langchain.document_loaders.HuggingFaceDatasetLoader method)\n(langchain.document_loaders.JoplinLoader method)\n(langchain.document_loaders.MaxComputeLoader method)\n(langchain.document_loaders.PDFMinerLoader method)\n(langchain.document_loaders.PyPDFium2Loader method)\n(langchain.document_loaders.PyPDFLoader method)\n(langchain.document_loaders.PySparkDataFrameLoader method)\n(langchain.document_loaders.SnowflakeLoader method)\n(langchain.document_loaders.ToMarkdownLoader method)\n(langchain.document_loaders.TomlLoader method)\n(langchain.document_loaders.WeatherDataLoader method)\nlength (langchain.llms.ForefrontAI attribute)\nlength_no_input (langchain.llms.NLPCloud attribute)\nlength_penalty (langchain.llms.NLPCloud attribute)\nlib (langchain.llms.CTransformers attribute)\nlist_assertions_prompt (langchain.chains.LLMCheckerChain attribute)\nllm (langchain.agents.agent_toolkits.PowerBIToolkit attribute)\n(langchain.agents.agent_toolkits.SparkSQLToolkit attribute)\n(langchain.agents.agent_toolkits.SQLDatabaseToolkit attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-56", "text": "(langchain.agents.agent_toolkits.SQLDatabaseToolkit attribute)\n(langchain.agents.agent_toolkits.VectorStoreRouterToolkit attribute)\n(langchain.agents.agent_toolkits.VectorStoreToolkit attribute)\n(langchain.chains.LLMBashChain attribute)\n(langchain.chains.LLMChain attribute)\n(langchain.chains.LLMCheckerChain attribute)\n(langchain.chains.LLMMathChain attribute)\n(langchain.chains.LLMSummarizationCheckerChain attribute)\n(langchain.chains.PALChain attribute)\n(langchain.chains.SQLDatabaseChain attribute)\n(langchain.experimental.GenerativeAgent attribute)\n(langchain.experimental.GenerativeAgentMemory attribute)\n(langchain.memory.ConversationEntityMemory attribute)\n(langchain.memory.ConversationKGMemory attribute)\n(langchain.memory.ConversationTokenBufferMemory attribute)\nllm_chain (langchain.agents.Agent attribute)\n(langchain.agents.LLMSingleActionAgent attribute)\n(langchain.chains.HypotheticalDocumentEmbedder attribute)\n(langchain.chains.LLMBashChain attribute)\n(langchain.chains.LLMMathChain attribute)\n(langchain.chains.LLMRequestsChain attribute)\n(langchain.chains.PALChain attribute)\n(langchain.chains.QAGenerationChain attribute)\n(langchain.chains.SQLDatabaseChain attribute)\n(langchain.retrievers.document_compressors.LLMChainExtractor attribute)\n(langchain.retrievers.document_compressors.LLMChainFilter attribute)\n(langchain.retrievers.SelfQueryRetriever attribute)\n(langchain.tools.QueryPowerBITool attribute)\nllm_prefix (langchain.agents.Agent property)\n(langchain.agents.ConversationalAgent property)\n(langchain.agents.ConversationalChatAgent property)\n(langchain.agents.StructuredChatAgent property)\n(langchain.agents.ZeroShotAgent property)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-57", "text": "(langchain.agents.ZeroShotAgent property)\nload() (langchain.document_loaders.AirbyteJSONLoader method)\n(langchain.document_loaders.AirtableLoader method)\n(langchain.document_loaders.ApifyDatasetLoader method)\n(langchain.document_loaders.ArxivLoader method)\n(langchain.document_loaders.AZLyricsLoader method)\n(langchain.document_loaders.AzureBlobStorageContainerLoader method)\n(langchain.document_loaders.AzureBlobStorageFileLoader method)\n(langchain.document_loaders.BibtexLoader method)\n(langchain.document_loaders.BigQueryLoader method)\n(langchain.document_loaders.BiliBiliLoader method)\n(langchain.document_loaders.BlackboardLoader method)\n(langchain.document_loaders.BlockchainDocumentLoader method)\n(langchain.document_loaders.BSHTMLLoader method)\n(langchain.document_loaders.ChatGPTLoader method)\n(langchain.document_loaders.CollegeConfidentialLoader method)\n(langchain.document_loaders.ConfluenceLoader method)\n(langchain.document_loaders.CoNLLULoader method)\n(langchain.document_loaders.CSVLoader method)\n(langchain.document_loaders.DataFrameLoader method)\n(langchain.document_loaders.DiffbotLoader method)\n(langchain.document_loaders.DirectoryLoader method)\n(langchain.document_loaders.DiscordChatLoader method)\n(langchain.document_loaders.DocugamiLoader method)\n(langchain.document_loaders.Docx2txtLoader method)\n(langchain.document_loaders.DuckDBLoader method)\n(langchain.document_loaders.EverNoteLoader method)\n(langchain.document_loaders.FacebookChatLoader method)\n(langchain.document_loaders.FaunaLoader method)\n(langchain.document_loaders.FigmaFileLoader method)\n(langchain.document_loaders.GCSDirectoryLoader method)\n(langchain.document_loaders.GCSFileLoader method)\n(langchain.document_loaders.GitbookLoader method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-58", "text": "(langchain.document_loaders.GitbookLoader method)\n(langchain.document_loaders.GitHubIssuesLoader method)\n(langchain.document_loaders.GitLoader method)\n(langchain.document_loaders.GoogleApiYoutubeLoader method)\n(langchain.document_loaders.GoogleDriveLoader method)\n(langchain.document_loaders.GutenbergLoader method)\n(langchain.document_loaders.HNLoader method)\n(langchain.document_loaders.HuggingFaceDatasetLoader method)\n(langchain.document_loaders.IFixitLoader method)\n(langchain.document_loaders.ImageCaptionLoader method)\n(langchain.document_loaders.IMSDbLoader method)\n(langchain.document_loaders.IuguLoader method)\n(langchain.document_loaders.JoplinLoader method)\n(langchain.document_loaders.JSONLoader method)\n(langchain.document_loaders.MastodonTootsLoader method)\n(langchain.document_loaders.MathpixPDFLoader method)\n(langchain.document_loaders.MaxComputeLoader method)\n(langchain.document_loaders.ModernTreasuryLoader method)\n(langchain.document_loaders.MWDumpLoader method)\n(langchain.document_loaders.NotebookLoader method)\n(langchain.document_loaders.NotionDBLoader method)\n(langchain.document_loaders.NotionDirectoryLoader method)\n(langchain.document_loaders.ObsidianLoader method)\n(langchain.document_loaders.OneDriveFileLoader method)\n(langchain.document_loaders.OneDriveLoader method)\n(langchain.document_loaders.OnlinePDFLoader method)\n(langchain.document_loaders.OutlookMessageLoader method)\n(langchain.document_loaders.PDFMinerLoader method)\n(langchain.document_loaders.PDFMinerPDFasHTMLLoader method)\n(langchain.document_loaders.PDFPlumberLoader method)\n(langchain.document_loaders.PlaywrightURLLoader method)\n(langchain.document_loaders.PsychicLoader method)\n(langchain.document_loaders.PyMuPDFLoader method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-59", "text": "(langchain.document_loaders.PyMuPDFLoader method)\n(langchain.document_loaders.PyPDFDirectoryLoader method)\n(langchain.document_loaders.PyPDFium2Loader method)\n(langchain.document_loaders.PyPDFLoader method)\n(langchain.document_loaders.PySparkDataFrameLoader method)\n(langchain.document_loaders.ReadTheDocsLoader method)\n(langchain.document_loaders.RedditPostsLoader method)\n(langchain.document_loaders.RoamLoader method)\n(langchain.document_loaders.S3DirectoryLoader method)\n(langchain.document_loaders.S3FileLoader method)\n(langchain.document_loaders.SeleniumURLLoader method)\n(langchain.document_loaders.SitemapLoader method)\n(langchain.document_loaders.SlackDirectoryLoader method)\n(langchain.document_loaders.SnowflakeLoader method)\n(langchain.document_loaders.SpreedlyLoader method)\n(langchain.document_loaders.SRTLoader method)\n(langchain.document_loaders.StripeLoader method)\n(langchain.document_loaders.TelegramChatApiLoader method)\n(langchain.document_loaders.TelegramChatFileLoader method)\n(langchain.document_loaders.TextLoader method)\n(langchain.document_loaders.ToMarkdownLoader method)\n(langchain.document_loaders.TomlLoader method)\n(langchain.document_loaders.TrelloLoader method)\n(langchain.document_loaders.TwitterTweetLoader method)\n(langchain.document_loaders.UnstructuredURLLoader method)\n(langchain.document_loaders.WeatherDataLoader method)\n(langchain.document_loaders.WebBaseLoader method)\n(langchain.document_loaders.WhatsAppChatLoader method)\n(langchain.document_loaders.WikipediaLoader method)\n(langchain.document_loaders.YoutubeLoader method)\n(langchain.utilities.ArxivAPIWrapper method)\n(langchain.utilities.PubMedAPIWrapper method)\n(langchain.utilities.WikipediaAPIWrapper method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-60", "text": "(langchain.utilities.WikipediaAPIWrapper method)\nload_agent() (in module langchain.agents)\nload_all_available_meta (langchain.utilities.ArxivAPIWrapper attribute)\n(langchain.utilities.PubMedAPIWrapper attribute)\n(langchain.utilities.WikipediaAPIWrapper attribute)\nload_all_recursively (langchain.document_loaders.BlackboardLoader attribute)\nload_chain() (in module langchain.chains)\nload_comments() (langchain.document_loaders.HNLoader method)\nload_device() (langchain.document_loaders.IFixitLoader method)\nload_docs() (langchain.utilities.PubMedAPIWrapper method)\nload_file() (langchain.document_loaders.DirectoryLoader method)\nload_fn_kwargs (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute)\n(langchain.llms.SelfHostedHuggingFaceLLM attribute)\n(langchain.llms.SelfHostedPipeline attribute)\nload_guide() (langchain.document_loaders.IFixitLoader method)\nload_huggingface_tool() (in module langchain.agents)\nload_local() (langchain.vectorstores.Annoy class method)\n(langchain.vectorstores.AwaDB method)\n(langchain.vectorstores.FAISS class method)\nload_max_docs (langchain.utilities.ArxivAPIWrapper attribute)\n(langchain.utilities.PubMedAPIWrapper attribute)\nload_memory_variables() (langchain.experimental.GenerativeAgentMemory method)\n(langchain.memory.CombinedMemory method)\n(langchain.memory.ConversationBufferMemory method)\n(langchain.memory.ConversationBufferWindowMemory method)\n(langchain.memory.ConversationEntityMemory method)\n(langchain.memory.ConversationKGMemory method)\n(langchain.memory.ConversationStringBufferMemory method)\n(langchain.memory.ConversationSummaryBufferMemory method)\n(langchain.memory.ConversationSummaryMemory method)\n(langchain.memory.ConversationTokenBufferMemory method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-61", "text": "(langchain.memory.ConversationTokenBufferMemory method)\n(langchain.memory.ReadOnlySharedMemory method)\n(langchain.memory.SimpleMemory method)\n(langchain.memory.VectorStoreRetrieverMemory method)\nload_messages() (langchain.memory.CosmosDBChatMessageHistory method)\nload_page() (langchain.document_loaders.NotionDBLoader method)\nload_prompt() (in module langchain.prompts)\nload_questions_and_answers() (langchain.document_loaders.IFixitLoader method)\nload_results() (langchain.document_loaders.HNLoader method)\nload_suggestions() (langchain.document_loaders.IFixitLoader static method)\nload_tools() (in module langchain.agents)\nload_trashed_files (langchain.document_loaders.GoogleDriveLoader attribute)\nlocals (langchain.python.PythonREPL attribute)\n(langchain.utilities.PythonREPL attribute)\nlocation (langchain.llms.VertexAI attribute)\nlog_probs (langchain.llms.AlephAlpha attribute)\nlogit_bias (langchain.llms.AlephAlpha attribute)\n(langchain.llms.AzureOpenAI attribute)\n(langchain.llms.GooseAI attribute)\n(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenLM attribute)\nlogitBias (langchain.llms.AI21 attribute)\nlogits_all (langchain.embeddings.LlamaCppEmbeddings attribute)\n(langchain.llms.GPT4All attribute)\n(langchain.llms.LlamaCpp attribute)\nlogprobs (langchain.llms.LlamaCpp attribute)\n(langchain.llms.Writer attribute)\nlookup_tool() (langchain.agents.AgentExecutor method)\nlora_base (langchain.llms.LlamaCpp attribute)\nlora_path (langchain.llms.LlamaCpp attribute)\nM\nMARKDOWN (langchain.text_splitter.Language attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-62", "text": "M\nMARKDOWN (langchain.text_splitter.Language attribute)\nMarkdownTextSplitter (class in langchain.text_splitter)\nMastodonTootsLoader (class in langchain.document_loaders)\nMatchingEngine (class in langchain.vectorstores)\nMathpixPDFLoader (class in langchain.document_loaders)\nmax_checks (langchain.chains.LLMSummarizationCheckerChain attribute)\nmax_execution_time (langchain.agents.AgentExecutor attribute)\nmax_iter (langchain.chains.FlareChain attribute)\nmax_iterations (langchain.agents.agent_toolkits.PowerBIToolkit attribute)\n(langchain.agents.AgentExecutor attribute)\n(langchain.tools.QueryPowerBITool attribute)\nmax_length (langchain.llms.NLPCloud attribute)\n(langchain.llms.Petals attribute)\n(langchain.prompts.example_selector.LengthBasedExampleSelector attribute)\nmax_marginal_relevance_search() (langchain.vectorstores.Annoy method)\n(langchain.vectorstores.Chroma method)\n(langchain.vectorstores.DeepLake method)\n(langchain.vectorstores.FAISS method)\n(langchain.vectorstores.Milvus method)\n(langchain.vectorstores.Qdrant method)\n(langchain.vectorstores.SKLearnVectorStore method)\n(langchain.vectorstores.SupabaseVectorStore method)\n(langchain.vectorstores.VectorStore method)\n(langchain.vectorstores.Weaviate method)\nmax_marginal_relevance_search_by_vector() (langchain.vectorstores.Annoy method)\n(langchain.vectorstores.Chroma method)\n(langchain.vectorstores.DeepLake method)\n(langchain.vectorstores.FAISS method)\n(langchain.vectorstores.Milvus method)\n(langchain.vectorstores.SKLearnVectorStore method)\n(langchain.vectorstores.SupabaseVectorStore method)\n(langchain.vectorstores.VectorStore method)\n(langchain.vectorstores.Weaviate method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-63", "text": "(langchain.vectorstores.VectorStore method)\n(langchain.vectorstores.Weaviate method)\nmax_new_tokens (langchain.llms.Petals attribute)\nmax_output_tokens (langchain.llms.GooglePalm attribute)\n(langchain.llms.VertexAI attribute)\nmax_results (langchain.utilities.DuckDuckGoSearchAPIWrapper attribute)\nmax_retries (langchain.chat_models.ChatOpenAI attribute)\n(langchain.embeddings.OpenAIEmbeddings attribute)\n(langchain.llms.AzureOpenAI attribute)\n(langchain.llms.Cohere attribute)\n(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenAIChat attribute)\n(langchain.llms.OpenLM attribute)\n(langchain.llms.PromptLayerOpenAIChat attribute)\nmax_token_limit (langchain.memory.ConversationSummaryBufferMemory attribute)\n(langchain.memory.ConversationTokenBufferMemory attribute)\nmax_tokens (langchain.chat_models.ChatOpenAI attribute)\n(langchain.llms.AzureOpenAI attribute)\n(langchain.llms.Cohere attribute)\n(langchain.llms.GooseAI attribute)\n(langchain.llms.LlamaCpp attribute)\n(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenLM attribute)\n(langchain.llms.PredictionGuard attribute)\n(langchain.llms.Writer attribute)\nmax_tokens_for_prompt() (langchain.llms.AzureOpenAI method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenLM method)\n(langchain.llms.PromptLayerOpenAI method)\nmax_tokens_limit (langchain.chains.ConversationalRetrievalChain attribute)\n(langchain.chains.RetrievalQAWithSourcesChain attribute)\n(langchain.chains.VectorDBQAWithSourcesChain attribute)\nmax_tokens_per_generation (langchain.llms.RWKV attribute)\nmax_tokens_to_sample (langchain.llms.Anthropic attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-64", "text": "max_tokens_to_sample (langchain.llms.Anthropic attribute)\nMaxComputeLoader (class in langchain.document_loaders)\nmaximum_tokens (langchain.llms.AlephAlpha attribute)\nmaxTokens (langchain.llms.AI21 attribute)\nmemories (langchain.memory.CombinedMemory attribute)\n(langchain.memory.SimpleMemory attribute)\nmemory (langchain.chains.ConversationChain attribute)\n(langchain.experimental.GenerativeAgent attribute)\n(langchain.memory.ReadOnlySharedMemory attribute)\nmemory_key (langchain.memory.ConversationSummaryBufferMemory attribute)\n(langchain.memory.ConversationTokenBufferMemory attribute)\n(langchain.memory.VectorStoreRetrieverMemory attribute)\nmemory_retriever (langchain.experimental.GenerativeAgentMemory attribute)\nmemory_stream (langchain.retrievers.TimeWeightedVectorStoreRetriever attribute)\nmemory_variables (langchain.experimental.GenerativeAgentMemory property)\n(langchain.memory.CombinedMemory property)\n(langchain.memory.ConversationStringBufferMemory property)\n(langchain.memory.ReadOnlySharedMemory property)\n(langchain.memory.SimpleMemory property)\n(langchain.memory.VectorStoreRetrieverMemory property)\nmentioned (langchain.document_loaders.GitHubIssuesLoader attribute)\nmerge_documents() (langchain.retrievers.MergerRetriever method)\nmerge_from() (langchain.vectorstores.FAISS method)\nMergerRetriever (class in langchain.retrievers)\nmessages (langchain.memory.CassandraChatMessageHistory property)\n(langchain.memory.ChatMessageHistory attribute)\n(langchain.memory.DynamoDBChatMessageHistory property)\n(langchain.memory.FileChatMessageHistory property)\n(langchain.memory.MomentoChatMessageHistory property)\n(langchain.memory.MongoDBChatMessageHistory property)\n(langchain.memory.PostgresChatMessageHistory property)\n(langchain.memory.RedisChatMessageHistory property)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-65", "text": "(langchain.memory.RedisChatMessageHistory property)\nmetadata_column (langchain.vectorstores.Clickhouse property)\n(langchain.vectorstores.MyScale property)\nmetadata_fields (langchain.document_loaders.FaunaLoader attribute)\nmetadata_key (langchain.retrievers.RemoteLangChainRetriever attribute)\nMETADATA_KEY (langchain.vectorstores.Qdrant attribute)\nMetalRetriever (class in langchain.retrievers)\nmetaphor_api_key (langchain.utilities.MetaphorSearchAPIWrapper attribute)\nmethod (langchain.tools.APIOperation attribute)\nmetric (langchain.vectorstores.ClickhouseSettings attribute)\n(langchain.vectorstores.MyScaleSettings attribute)\nmilestone (langchain.document_loaders.GitHubIssuesLoader attribute)\nMilvus (class in langchain.vectorstores)\nmin_chunk_size (langchain.document_loaders.DocugamiLoader attribute)\nmin_length (langchain.llms.NLPCloud attribute)\nmin_prob (langchain.chains.FlareChain attribute)\nmin_token_gap (langchain.chains.FlareChain attribute)\nmin_tokens (langchain.llms.GooseAI attribute)\n(langchain.llms.Writer attribute)\nminimax_api_key (langchain.embeddings.MiniMaxEmbeddings attribute)\nminimax_group_id (langchain.embeddings.MiniMaxEmbeddings attribute)\nminimum_tokens (langchain.llms.AlephAlpha attribute)\nminTokens (langchain.llms.AI21 attribute)\nmodel (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute)\n(langchain.embeddings.CohereEmbeddings attribute)\n(langchain.embeddings.MiniMaxEmbeddings attribute)\n(langchain.llms.AI21 attribute)\n(langchain.llms.AlephAlpha attribute)\n(langchain.llms.Anthropic attribute)\n(langchain.llms.Cohere attribute)\n(langchain.llms.CTransformers attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-66", "text": "(langchain.llms.Cohere attribute)\n(langchain.llms.CTransformers attribute)\n(langchain.llms.GPT4All attribute)\n(langchain.llms.PredictionGuard attribute)\n(langchain.llms.RWKV attribute)\n(langchain.retrievers.document_compressors.CohereRerank attribute)\nmodel_file (langchain.llms.CTransformers attribute)\nmodel_id (langchain.embeddings.BedrockEmbeddings attribute)\n(langchain.embeddings.DeepInfraEmbeddings attribute)\n(langchain.embeddings.ModelScopeEmbeddings attribute)\n(langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute)\n(langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings attribute)\n(langchain.llms.Bedrock attribute)\n(langchain.llms.HuggingFacePipeline attribute)\n(langchain.llms.SelfHostedHuggingFaceLLM attribute)\n(langchain.llms.Writer attribute)\nmodel_key (langchain.llms.Banana attribute)\nmodel_kwargs (langchain.chat_models.ChatOpenAI attribute)\n(langchain.embeddings.BedrockEmbeddings attribute)\n(langchain.embeddings.DeepInfraEmbeddings attribute)\n(langchain.embeddings.HuggingFaceEmbeddings attribute)\n(langchain.embeddings.HuggingFaceHubEmbeddings attribute)\n(langchain.embeddings.HuggingFaceInstructEmbeddings attribute)\n(langchain.embeddings.SagemakerEndpointEmbeddings attribute)\n(langchain.llms.Anyscale attribute)\n(langchain.llms.AzureOpenAI attribute)\n(langchain.llms.Banana attribute)\n(langchain.llms.Beam attribute)\n(langchain.llms.Bedrock attribute)\n(langchain.llms.CerebriumAI attribute)\n(langchain.llms.Databricks attribute)\n(langchain.llms.GooseAI attribute)\n(langchain.llms.HuggingFaceEndpoint attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-67", "text": "(langchain.llms.HuggingFaceEndpoint attribute)\n(langchain.llms.HuggingFaceHub attribute)\n(langchain.llms.HuggingFacePipeline attribute)\n(langchain.llms.Modal attribute)\n(langchain.llms.MosaicML attribute)\n(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenAIChat attribute)\n(langchain.llms.OpenLM attribute)\n(langchain.llms.Petals attribute)\n(langchain.llms.PromptLayerOpenAIChat attribute)\n(langchain.llms.SagemakerEndpoint attribute)\n(langchain.llms.SelfHostedHuggingFaceLLM attribute)\n(langchain.llms.StochasticAI attribute)\nmodel_load_fn (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute)\n(langchain.llms.SelfHostedHuggingFaceLLM attribute)\n(langchain.llms.SelfHostedPipeline attribute)\nmodel_name (langchain.chains.OpenAIModerationChain attribute)\n(langchain.chat_models.ChatGooglePalm attribute)\n(langchain.chat_models.ChatOpenAI attribute)\n(langchain.chat_models.ChatVertexAI attribute)\n(langchain.embeddings.HuggingFaceEmbeddings attribute)\n(langchain.embeddings.HuggingFaceInstructEmbeddings attribute)\n(langchain.llms.AzureOpenAI attribute)\n(langchain.llms.GooglePalm attribute)\n(langchain.llms.GooseAI attribute)\n(langchain.llms.NLPCloud attribute)\n(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenAIChat attribute)\n(langchain.llms.OpenLM attribute)\n(langchain.llms.Petals attribute)\n(langchain.llms.PromptLayerOpenAIChat attribute)\n(langchain.tools.SteamshipImageGenerationTool attribute)\nmodel_path (langchain.llms.LlamaCpp attribute)\nmodel_reqs (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-68", "text": "model_reqs (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute)\n(langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings attribute)\n(langchain.llms.SelfHostedHuggingFaceLLM attribute)\n(langchain.llms.SelfHostedPipeline attribute)\nmodel_type (langchain.llms.CTransformers attribute)\nmodel_url (langchain.embeddings.TensorflowHubEmbeddings attribute)\nmodelname_to_contextsize() (langchain.llms.AzureOpenAI method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenLM method)\n(langchain.llms.PromptLayerOpenAI method)\nModernTreasuryLoader (class in langchain.document_loaders)\n module\n \nlangchain.agents\nlangchain.agents.agent_toolkits\nlangchain.chains\nlangchain.chat_models\nlangchain.docstore\nlangchain.document_loaders\nlangchain.document_transformers\nlangchain.embeddings\nlangchain.llms\nlangchain.memory\nlangchain.output_parsers\nlangchain.prompts\nlangchain.prompts.example_selector\nlangchain.python\nlangchain.retrievers\nlangchain.retrievers.document_compressors\nlangchain.serpapi\nlangchain.text_splitter\nlangchain.tools\nlangchain.utilities\nlangchain.utilities.searx_search\nlangchain.vectorstores\nMomentoChatMessageHistory (class in langchain.memory)\nMongoDBAtlasVectorSearch (class in langchain.vectorstores)\nMongoDBChatMessageHistory (class in langchain.memory)\nmoving_summary_buffer (langchain.memory.ConversationSummaryBufferMemory attribute)\nMWDumpLoader (class in langchain.document_loaders)\nMyScale (class in langchain.vectorstores)\nN\nn (langchain.chat_models.ChatGooglePalm attribute)\n(langchain.chat_models.ChatOpenAI attribute)\n(langchain.llms.AlephAlpha attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-69", "text": "(langchain.llms.AlephAlpha attribute)\n(langchain.llms.AzureOpenAI attribute)\n(langchain.llms.GooglePalm attribute)\n(langchain.llms.GooseAI attribute)\n(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenLM attribute)\n(langchain.llms.Writer attribute)\nn_batch (langchain.embeddings.LlamaCppEmbeddings attribute)\n(langchain.llms.GPT4All attribute)\n(langchain.llms.LlamaCpp attribute)\nn_ctx (langchain.embeddings.LlamaCppEmbeddings attribute)\n(langchain.llms.GPT4All attribute)\n(langchain.llms.LlamaCpp attribute)\nn_gpu_layers (langchain.embeddings.LlamaCppEmbeddings attribute)\n(langchain.llms.LlamaCpp attribute)\nn_parts (langchain.embeddings.LlamaCppEmbeddings attribute)\n(langchain.llms.GPT4All attribute)\n(langchain.llms.LlamaCpp attribute)\nn_predict (langchain.llms.GPT4All attribute)\nn_threads (langchain.embeddings.LlamaCppEmbeddings attribute)\n(langchain.llms.GPT4All attribute)\n(langchain.llms.LlamaCpp attribute)\nname (langchain.agents.agent_toolkits.VectorStoreInfo attribute)\n(langchain.experimental.GenerativeAgent attribute)\n(langchain.output_parsers.ResponseSchema attribute)\n(langchain.tools.BaseTool attribute)\n(langchain.tools.ClickTool attribute)\n(langchain.tools.CopyFileTool attribute)\n(langchain.tools.CurrentWebPageTool attribute)\n(langchain.tools.DeleteFileTool attribute)\n(langchain.tools.ExtractHyperlinksTool attribute)\n(langchain.tools.ExtractTextTool attribute)\n(langchain.tools.FileSearchTool attribute)\n(langchain.tools.GetElementsTool attribute)\n(langchain.tools.GmailCreateDraft attribute)\n(langchain.tools.GmailGetMessage attribute)\n(langchain.tools.GmailGetThread attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-70", "text": "(langchain.tools.GmailGetMessage attribute)\n(langchain.tools.GmailGetThread attribute)\n(langchain.tools.GmailSearch attribute)\n(langchain.tools.GmailSendMessage attribute)\n(langchain.tools.ListDirectoryTool attribute)\n(langchain.tools.MoveFileTool attribute)\n(langchain.tools.NavigateBackTool attribute)\n(langchain.tools.NavigateTool attribute)\n(langchain.tools.ReadFileTool attribute)\n(langchain.tools.ShellTool attribute)\n(langchain.tools.Tool attribute)\n(langchain.tools.WriteFileTool attribute)\nngql_generation_chain (langchain.chains.NebulaGraphQAChain attribute)\nnla_tools (langchain.agents.agent_toolkits.NLAToolkit attribute)\nNLTKTextSplitter (class in langchain.text_splitter)\nno_update_value (langchain.output_parsers.RegexDictParser attribute)\nnormalize (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute)\n(langchain.embeddings.DeepInfraEmbeddings attribute)\nNotebookLoader (class in langchain.document_loaders)\nNotionDBLoader (class in langchain.document_loaders)\nNotionDirectoryLoader (class in langchain.document_loaders)\nnum_beams (langchain.llms.NLPCloud attribute)\nnum_pad_tokens (langchain.chains.FlareChain attribute)\nnum_results (langchain.tools.BingSearchResults attribute)\n(langchain.tools.DuckDuckGoSearchResults attribute)\n(langchain.tools.GoogleSearchResults attribute)\nnum_return_sequences (langchain.llms.NLPCloud attribute)\nnumResults (langchain.llms.AI21 attribute)\nO\nobject_ids (langchain.document_loaders.OneDriveLoader attribute)\nobservation_prefix (langchain.agents.Agent property)\n(langchain.agents.ConversationalAgent property)\n(langchain.agents.ConversationalChatAgent property)\n(langchain.agents.StructuredChatAgent property)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-71", "text": "(langchain.agents.StructuredChatAgent property)\n(langchain.agents.ZeroShotAgent property)\nObsidianLoader (class in langchain.document_loaders)\nOnlinePDFLoader (class in langchain.document_loaders)\nopenai_api_base (langchain.chat_models.AzureChatOpenAI attribute)\n(langchain.chat_models.ChatOpenAI attribute)\nopenai_api_key (langchain.chains.OpenAIModerationChain attribute)\n(langchain.chat_models.AzureChatOpenAI attribute)\n(langchain.chat_models.ChatOpenAI attribute)\nopenai_api_type (langchain.chat_models.AzureChatOpenAI attribute)\nopenai_api_version (langchain.chat_models.AzureChatOpenAI attribute)\nopenai_organization (langchain.chains.OpenAIModerationChain attribute)\n(langchain.chat_models.AzureChatOpenAI attribute)\n(langchain.chat_models.ChatOpenAI attribute)\nopenai_proxy (langchain.chat_models.AzureChatOpenAI attribute)\n(langchain.chat_models.ChatOpenAI attribute)\nOpenSearchVectorSearch (class in langchain.vectorstores)\nopenweathermap_api_key (langchain.utilities.OpenWeatherMapAPIWrapper attribute)\noperation_id (langchain.tools.APIOperation attribute)\nother_score_keys (langchain.retrievers.TimeWeightedVectorStoreRetriever attribute)\nOutlookMessageLoader (class in langchain.document_loaders)\noutput (langchain.llms.PredictionGuard attribute)\noutput_key (langchain.chains.QAGenerationChain attribute)\n(langchain.memory.ConversationStringBufferMemory attribute)\noutput_key_to_format (langchain.output_parsers.RegexDictParser attribute)\noutput_keys (langchain.chains.ConstitutionalChain property)\n(langchain.chains.FlareChain property)\n(langchain.chains.HypotheticalDocumentEmbedder property)\n(langchain.chains.QAGenerationChain property)\n(langchain.experimental.BabyAGI property)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-72", "text": "(langchain.experimental.BabyAGI property)\n(langchain.output_parsers.RegexParser attribute)\noutput_parser (langchain.agents.Agent attribute)\n(langchain.agents.ConversationalAgent attribute)\n(langchain.agents.ConversationalChatAgent attribute)\n(langchain.agents.LLMSingleActionAgent attribute)\n(langchain.agents.StructuredChatAgent attribute)\n(langchain.agents.ZeroShotAgent attribute)\n(langchain.chains.FlareChain attribute)\n(langchain.prompts.BasePromptTemplate attribute)\noutput_variables (langchain.chains.TransformChain attribute)\nowm (langchain.utilities.OpenWeatherMapAPIWrapper attribute)\nP\np (langchain.llms.Cohere attribute)\npage_content_field (langchain.document_loaders.FaunaLoader attribute)\npage_content_key (langchain.retrievers.RemoteLangChainRetriever attribute)\nPagedPDFSplitter (in module langchain.document_loaders)\npaginate_request() (langchain.document_loaders.ConfluenceLoader method)\nparam_mapping (langchain.chains.OpenAPIEndpointChain attribute)\nparams (langchain.serpapi.SerpAPIWrapper attribute)\n(langchain.tools.ZapierNLARunAction attribute)\n(langchain.utilities.searx_search.SearxSearchWrapper attribute)\n(langchain.utilities.SearxSearchWrapper attribute)\n(langchain.utilities.SerpAPIWrapper attribute)\nparams_schema (langchain.tools.ZapierNLARunAction attribute)\nparse() (langchain.agents.AgentOutputParser method)\n(langchain.output_parsers.CommaSeparatedListOutputParser method)\n(langchain.output_parsers.DatetimeOutputParser method)\n(langchain.output_parsers.GuardrailsOutputParser method)\n(langchain.output_parsers.ListOutputParser method)\n(langchain.output_parsers.OutputFixingParser method)\n(langchain.output_parsers.PydanticOutputParser method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-73", "text": "(langchain.output_parsers.PydanticOutputParser method)\n(langchain.output_parsers.RegexDictParser method)\n(langchain.output_parsers.RegexParser method)\n(langchain.output_parsers.RetryOutputParser method)\n(langchain.output_parsers.RetryWithErrorOutputParser method)\n(langchain.output_parsers.StructuredOutputParser method)\nparse_filename() (langchain.document_loaders.BlackboardLoader method)\nparse_issue() (langchain.document_loaders.GitHubIssuesLoader method)\nparse_obj() (langchain.tools.OpenAPISpec class method)\nparse_sitemap() (langchain.document_loaders.SitemapLoader method)\nparse_with_prompt() (langchain.output_parsers.RetryOutputParser method)\n(langchain.output_parsers.RetryWithErrorOutputParser method)\nparser (langchain.output_parsers.OutputFixingParser attribute)\n(langchain.output_parsers.RetryOutputParser attribute)\n(langchain.output_parsers.RetryWithErrorOutputParser attribute)\npartial() (langchain.prompts.BasePromptTemplate method)\n(langchain.prompts.ChatPromptTemplate method)\npassword (langchain.vectorstores.ClickhouseSettings attribute)\n(langchain.vectorstores.MyScaleSettings attribute)\npatch() (langchain.utilities.TextRequestsWrapper method)\npath (langchain.tools.APIOperation attribute)\npath_params (langchain.tools.APIOperation property)\npause_to_reflect() (langchain.experimental.GenerativeAgentMemory method)\nPDFMinerLoader (class in langchain.document_loaders)\nPDFMinerPDFasHTMLLoader (class in langchain.document_loaders)\nPDFPlumberLoader (class in langchain.document_loaders)\npenalty_alpha_frequency (langchain.llms.RWKV attribute)\npenalty_alpha_presence (langchain.llms.RWKV attribute)\npenalty_bias (langchain.llms.AlephAlpha attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-74", "text": "penalty_bias (langchain.llms.AlephAlpha attribute)\npenalty_exceptions (langchain.llms.AlephAlpha attribute)\npenalty_exceptions_include_stop_sequences (langchain.llms.AlephAlpha attribute)\npersist() (langchain.vectorstores.Chroma method)\n(langchain.vectorstores.DeepLake method)\n(langchain.vectorstores.SKLearnVectorStore method)\nPHP (langchain.text_splitter.Language attribute)\nPinecone (class in langchain.vectorstores)\npipeline_key (langchain.llms.PipelineAI attribute)\npipeline_kwargs (langchain.llms.HuggingFacePipeline attribute)\n(langchain.llms.PipelineAI attribute)\npl_tags (langchain.chat_models.PromptLayerChatOpenAI attribute)\nplan() (langchain.agents.Agent method)\n(langchain.agents.BaseMultiActionAgent method)\n(langchain.agents.BaseSingleActionAgent method)\n(langchain.agents.LLMSingleActionAgent method)\nplaywright_strict (langchain.tools.ClickTool attribute)\nplaywright_timeout (langchain.tools.ClickTool attribute)\nPlaywrightURLLoader (class in langchain.document_loaders)\nplugin (langchain.tools.AIPluginTool attribute)\nport (langchain.vectorstores.ClickhouseSettings attribute)\n(langchain.vectorstores.MyScaleSettings attribute)\npost() (langchain.utilities.TextRequestsWrapper method)\nPostgresChatMessageHistory (class in langchain.memory)\npowerbi (langchain.agents.agent_toolkits.PowerBIToolkit attribute)\n(langchain.tools.InfoPowerBITool attribute)\n(langchain.tools.ListPowerBITool attribute)\n(langchain.tools.QueryPowerBITool attribute)\npredict() (langchain.chains.LLMChain method)\n(langchain.llms.AI21 method)\n(langchain.llms.AlephAlpha method)\n(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-75", "text": "(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)\n(langchain.llms.Aviary method)\n(langchain.llms.AzureOpenAI method)\n(langchain.llms.Banana method)\n(langchain.llms.Baseten method)\n(langchain.llms.Beam method)\n(langchain.llms.Bedrock method)\n(langchain.llms.CerebriumAI method)\n(langchain.llms.Cohere method)\n(langchain.llms.CTransformers method)\n(langchain.llms.Databricks method)\n(langchain.llms.DeepInfra method)\n(langchain.llms.FakeListLLM method)\n(langchain.llms.ForefrontAI method)\n(langchain.llms.GooglePalm method)\n(langchain.llms.GooseAI method)\n(langchain.llms.GPT4All method)\n(langchain.llms.HuggingFaceEndpoint method)\n(langchain.llms.HuggingFaceHub method)\n(langchain.llms.HuggingFacePipeline method)\n(langchain.llms.HuggingFaceTextGenInference method)\n(langchain.llms.HumanInputLLM method)\n(langchain.llms.LlamaCpp method)\n(langchain.llms.Modal method)\n(langchain.llms.MosaicML method)\n(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenAIChat method)\n(langchain.llms.OpenLM method)\n(langchain.llms.Petals method)\n(langchain.llms.PipelineAI method)\n(langchain.llms.PredictionGuard method)\n(langchain.llms.PromptLayerOpenAI method)\n(langchain.llms.PromptLayerOpenAIChat method)\n(langchain.llms.Replicate method)\n(langchain.llms.RWKV method)\n(langchain.llms.SagemakerEndpoint method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-76", "text": "(langchain.llms.SagemakerEndpoint method)\n(langchain.llms.SelfHostedHuggingFaceLLM method)\n(langchain.llms.SelfHostedPipeline method)\n(langchain.llms.StochasticAI method)\n(langchain.llms.VertexAI method)\n(langchain.llms.Writer method)\npredict_and_parse() (langchain.chains.LLMChain method)\npredict_messages() (langchain.llms.AI21 method)\n(langchain.llms.AlephAlpha method)\n(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)\n(langchain.llms.Aviary method)\n(langchain.llms.AzureOpenAI method)\n(langchain.llms.Banana method)\n(langchain.llms.Baseten method)\n(langchain.llms.Beam method)\n(langchain.llms.Bedrock method)\n(langchain.llms.CerebriumAI method)\n(langchain.llms.Cohere method)\n(langchain.llms.CTransformers method)\n(langchain.llms.Databricks method)\n(langchain.llms.DeepInfra method)\n(langchain.llms.FakeListLLM method)\n(langchain.llms.ForefrontAI method)\n(langchain.llms.GooglePalm method)\n(langchain.llms.GooseAI method)\n(langchain.llms.GPT4All method)\n(langchain.llms.HuggingFaceEndpoint method)\n(langchain.llms.HuggingFaceHub method)\n(langchain.llms.HuggingFacePipeline method)\n(langchain.llms.HuggingFaceTextGenInference method)\n(langchain.llms.HumanInputLLM method)\n(langchain.llms.LlamaCpp method)\n(langchain.llms.Modal method)\n(langchain.llms.MosaicML method)\n(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-77", "text": "(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenAIChat method)\n(langchain.llms.OpenLM method)\n(langchain.llms.Petals method)\n(langchain.llms.PipelineAI method)\n(langchain.llms.PredictionGuard method)\n(langchain.llms.PromptLayerOpenAI method)\n(langchain.llms.PromptLayerOpenAIChat method)\n(langchain.llms.Replicate method)\n(langchain.llms.RWKV method)\n(langchain.llms.SagemakerEndpoint method)\n(langchain.llms.SelfHostedHuggingFaceLLM method)\n(langchain.llms.SelfHostedPipeline method)\n(langchain.llms.StochasticAI method)\n(langchain.llms.VertexAI method)\n(langchain.llms.Writer method)\nprefix (langchain.prompts.FewShotPromptTemplate attribute)\n(langchain.prompts.FewShotPromptWithTemplates attribute)\nprefix_messages (langchain.llms.OpenAIChat attribute)\n(langchain.llms.PromptLayerOpenAIChat attribute)\nprep_prompts() (langchain.chains.LLMChain method)\nprep_streaming_params() (langchain.llms.AzureOpenAI method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenLM method)\n(langchain.llms.PromptLayerOpenAI method)\nprepare_cosmos() (langchain.memory.CosmosDBChatMessageHistory method)\npresence_penalty (langchain.llms.AlephAlpha attribute)\n(langchain.llms.AzureOpenAI attribute)\n(langchain.llms.Cohere attribute)\n(langchain.llms.GooseAI attribute)\n(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenLM attribute)\n(langchain.llms.Writer attribute)\npresencePenalty (langchain.llms.AI21 attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-78", "text": "presencePenalty (langchain.llms.AI21 attribute)\nprioritize_tasks() (langchain.experimental.BabyAGI method)\nprocess (langchain.tools.ShellTool attribute)\nprocess_attachment() (langchain.document_loaders.ConfluenceLoader method)\nprocess_doc() (langchain.document_loaders.ConfluenceLoader method)\nprocess_image() (langchain.document_loaders.ConfluenceLoader method)\nprocess_index_results() (langchain.vectorstores.Annoy method)\nprocess_output() (langchain.utilities.BashProcess method)\nprocess_page() (langchain.document_loaders.ConfluenceLoader method)\nprocess_pages() (langchain.document_loaders.ConfluenceLoader method)\nprocess_pdf() (langchain.document_loaders.ConfluenceLoader method)\nprocess_svg() (langchain.document_loaders.ConfluenceLoader method)\nprocess_xls() (langchain.document_loaders.ConfluenceLoader method)\nproject (langchain.llms.VertexAI attribute)\nPrompt (in module langchain.prompts)\nprompt (langchain.chains.ConversationChain attribute)\n(langchain.chains.LLMBashChain attribute)\n(langchain.chains.LLMChain attribute)\n(langchain.chains.LLMMathChain attribute)\n(langchain.chains.PALChain attribute)\n(langchain.chains.SQLDatabaseChain attribute)\nprompt_func (langchain.tools.HumanInputRun attribute)\nproperties (langchain.tools.APIOperation attribute)\nPROTO (langchain.text_splitter.Language attribute)\nprune() (langchain.memory.ConversationSummaryBufferMemory method)\nPsychicLoader (class in langchain.document_loaders)\nput() (langchain.utilities.TextRequestsWrapper method)\npydantic_object (langchain.output_parsers.PydanticOutputParser attribute)\nPyMuPDFLoader (class in langchain.document_loaders)\nPyPDFDirectoryLoader (class in langchain.document_loaders)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-79", "text": "PyPDFDirectoryLoader (class in langchain.document_loaders)\nPyPDFium2Loader (class in langchain.document_loaders)\nPyPDFLoader (class in langchain.document_loaders)\nPySparkDataFrameLoader (class in langchain.document_loaders)\nPYTHON (langchain.text_splitter.Language attribute)\npython_globals (langchain.chains.PALChain attribute)\npython_locals (langchain.chains.PALChain attribute)\nPythonCodeTextSplitter (class in langchain.text_splitter)\nPythonLoader (class in langchain.document_loaders)\nQ\nqa_chain (langchain.chains.GraphCypherQAChain attribute)\n(langchain.chains.GraphQAChain attribute)\n(langchain.chains.NebulaGraphQAChain attribute)\nQdrant (class in langchain.vectorstores)\nquery (langchain.document_loaders.FaunaLoader attribute)\nquery_checker_prompt (langchain.chains.SQLDatabaseChain attribute)\nquery_instruction (langchain.embeddings.DeepInfraEmbeddings attribute)\n(langchain.embeddings.HuggingFaceInstructEmbeddings attribute)\n(langchain.embeddings.MosaicMLInstructorEmbeddings attribute)\n(langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings attribute)\nquery_name (langchain.vectorstores.SupabaseVectorStore attribute)\nquery_params (langchain.document_loaders.GitHubIssuesLoader property)\n(langchain.tools.APIOperation property)\nquery_suffix (langchain.utilities.searx_search.SearxSearchWrapper attribute)\n(langchain.utilities.SearxSearchWrapper attribute)\nquestion_generator_chain (langchain.chains.FlareChain attribute)\nquestion_to_checked_assertions_chain (langchain.chains.LLMCheckerChain attribute)\nR\nraw_completion (langchain.llms.AlephAlpha attribute)\nREACT_DOCSTORE (langchain.agents.AgentType attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-80", "text": "REACT_DOCSTORE (langchain.agents.AgentType attribute)\nReadTheDocsLoader (class in langchain.document_loaders)\nrecall_ttl (langchain.memory.RedisEntityStore attribute)\nrecursive (langchain.document_loaders.GoogleDriveLoader attribute)\nRecursiveCharacterTextSplitter (class in langchain.text_splitter)\nRedditPostsLoader (class in langchain.document_loaders)\nRedis (class in langchain.vectorstores)\nredis_client (langchain.memory.RedisEntityStore attribute)\nRedisChatMessageHistory (class in langchain.memory)\nreduce_k_below_max_tokens (langchain.chains.RetrievalQAWithSourcesChain attribute)\n(langchain.chains.VectorDBQAWithSourcesChain attribute)\nreflection_threshold (langchain.experimental.GenerativeAgentMemory attribute)\nregex (langchain.output_parsers.RegexParser attribute)\nregex_pattern (langchain.output_parsers.RegexDictParser attribute)\nregion (langchain.utilities.DuckDuckGoSearchAPIWrapper attribute)\nregion_name (langchain.embeddings.BedrockEmbeddings attribute)\n(langchain.embeddings.SagemakerEndpointEmbeddings attribute)\n(langchain.llms.Bedrock attribute)\n(langchain.llms.SagemakerEndpoint attribute)\nrelevancy_threshold (langchain.retrievers.KNNRetriever attribute)\n(langchain.retrievers.SVMRetriever attribute)\nremove_end_sequence (langchain.llms.NLPCloud attribute)\nremove_input (langchain.llms.NLPCloud attribute)\nrepeat_last_n (langchain.llms.GPT4All attribute)\nrepeat_penalty (langchain.llms.GPT4All attribute)\n(langchain.llms.LlamaCpp attribute)\nrepetition_penalties_include_completion (langchain.llms.AlephAlpha attribute)\nrepetition_penalties_include_prompt (langchain.llms.AlephAlpha attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-81", "text": "repetition_penalties_include_prompt (langchain.llms.AlephAlpha attribute)\nrepetition_penalty (langchain.llms.ForefrontAI attribute)\n(langchain.llms.NLPCloud attribute)\n(langchain.llms.Writer attribute)\nrepo_id (langchain.embeddings.HuggingFaceHubEmbeddings attribute)\n(langchain.llms.HuggingFaceHub attribute)\nrequest_body (langchain.tools.APIOperation attribute)\nrequest_timeout (langchain.chat_models.ChatOpenAI attribute)\n(langchain.embeddings.OpenAIEmbeddings attribute)\n(langchain.llms.AzureOpenAI attribute)\n(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenLM attribute)\nrequest_url (langchain.utilities.PowerBIDataset property)\nrequests (langchain.chains.OpenAPIEndpointChain attribute)\n(langchain.utilities.TextRequestsWrapper property)\nrequests_kwargs (langchain.document_loaders.WebBaseLoader attribute)\nrequests_per_second (langchain.document_loaders.WebBaseLoader attribute)\nrequests_wrapper (langchain.agents.agent_toolkits.OpenAPIToolkit attribute)\n(langchain.chains.APIChain attribute)\n(langchain.chains.LLMRequestsChain attribute)\nresponse_chain (langchain.chains.FlareChain attribute)\nresponse_key (langchain.retrievers.RemoteLangChainRetriever attribute)\nresponse_schemas (langchain.output_parsers.StructuredOutputParser attribute)\nresults() (langchain.serpapi.SerpAPIWrapper method)\n(langchain.utilities.BingSearchAPIWrapper method)\n(langchain.utilities.DuckDuckGoSearchAPIWrapper method)\n(langchain.utilities.GoogleSearchAPIWrapper method)\n(langchain.utilities.GoogleSerperAPIWrapper method)\n(langchain.utilities.MetaphorSearchAPIWrapper method)\n(langchain.utilities.searx_search.SearxSearchWrapper method)\n(langchain.utilities.SearxSearchWrapper method)\n(langchain.utilities.SerpAPIWrapper method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-82", "text": "(langchain.utilities.SerpAPIWrapper method)\nresults_async() (langchain.utilities.MetaphorSearchAPIWrapper method)\nretrieve_article() (langchain.utilities.PubMedAPIWrapper method)\nretriever (langchain.chains.ConversationalRetrievalChain attribute)\n(langchain.chains.FlareChain attribute)\n(langchain.chains.RetrievalQA attribute)\n(langchain.chains.RetrievalQAWithSourcesChain attribute)\n(langchain.memory.VectorStoreRetrieverMemory attribute)\nretry_chain (langchain.output_parsers.OutputFixingParser attribute)\n(langchain.output_parsers.RetryOutputParser attribute)\n(langchain.output_parsers.RetryWithErrorOutputParser attribute)\nretry_sleep (langchain.embeddings.MosaicMLInstructorEmbeddings attribute)\n(langchain.llms.MosaicML attribute)\nreturn_all (langchain.chains.SequentialChain attribute)\nreturn_direct (langchain.chains.GraphCypherQAChain attribute)\n(langchain.chains.SQLDatabaseChain attribute)\n(langchain.tools.BaseTool attribute)\n(langchain.tools.Tool attribute)\nreturn_docs (langchain.memory.VectorStoreRetrieverMemory attribute)\nreturn_intermediate_steps (langchain.agents.AgentExecutor attribute)\n(langchain.chains.ConstitutionalChain attribute)\n(langchain.chains.GraphCypherQAChain attribute)\n(langchain.chains.OpenAPIEndpointChain attribute)\n(langchain.chains.PALChain attribute)\n(langchain.chains.SQLDatabaseChain attribute)\n(langchain.chains.SQLDatabaseSequentialChain attribute)\nreturn_pl_id (langchain.chat_models.PromptLayerChatOpenAI attribute)\nreturn_stopped_response() (langchain.agents.Agent method)\n(langchain.agents.BaseMultiActionAgent method)\n(langchain.agents.BaseSingleActionAgent method)\nreturn_urls (langchain.tools.SteamshipImageGenerationTool attribute)\nreturn_values (langchain.agents.Agent property)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-83", "text": "return_values (langchain.agents.Agent property)\n(langchain.agents.BaseMultiActionAgent property)\n(langchain.agents.BaseSingleActionAgent property)\nrevised_answer_prompt (langchain.chains.LLMCheckerChain attribute)\nrevised_summary_prompt (langchain.chains.LLMSummarizationCheckerChain attribute)\nrevision_chain (langchain.chains.ConstitutionalChain attribute)\nRoamLoader (class in langchain.document_loaders)\nroot_dir (langchain.agents.agent_toolkits.FileManagementToolkit attribute)\nRST (langchain.text_splitter.Language attribute)\nRUBY (langchain.text_splitter.Language attribute)\nrun() (langchain.python.PythonREPL method)\n(langchain.serpapi.SerpAPIWrapper method)\n(langchain.tools.BaseTool method)\n(langchain.utilities.ArxivAPIWrapper method)\n(langchain.utilities.BashProcess method)\n(langchain.utilities.BingSearchAPIWrapper method)\n(langchain.utilities.DuckDuckGoSearchAPIWrapper method)\n(langchain.utilities.GooglePlacesAPIWrapper method)\n(langchain.utilities.GoogleSearchAPIWrapper method)\n(langchain.utilities.GoogleSerperAPIWrapper method)\n(langchain.utilities.GraphQLAPIWrapper method)\n(langchain.utilities.LambdaWrapper method)\n(langchain.utilities.OpenWeatherMapAPIWrapper method)\n(langchain.utilities.PowerBIDataset method)\n(langchain.utilities.PubMedAPIWrapper method)\n(langchain.utilities.PythonREPL method)\n(langchain.utilities.searx_search.SearxSearchWrapper method)\n(langchain.utilities.SearxSearchWrapper method)\n(langchain.utilities.SerpAPIWrapper method)\n(langchain.utilities.SparkSQL method)\n(langchain.utilities.TwilioAPIWrapper method)\n(langchain.utilities.WikipediaAPIWrapper method)\n(langchain.utilities.WolframAlphaAPIWrapper method)\nrun_creation() (langchain.llms.Beam method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-84", "text": "run_creation() (langchain.llms.Beam method)\nrun_no_throw() (langchain.utilities.SparkSQL method)\nRUST (langchain.text_splitter.Language attribute)\nrwkv_verbose (langchain.llms.RWKV attribute)\nS\nS3DirectoryLoader (class in langchain.document_loaders)\nS3FileLoader (class in langchain.document_loaders)\nsafesearch (langchain.utilities.DuckDuckGoSearchAPIWrapper attribute)\nsample_rows_in_table_info (langchain.utilities.PowerBIDataset attribute)\nsave() (langchain.agents.AgentExecutor method)\n(langchain.agents.BaseMultiActionAgent method)\n(langchain.agents.BaseSingleActionAgent method)\n(langchain.llms.AI21 method)\n(langchain.llms.AlephAlpha method)\n(langchain.llms.Anthropic method)\n(langchain.llms.Anyscale method)\n(langchain.llms.Aviary method)\n(langchain.llms.AzureOpenAI method)\n(langchain.llms.Banana method)\n(langchain.llms.Baseten method)\n(langchain.llms.Beam method)\n(langchain.llms.Bedrock method)\n(langchain.llms.CerebriumAI method)\n(langchain.llms.Cohere method)\n(langchain.llms.CTransformers method)\n(langchain.llms.Databricks method)\n(langchain.llms.DeepInfra method)\n(langchain.llms.FakeListLLM method)\n(langchain.llms.ForefrontAI method)\n(langchain.llms.GooglePalm method)\n(langchain.llms.GooseAI method)\n(langchain.llms.GPT4All method)\n(langchain.llms.HuggingFaceEndpoint method)\n(langchain.llms.HuggingFaceHub method)\n(langchain.llms.HuggingFacePipeline method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-85", "text": "(langchain.llms.HuggingFacePipeline method)\n(langchain.llms.HuggingFaceTextGenInference method)\n(langchain.llms.HumanInputLLM method)\n(langchain.llms.LlamaCpp method)\n(langchain.llms.Modal method)\n(langchain.llms.MosaicML method)\n(langchain.llms.NLPCloud method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenAIChat method)\n(langchain.llms.OpenLM method)\n(langchain.llms.Petals method)\n(langchain.llms.PipelineAI method)\n(langchain.llms.PredictionGuard method)\n(langchain.llms.PromptLayerOpenAI method)\n(langchain.llms.PromptLayerOpenAIChat method)\n(langchain.llms.Replicate method)\n(langchain.llms.RWKV method)\n(langchain.llms.SagemakerEndpoint method)\n(langchain.llms.SelfHostedHuggingFaceLLM method)\n(langchain.llms.SelfHostedPipeline method)\n(langchain.llms.StochasticAI method)\n(langchain.llms.VertexAI method)\n(langchain.llms.Writer method)\n(langchain.prompts.BasePromptTemplate method)\n(langchain.prompts.ChatPromptTemplate method)\nsave_agent() (langchain.agents.AgentExecutor method)\nsave_context() (langchain.experimental.GenerativeAgentMemory method)\n(langchain.memory.CombinedMemory method)\n(langchain.memory.ConversationEntityMemory method)\n(langchain.memory.ConversationKGMemory method)\n(langchain.memory.ConversationStringBufferMemory method)\n(langchain.memory.ConversationSummaryBufferMemory method)\n(langchain.memory.ConversationSummaryMemory method)\n(langchain.memory.ConversationTokenBufferMemory method)\n(langchain.memory.ReadOnlySharedMemory method)\n(langchain.memory.SimpleMemory method)\n(langchain.memory.VectorStoreRetrieverMemory method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-86", "text": "(langchain.memory.SimpleMemory method)\n(langchain.memory.VectorStoreRetrieverMemory method)\nsave_local() (langchain.vectorstores.Annoy method)\n(langchain.vectorstores.FAISS method)\nSCALA (langchain.text_splitter.Language attribute)\nschemas (langchain.utilities.PowerBIDataset attribute)\nscrape() (langchain.document_loaders.WebBaseLoader method)\nscrape_all() (langchain.document_loaders.WebBaseLoader method)\nscrape_page() (langchain.tools.ExtractHyperlinksTool static method)\nsearch() (langchain.docstore.InMemoryDocstore method)\n(langchain.docstore.Wikipedia method)\n(langchain.vectorstores.VectorStore method)\nsearch_index (langchain.vectorstores.Tigris property)\nsearch_kwargs (langchain.chains.ChatVectorDBChain attribute)\n(langchain.chains.VectorDBQA attribute)\n(langchain.chains.VectorDBQAWithSourcesChain attribute)\n(langchain.retrievers.SelfQueryRetriever attribute)\n(langchain.retrievers.TimeWeightedVectorStoreRetriever attribute)\nsearch_type (langchain.chains.VectorDBQA attribute)\n(langchain.retrievers.SelfQueryRetriever attribute)\nsearch_wrapper (langchain.tools.BraveSearch attribute)\nsearx_host (langchain.utilities.searx_search.SearxSearchWrapper attribute)\n(langchain.utilities.SearxSearchWrapper attribute)\nSearxResults (class in langchain.utilities.searx_search)\nsecret (langchain.document_loaders.FaunaLoader attribute)\nseed (langchain.embeddings.LlamaCppEmbeddings attribute)\n(langchain.llms.GPT4All attribute)\n(langchain.llms.LlamaCpp attribute)\nselect_examples() (langchain.prompts.example_selector.LengthBasedExampleSelector method)\n(langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-87", "text": "(langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector method)\n(langchain.prompts.example_selector.SemanticSimilarityExampleSelector method)\nselected_tools (langchain.agents.agent_toolkits.FileManagementToolkit attribute)\nSeleniumURLLoader (class in langchain.document_loaders)\nSELF_ASK_WITH_SEARCH (langchain.agents.AgentType attribute)\nsend_pdf() (langchain.document_loaders.MathpixPDFLoader method)\nSentenceTransformerEmbeddings (in module langchain.embeddings)\nSentenceTransformersTokenTextSplitter (class in langchain.text_splitter)\nsequential_chain (langchain.chains.LLMSummarizationCheckerChain attribute)\nserpapi_api_key (langchain.serpapi.SerpAPIWrapper attribute)\n(langchain.utilities.SerpAPIWrapper attribute)\nserper_api_key (langchain.utilities.GoogleSerperAPIWrapper attribute)\nservice_account_key (langchain.document_loaders.GoogleDriveLoader attribute)\nservice_account_path (langchain.document_loaders.GoogleApiClient attribute)\nservice_name (langchain.retrievers.AzureCognitiveSearchRetriever attribute)\nsession_cache (langchain.tools.QueryPowerBITool attribute)\nsession_id (langchain.memory.RedisEntityStore attribute)\n(langchain.memory.SQLiteEntityStore attribute)\nset() (langchain.memory.InMemoryEntityStore method)\n(langchain.memory.RedisEntityStore method)\n(langchain.memory.SQLiteEntityStore method)\nsettings (langchain.document_loaders.OneDriveLoader attribute)\nsimilarity_fn (langchain.document_transformers.EmbeddingsRedundantFilter attribute)\n(langchain.retrievers.document_compressors.EmbeddingsFilter attribute)\nsimilarity_search() (langchain.vectorstores.AnalyticDB method)\n(langchain.vectorstores.Annoy method)\n(langchain.vectorstores.AtlasDB method)\n(langchain.vectorstores.AwaDB method)\n(langchain.vectorstores.Chroma method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-88", "text": "(langchain.vectorstores.AwaDB method)\n(langchain.vectorstores.Chroma method)\n(langchain.vectorstores.Clickhouse method)\n(langchain.vectorstores.DeepLake method)\n(langchain.vectorstores.ElasticVectorSearch method)\n(langchain.vectorstores.FAISS method)\n(langchain.vectorstores.LanceDB method)\n(langchain.vectorstores.MatchingEngine method)\n(langchain.vectorstores.Milvus method)\n(langchain.vectorstores.MongoDBAtlasVectorSearch method)\n(langchain.vectorstores.MyScale method)\n(langchain.vectorstores.OpenSearchVectorSearch method)\n(langchain.vectorstores.Pinecone method)\n(langchain.vectorstores.Qdrant method)\n(langchain.vectorstores.Redis method)\n(langchain.vectorstores.SingleStoreDB method)\n(langchain.vectorstores.SKLearnVectorStore method)\n(langchain.vectorstores.SupabaseVectorStore method)\n(langchain.vectorstores.Tair method)\n(langchain.vectorstores.Tigris method)\n(langchain.vectorstores.Typesense method)\n(langchain.vectorstores.Vectara method)\n(langchain.vectorstores.VectorStore method)\n(langchain.vectorstores.Weaviate method)\nsimilarity_search_by_index() (langchain.vectorstores.Annoy method)\nsimilarity_search_by_text() (langchain.vectorstores.Weaviate method)\nsimilarity_search_by_vector() (langchain.vectorstores.AnalyticDB method)\n(langchain.vectorstores.Annoy method)\n(langchain.vectorstores.AwaDB method)\n(langchain.vectorstores.Chroma method)\n(langchain.vectorstores.Clickhouse method)\n(langchain.vectorstores.DeepLake method)\n(langchain.vectorstores.FAISS method)\n(langchain.vectorstores.Milvus method)\n(langchain.vectorstores.MyScale method)\n(langchain.vectorstores.SupabaseVectorStore method)\n(langchain.vectorstores.VectorStore method)\n(langchain.vectorstores.Weaviate method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-89", "text": "(langchain.vectorstores.VectorStore method)\n(langchain.vectorstores.Weaviate method)\nsimilarity_search_by_vector_returning_embeddings() (langchain.vectorstores.SupabaseVectorStore method)\nsimilarity_search_by_vector_with_relevance_scores() (langchain.vectorstores.SupabaseVectorStore method)\nsimilarity_search_limit_score() (langchain.vectorstores.Redis method)\nsimilarity_search_with_relevance_scores() (langchain.vectorstores.AwaDB method)\n(langchain.vectorstores.Clickhouse method)\n(langchain.vectorstores.MyScale method)\n(langchain.vectorstores.SupabaseVectorStore method)\n(langchain.vectorstores.VectorStore method)\nsimilarity_search_with_score() (langchain.vectorstores.AnalyticDB method)\n(langchain.vectorstores.Annoy method)\n(langchain.vectorstores.AwaDB method)\n(langchain.vectorstores.Chroma method)\n(langchain.vectorstores.DeepLake method)\n(langchain.vectorstores.ElasticVectorSearch method)\n(langchain.vectorstores.FAISS method)\n(langchain.vectorstores.Milvus method)\n(langchain.vectorstores.MongoDBAtlasVectorSearch method)\n(langchain.vectorstores.OpenSearchVectorSearch method)\n(langchain.vectorstores.Pinecone method)\n(langchain.vectorstores.Qdrant method)\n(langchain.vectorstores.Redis method)\n(langchain.vectorstores.SingleStoreDB method)\n(langchain.vectorstores.SKLearnVectorStore method)\n(langchain.vectorstores.Tigris method)\n(langchain.vectorstores.Typesense method)\n(langchain.vectorstores.Vectara method)\n(langchain.vectorstores.Weaviate method)\nsimilarity_search_with_score_by_index() (langchain.vectorstores.Annoy method)\nsimilarity_search_with_score_by_vector() (langchain.vectorstores.AnalyticDB method)\n(langchain.vectorstores.Annoy method)\n(langchain.vectorstores.FAISS method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-90", "text": "(langchain.vectorstores.Annoy method)\n(langchain.vectorstores.FAISS method)\n(langchain.vectorstores.Milvus method)\nsimilarity_threshold (langchain.document_transformers.EmbeddingsRedundantFilter attribute)\n(langchain.retrievers.document_compressors.EmbeddingsFilter attribute)\nsince (langchain.document_loaders.GitHubIssuesLoader attribute)\nSingleStoreDB (class in langchain.vectorstores)\nSitemapLoader (class in langchain.document_loaders)\nsiterestrict (langchain.utilities.GoogleSearchAPIWrapper attribute)\nsize (langchain.tools.SteamshipImageGenerationTool attribute)\nSKLearnVectorStore (class in langchain.vectorstores)\nSlackDirectoryLoader (class in langchain.document_loaders)\nSnowflakeLoader (class in langchain.document_loaders)\nsort (langchain.document_loaders.GitHubIssuesLoader attribute)\nSpacyTextSplitter (class in langchain.text_splitter)\nSparkSQL (class in langchain.utilities)\nsparse_encoder (langchain.retrievers.PineconeHybridSearchRetriever attribute)\nspec (langchain.agents.agent_toolkits.JsonToolkit attribute)\nsplit_documents() (langchain.text_splitter.TextSplitter method)\nsplit_text() (langchain.text_splitter.CharacterTextSplitter method)\n(langchain.text_splitter.NLTKTextSplitter method)\n(langchain.text_splitter.RecursiveCharacterTextSplitter method)\n(langchain.text_splitter.SentenceTransformersTokenTextSplitter method)\n(langchain.text_splitter.SpacyTextSplitter method)\n(langchain.text_splitter.TextSplitter method)\n(langchain.text_splitter.TokenTextSplitter method)\nsplit_text_on_tokens() (in module langchain.text_splitter)\nSpreedlyLoader (class in langchain.document_loaders)\nsql_chain (langchain.chains.SQLDatabaseSequentialChain attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-91", "text": "sql_chain (langchain.chains.SQLDatabaseSequentialChain attribute)\nSRTLoader (class in langchain.document_loaders)\nstart_with_retrieval (langchain.chains.FlareChain attribute)\nstate (langchain.document_loaders.GitHubIssuesLoader attribute)\nstatus (langchain.experimental.GenerativeAgent attribute)\nsteamship (langchain.tools.SteamshipImageGenerationTool attribute)\nstop (langchain.agents.LLMSingleActionAgent attribute)\n(langchain.chains.PALChain attribute)\n(langchain.llms.GPT4All attribute)\n(langchain.llms.LlamaCpp attribute)\n(langchain.llms.VertexAI attribute)\n(langchain.llms.Writer attribute)\nstop_sequences (langchain.llms.AlephAlpha attribute)\nstore (langchain.memory.InMemoryEntityStore attribute)\nstrategy (langchain.llms.RWKV attribute)\nstream() (langchain.llms.Anthropic method)\n(langchain.llms.AzureOpenAI method)\n(langchain.llms.LlamaCpp method)\n(langchain.llms.OpenAI method)\n(langchain.llms.OpenLM method)\n(langchain.llms.PromptLayerOpenAI method)\nstreaming (langchain.chat_models.ChatOpenAI attribute)\n(langchain.llms.Anthropic attribute)\n(langchain.llms.AzureOpenAI attribute)\n(langchain.llms.GPT4All attribute)\n(langchain.llms.LlamaCpp attribute)\n(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenAIChat attribute)\n(langchain.llms.OpenLM attribute)\n(langchain.llms.PromptLayerOpenAIChat attribute)\nstrip_outputs (langchain.chains.SimpleSequentialChain attribute)\nStripeLoader (class in langchain.document_loaders)\nSTRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION (langchain.agents.AgentType attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-92", "text": "STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION (langchain.agents.AgentType attribute)\nstructured_query_translator (langchain.retrievers.SelfQueryRetriever attribute)\nsuffix (langchain.llms.LlamaCpp attribute)\n(langchain.prompts.FewShotPromptTemplate attribute)\n(langchain.prompts.FewShotPromptWithTemplates attribute)\nsummarize_related_memories() (langchain.experimental.GenerativeAgent method)\nsummary (langchain.experimental.GenerativeAgent attribute)\nsummary_message_cls (langchain.memory.ConversationKGMemory attribute)\nsummary_refresh_seconds (langchain.experimental.GenerativeAgent attribute)\nSupabaseVectorStore (class in langchain.vectorstores)\nSWIFT (langchain.text_splitter.Language attribute)\nsync_browser (langchain.agents.agent_toolkits.PlayWrightBrowserToolkit attribute)\nT\ntable (langchain.vectorstores.ClickhouseSettings attribute)\n(langchain.vectorstores.MyScaleSettings attribute)\ntable_info (langchain.utilities.PowerBIDataset property)\ntable_name (langchain.memory.SQLiteEntityStore attribute)\n(langchain.vectorstores.SupabaseVectorStore attribute)\ntable_names (langchain.utilities.PowerBIDataset attribute)\nTair (class in langchain.vectorstores)\ntask (langchain.embeddings.HuggingFaceHubEmbeddings attribute)\n(langchain.llms.HuggingFaceEndpoint attribute)\n(langchain.llms.HuggingFaceHub attribute)\n(langchain.llms.SelfHostedHuggingFaceLLM attribute)\ntbs (langchain.utilities.GoogleSerperAPIWrapper attribute)\nTelegramChatApiLoader (class in langchain.document_loaders)\nTelegramChatFileLoader (class in langchain.document_loaders)\nTelegramChatLoader (in module langchain.document_loaders)\ntemp (langchain.llms.GPT4All attribute)\ntemperature (langchain.chat_models.ChatGooglePalm attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-93", "text": "temperature (langchain.chat_models.ChatGooglePalm attribute)\n(langchain.chat_models.ChatOpenAI attribute)\n(langchain.llms.AI21 attribute)\n(langchain.llms.AlephAlpha attribute)\n(langchain.llms.Anthropic attribute)\n(langchain.llms.AzureOpenAI attribute)\n(langchain.llms.Cohere attribute)\n(langchain.llms.ForefrontAI attribute)\n(langchain.llms.GooglePalm attribute)\n(langchain.llms.GooseAI attribute)\n(langchain.llms.LlamaCpp attribute)\n(langchain.llms.NLPCloud attribute)\n(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenLM attribute)\n(langchain.llms.Petals attribute)\n(langchain.llms.PredictionGuard attribute)\n(langchain.llms.RWKV attribute)\n(langchain.llms.VertexAI attribute)\n(langchain.llms.Writer attribute)\ntemplate (langchain.prompts.PromptTemplate attribute)\n(langchain.tools.QueryPowerBITool attribute)\ntemplate_format (langchain.prompts.FewShotPromptTemplate attribute)\n(langchain.prompts.FewShotPromptWithTemplates attribute)\n(langchain.prompts.PromptTemplate attribute)\ntemplate_tool_response (langchain.agents.ConversationalChatAgent attribute)\ntext_length (langchain.chains.LLMRequestsChain attribute)\ntext_splitter (langchain.chains.AnalyzeDocumentChain attribute)\n(langchain.chains.MapReduceChain attribute)\n(langchain.chains.QAGenerationChain attribute)\nTextLoader (class in langchain.document_loaders)\ntexts (langchain.retrievers.KNNRetriever attribute)\n(langchain.retrievers.SVMRetriever attribute)\nTextSplitter (class in langchain.text_splitter)\ntfidf_array (langchain.retrievers.TFIDFRetriever attribute)\nTigris (class in langchain.vectorstores)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-94", "text": "Tigris (class in langchain.vectorstores)\ntime (langchain.utilities.DuckDuckGoSearchAPIWrapper attribute)\nto_typescript() (langchain.tools.APIOperation method)\ntoken (langchain.llms.PredictionGuard attribute)\n(langchain.utilities.PowerBIDataset attribute)\ntoken_path (langchain.document_loaders.GoogleApiClient attribute)\n(langchain.document_loaders.GoogleDriveLoader attribute)\nTokenizer (class in langchain.text_splitter)\ntokenizer (langchain.llms.Petals attribute)\ntokens (langchain.llms.AlephAlpha attribute)\ntokens_path (langchain.llms.RWKV attribute)\ntokens_per_chunk (langchain.text_splitter.Tokenizer attribute)\nTokenTextSplitter (class in langchain.text_splitter)\nToMarkdownLoader (class in langchain.document_loaders)\nTomlLoader (class in langchain.document_loaders)\ntool() (in module langchain.agents)\n(in module langchain.tools)\ntool_run_logging_kwargs() (langchain.agents.Agent method)\n(langchain.agents.BaseMultiActionAgent method)\n(langchain.agents.BaseSingleActionAgent method)\n(langchain.agents.LLMSingleActionAgent method)\ntools (langchain.agents.agent_toolkits.JiraToolkit attribute)\n(langchain.agents.agent_toolkits.ZapierToolkit attribute)\n(langchain.agents.AgentExecutor attribute)\ntop_k (langchain.chains.GraphCypherQAChain attribute)\n(langchain.chains.SQLDatabaseChain attribute)\n(langchain.chat_models.ChatGooglePalm attribute)\n(langchain.llms.AlephAlpha attribute)\n(langchain.llms.Anthropic attribute)\n(langchain.llms.ForefrontAI attribute)\n(langchain.llms.GooglePalm attribute)\n(langchain.llms.GPT4All attribute)\n(langchain.llms.LlamaCpp attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-95", "text": "(langchain.llms.LlamaCpp attribute)\n(langchain.llms.NLPCloud attribute)\n(langchain.llms.Petals attribute)\n(langchain.llms.VertexAI attribute)\n(langchain.retrievers.ChatGPTPluginRetriever attribute)\n(langchain.retrievers.DataberryRetriever attribute)\n(langchain.retrievers.PineconeHybridSearchRetriever attribute)\ntop_k_docs_for_context (langchain.chains.ChatVectorDBChain attribute)\ntop_k_results (langchain.utilities.ArxivAPIWrapper attribute)\n(langchain.utilities.GooglePlacesAPIWrapper attribute)\n(langchain.utilities.PubMedAPIWrapper attribute)\n(langchain.utilities.WikipediaAPIWrapper attribute)\ntop_n (langchain.retrievers.document_compressors.CohereRerank attribute)\ntop_p (langchain.chat_models.ChatGooglePalm attribute)\n(langchain.llms.AlephAlpha attribute)\n(langchain.llms.Anthropic attribute)\n(langchain.llms.AzureOpenAI attribute)\n(langchain.llms.ForefrontAI attribute)\n(langchain.llms.GooglePalm attribute)\n(langchain.llms.GooseAI attribute)\n(langchain.llms.GPT4All attribute)\n(langchain.llms.LlamaCpp attribute)\n(langchain.llms.NLPCloud attribute)\n(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenLM attribute)\n(langchain.llms.Petals attribute)\n(langchain.llms.RWKV attribute)\n(langchain.llms.VertexAI attribute)\n(langchain.llms.Writer attribute)\ntopP (langchain.llms.AI21 attribute)\ntraits (langchain.experimental.GenerativeAgent attribute)\ntransform (langchain.chains.TransformChain attribute)\ntransform_documents() (langchain.document_transformers.EmbeddingsRedundantFilter method)\n(langchain.text_splitter.TextSplitter method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-96", "text": "(langchain.text_splitter.TextSplitter method)\ntransform_input_fn (langchain.llms.Databricks attribute)\ntransform_output_fn (langchain.llms.Databricks attribute)\ntransformers (langchain.retrievers.document_compressors.DocumentCompressorPipeline attribute)\nTrelloLoader (class in langchain.document_loaders)\ntruncate (langchain.embeddings.CohereEmbeddings attribute)\n(langchain.llms.Cohere attribute)\nts_type_from_python() (langchain.tools.APIOperation static method)\nttl (langchain.memory.RedisEntityStore attribute)\ntuned_model_name (langchain.llms.VertexAI attribute)\nTwitterTweetLoader (class in langchain.document_loaders)\ntype (langchain.output_parsers.ResponseSchema attribute)\n(langchain.utilities.GoogleSerperAPIWrapper attribute)\nTypesense (class in langchain.vectorstores)\nU\nunsecure (langchain.utilities.searx_search.SearxSearchWrapper attribute)\n(langchain.utilities.SearxSearchWrapper attribute)\nUnstructuredAPIFileIOLoader (class in langchain.document_loaders)\nUnstructuredAPIFileLoader (class in langchain.document_loaders)\nUnstructuredCSVLoader (class in langchain.document_loaders)\nUnstructuredEmailLoader (class in langchain.document_loaders)\nUnstructuredEPubLoader (class in langchain.document_loaders)\nUnstructuredExcelLoader (class in langchain.document_loaders)\nUnstructuredFileIOLoader (class in langchain.document_loaders)\nUnstructuredFileLoader (class in langchain.document_loaders)\nUnstructuredHTMLLoader (class in langchain.document_loaders)\nUnstructuredImageLoader (class in langchain.document_loaders)\nUnstructuredMarkdownLoader (class in langchain.document_loaders)\nUnstructuredODTLoader (class in langchain.document_loaders)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-97", "text": "UnstructuredODTLoader (class in langchain.document_loaders)\nUnstructuredPDFLoader (class in langchain.document_loaders)\nUnstructuredPowerPointLoader (class in langchain.document_loaders)\nUnstructuredRTFLoader (class in langchain.document_loaders)\nUnstructuredURLLoader (class in langchain.document_loaders)\nUnstructuredWordDocumentLoader (class in langchain.document_loaders)\nUnstructuredXMLLoader (class in langchain.document_loaders)\nupdate_document() (langchain.vectorstores.Chroma method)\nupdate_forward_refs() (langchain.llms.AI21 class method)\n(langchain.llms.AlephAlpha class method)\n(langchain.llms.Anthropic class method)\n(langchain.llms.Anyscale class method)\n(langchain.llms.Aviary class method)\n(langchain.llms.AzureOpenAI class method)\n(langchain.llms.Banana class method)\n(langchain.llms.Baseten class method)\n(langchain.llms.Beam class method)\n(langchain.llms.Bedrock class method)\n(langchain.llms.CerebriumAI class method)\n(langchain.llms.Cohere class method)\n(langchain.llms.CTransformers class method)\n(langchain.llms.Databricks class method)\n(langchain.llms.DeepInfra class method)\n(langchain.llms.FakeListLLM class method)\n(langchain.llms.ForefrontAI class method)\n(langchain.llms.GooglePalm class method)\n(langchain.llms.GooseAI class method)\n(langchain.llms.GPT4All class method)\n(langchain.llms.HuggingFaceEndpoint class method)\n(langchain.llms.HuggingFaceHub class method)\n(langchain.llms.HuggingFacePipeline class method)\n(langchain.llms.HuggingFaceTextGenInference class method)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-98", "text": "(langchain.llms.HuggingFaceTextGenInference class method)\n(langchain.llms.HumanInputLLM class method)\n(langchain.llms.LlamaCpp class method)\n(langchain.llms.Modal class method)\n(langchain.llms.MosaicML class method)\n(langchain.llms.NLPCloud class method)\n(langchain.llms.OpenAI class method)\n(langchain.llms.OpenAIChat class method)\n(langchain.llms.OpenLM class method)\n(langchain.llms.Petals class method)\n(langchain.llms.PipelineAI class method)\n(langchain.llms.PredictionGuard class method)\n(langchain.llms.PromptLayerOpenAI class method)\n(langchain.llms.PromptLayerOpenAIChat class method)\n(langchain.llms.Replicate class method)\n(langchain.llms.RWKV class method)\n(langchain.llms.SagemakerEndpoint class method)\n(langchain.llms.SelfHostedHuggingFaceLLM class method)\n(langchain.llms.SelfHostedPipeline class method)\n(langchain.llms.StochasticAI class method)\n(langchain.llms.VertexAI class method)\n(langchain.llms.Writer class method)\nupsert_messages() (langchain.memory.CosmosDBChatMessageHistory method)\nurl (langchain.document_loaders.GitHubIssuesLoader property)\n(langchain.document_loaders.MathpixPDFLoader property)\n(langchain.llms.Beam attribute)\n(langchain.retrievers.ChatGPTPluginRetriever attribute)\n(langchain.retrievers.RemoteLangChainRetriever attribute)\n(langchain.tools.IFTTTWebhook attribute)\nurls (langchain.document_loaders.PlaywrightURLLoader attribute)\n(langchain.document_loaders.SeleniumURLLoader attribute)\nuse_mlock (langchain.embeddings.LlamaCppEmbeddings attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-99", "text": "use_mlock (langchain.embeddings.LlamaCppEmbeddings attribute)\n(langchain.llms.GPT4All attribute)\n(langchain.llms.LlamaCpp attribute)\nuse_mmap (langchain.llms.LlamaCpp attribute)\nuse_multiplicative_presence_penalty (langchain.llms.AlephAlpha attribute)\nuse_query_checker (langchain.chains.SQLDatabaseChain attribute)\nusername (langchain.vectorstores.ClickhouseSettings attribute)\n(langchain.vectorstores.MyScaleSettings attribute)\nV\nvalidate_channel_or_videoIds_is_set() (langchain.document_loaders.GoogleApiClient class method)\n(langchain.document_loaders.GoogleApiYoutubeLoader class method)\nvalidate_init_args() (langchain.document_loaders.ConfluenceLoader static method)\nvalidate_template (langchain.prompts.FewShotPromptTemplate attribute)\n(langchain.prompts.FewShotPromptWithTemplates attribute)\n(langchain.prompts.PromptTemplate attribute)\nVectara (class in langchain.vectorstores)\nvector_field (langchain.vectorstores.SingleStoreDB attribute)\nvectorizer (langchain.retrievers.TFIDFRetriever attribute)\nVectorStore (class in langchain.vectorstores)\nvectorstore (langchain.agents.agent_toolkits.VectorStoreInfo attribute)\n(langchain.chains.ChatVectorDBChain attribute)\n(langchain.chains.VectorDBQA attribute)\n(langchain.chains.VectorDBQAWithSourcesChain attribute)\n(langchain.prompts.example_selector.SemanticSimilarityExampleSelector attribute)\n(langchain.retrievers.SelfQueryRetriever attribute)\n(langchain.retrievers.TimeWeightedVectorStoreRetriever attribute)\nvectorstore_info (langchain.agents.agent_toolkits.VectorStoreToolkit attribute)\nvectorstores (langchain.agents.agent_toolkits.VectorStoreRouterToolkit attribute)\nverbose (langchain.llms.AI21 attribute)\n(langchain.llms.AlephAlpha attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-100", "text": "(langchain.llms.AlephAlpha attribute)\n(langchain.llms.Anthropic attribute)\n(langchain.llms.Anyscale attribute)\n(langchain.llms.Aviary attribute)\n(langchain.llms.AzureOpenAI attribute)\n(langchain.llms.Banana attribute)\n(langchain.llms.Baseten attribute)\n(langchain.llms.Beam attribute)\n(langchain.llms.Bedrock attribute)\n(langchain.llms.CerebriumAI attribute)\n(langchain.llms.Cohere attribute)\n(langchain.llms.CTransformers attribute)\n(langchain.llms.Databricks attribute)\n(langchain.llms.DeepInfra attribute)\n(langchain.llms.FakeListLLM attribute)\n(langchain.llms.ForefrontAI attribute)\n(langchain.llms.GooglePalm attribute)\n(langchain.llms.GooseAI attribute)\n(langchain.llms.GPT4All attribute)\n(langchain.llms.HuggingFaceEndpoint attribute)\n(langchain.llms.HuggingFaceHub attribute)\n(langchain.llms.HuggingFacePipeline attribute)\n(langchain.llms.HuggingFaceTextGenInference attribute)\n(langchain.llms.HumanInputLLM attribute)\n(langchain.llms.LlamaCpp attribute)\n(langchain.llms.Modal attribute)\n(langchain.llms.MosaicML attribute)\n(langchain.llms.NLPCloud attribute)\n(langchain.llms.OpenAI attribute)\n(langchain.llms.OpenAIChat attribute)\n(langchain.llms.OpenLM attribute)\n(langchain.llms.Petals attribute)\n(langchain.llms.PipelineAI attribute)\n(langchain.llms.PredictionGuard attribute)\n(langchain.llms.Replicate attribute)\n(langchain.llms.RWKV attribute)\n(langchain.llms.SagemakerEndpoint attribute)\n(langchain.llms.SelfHostedHuggingFaceLLM attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-101", "text": "(langchain.llms.SelfHostedHuggingFaceLLM attribute)\n(langchain.llms.SelfHostedPipeline attribute)\n(langchain.llms.StochasticAI attribute)\n(langchain.llms.VertexAI attribute)\n(langchain.llms.Writer attribute)\n(langchain.retrievers.SelfQueryRetriever attribute)\n(langchain.tools.BaseTool attribute)\n(langchain.tools.Tool attribute)\nVespaRetriever (class in langchain.retrievers)\nvideo_ids (langchain.document_loaders.GoogleApiYoutubeLoader attribute)\nvisible_only (langchain.tools.ClickTool attribute)\nvocab_only (langchain.embeddings.LlamaCppEmbeddings attribute)\n(langchain.llms.GPT4All attribute)\n(langchain.llms.LlamaCpp attribute)\nW\nwait_for_processing() (langchain.document_loaders.MathpixPDFLoader method)\nWeatherDataLoader (class in langchain.document_loaders)\nWeaviate (class in langchain.vectorstores)\nWeaviateHybridSearchRetriever (class in langchain.retrievers)\nWeaviateHybridSearchRetriever.Config (class in langchain.retrievers)\nweb_path (langchain.document_loaders.WebBaseLoader property)\nweb_paths (langchain.document_loaders.WebBaseLoader attribute)\nWebBaseLoader (class in langchain.document_loaders)\nWhatsAppChatLoader (class in langchain.document_loaders)\nWikipedia (class in langchain.docstore)\nWikipediaLoader (class in langchain.document_loaders)\nwolfram_alpha_appid (langchain.utilities.WolframAlphaAPIWrapper attribute)\nwriter_api_key (langchain.llms.Writer attribute)\nwriter_org_id (langchain.llms.Writer attribute)\nY\nYoutubeLoader (class in langchain.document_loaders)\nZ\nzapier_description (langchain.tools.ZapierNLARunAction attribute)", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "961c25351698-102", "text": "Z\nzapier_description (langchain.tools.ZapierNLARunAction attribute)\nZepRetriever (class in langchain.retrievers)\nZERO_SHOT_REACT_DESCRIPTION (langchain.agents.AgentType attribute)\nZilliz (class in langchain.vectorstores)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/genindex.html"}
+{"id": "a1bc0fa44abd-2", "text": "Reference Docs#\nFull documentation on all methods, classes, installation methods, and integration setups for LangChain.\nLangChain Installation\nReference Documentation\nEcosystem#\nLangChain integrates with many different LLMs, systems, and products.\nConversely, many systems and products depend on LangChain.\nThis creates a vibrant and thriving ecosystem.\nIntegrations: Guides for how other products can be used with LangChain.\nDependents: List of repositories that use LangChain.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nAdditional Resources#\nAdditional resources we think may be useful as you develop your application!\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGallery: A collection of great projects that use LangChain, compiled by the folks at Kyrolabs. Useful for finding inspiration and example implementations.\nDeploying LLMs in Production: A collection of best practices and tutorials for deploying LLMs in production.\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\nDiscord: Join us on our Discord to discuss all things LangChain!\nYouTube: A collection of the LangChain tutorials and videos.\nProduction Support: As you move your LangChains into production, we\u2019d love to offer more comprehensive support. Please fill out this form and we\u2019ll set up a dedicated support Slack channel.\nnext\nQuickstart Guide\n Contents\n \nGetting Started\nModules\nUse Cases\nReference Docs\nEcosystem\nAdditional Resources\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/index.html"}
+{"id": "bc75ce6b3dde-0", "text": ".rst\n.pdf\nIntegrations\n Contents \nIntegrations by Module\nDependencies\nAll Integrations\nIntegrations#\nLangChain integrates with many LLMs, systems, and products.\nIntegrations by Module#\nIntegrations grouped by the core LangChain module they map to:\nLLM Providers\nChat Model Providers\nText Embedding Model Providers\nDocument Loader Integrations\nText Splitter Integrations\nVectorstore Providers\nRetriever Providers\nTool Providers\nToolkit Integrations\nDependencies#\nLangChain depends on several hundred Python packages.\nAll Integrations#\nA comprehensive list of LLMs, systems, and products integrated with LangChain:\nTracing Walkthrough\nAI21 Labs\nAim\nAirbyte\nAleph Alpha\nAmazon Bedrock\nAnalyticDB\nAnnoy\nAnthropic\nAnyscale\nApify\nArgilla\nArxiv\nAtlasDB\nAwaDB\nAWS S3 Directory\nAZLyrics\nAzure Blob Storage\nAzure Cognitive Search\nAzure OpenAI\nBanana\nBeam\nBiliBili\nBlackboard\nCassandra\nCerebriumAI\nChroma\nClearML\nClickHouse\nCohere\nCollege Confidential\nComet\nConfluence\nC Transformers\nDataberry\nDatabricks\nDeepInfra\nDeep Lake\nDiffbot\nDiscord\nDocugami\nDuckDB\nElasticsearch\nEverNote\nFacebook Chat\nFigma\nForefrontAI\nGit\nGitBook\nGoogle BigQuery\nGoogle Cloud Storage\nGoogle Drive\nGoogle Search\nGoogle Serper\nGoogle Vertex AI\nGooseAI\nGPT4All\nGraphsignal\nGutenberg\nHacker News\nHazy Research\nHelicone\nHugging Face\niFixit\nIMSDb\nJina\nLanceDB\nLlama.cpp\nMediaWikiDump\nMetal\nMicrosoft OneDrive\nMicrosoft PowerPoint", "source": "https://python.langchain.com/en/latest/integrations.html"}
+{"id": "bc75ce6b3dde-1", "text": "Llama.cpp\nMediaWikiDump\nMetal\nMicrosoft OneDrive\nMicrosoft PowerPoint\nMicrosoft Word\nMilvus\nMLflow\nModal\nModern Treasury\nMomento\nMyScale\nNLPCloud\nNotion DB\nObsidian\nOpenAI\nOpenSearch\nOpenWeatherMap\nPetals\nPGVector\nPinecone\nPipelineAI\nPrediction Guard\nPromptLayer\nPsychic\nQdrant\nRay Serve\nRebuff\nReddit\nRedis\nReplicate\nRoam\nRunhouse\nRWKV-4\nSageMaker Endpoint\nSearxNG Search API\nSerpAPI\nShale Protocol\nscikit-learn\nSlack\nspaCy\nSpreedly\nStochasticAI\nStripe\nTair\nTelegram\nTensorflow Hub\n2Markdown\nTrello\nTwitter\nUnstructured\nVectara\nVespa\nWeights & Biases\nWeather\nWeaviate\nWhatsApp\nWhyLabs\nWikipedia\nWolfram Alpha\nWriter\nYeager.ai\nYouTube\nZep\nZilliz\nprevious\nExperimental Modules\nnext\nTracing Walkthrough\n Contents\n \nIntegrations by Module\nDependencies\nAll Integrations\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/integrations.html"}
+{"id": "af52fc0a2ee1-0", "text": ".md\n.pdf\nDependents\nDependents#\nDependents stats for hwchase17/langchain\n[update: 2023-06-05; only dependent repositories with Stars > 100]\nRepository\nStars\nopenai/openai-cookbook\n38024\nLAION-AI/Open-Assistant\n33609\nmicrosoft/TaskMatrix\n33136\nhpcaitech/ColossalAI\n30032\nimartinez/privateGPT\n28094\nreworkd/AgentGPT\n23430\nopenai/chatgpt-retrieval-plugin\n17942\njerryjliu/llama_index\n16697\nmindsdb/mindsdb\n16410\nmlflow/mlflow\n14517\nGaiZhenbiao/ChuanhuChatGPT\n10793\ndatabrickslabs/dolly\n10155\nopenai/evals\n10076\nAIGC-Audio/AudioGPT\n8619\nlogspace-ai/langflow\n8211\nimClumsyPanda/langchain-ChatGLM\n8154\nPromtEngineer/localGPT\n6853\nStanGirard/quivr\n6830\nPipedreamHQ/pipedream\n6520\ngo-skynet/LocalAI\n6018\narc53/DocsGPT\n5643\ne2b-dev/e2b\n5075\nlanggenius/dify\n4281\nnsarrazin/serge\n4228\nzauberzeug/nicegui\n4084\nmadawei2699/myGPTReader\n4039\nwenda-LLM/wenda\n3871\nGreyDGL/PentestGPT\n3837\nzilliztech/GPTCache\n3625\ncsunny/DB-GPT\n3545\ngkamradt/langchain-tutorials\n3404", "source": "https://python.langchain.com/en/latest/dependents.html"}
+{"id": "af52fc0a2ee1-1", "text": "3545\ngkamradt/langchain-tutorials\n3404\nmmabrouk/chatgpt-wrapper\n3303\npostgresml/postgresml\n3052\nmarqo-ai/marqo\n3014\nMineDojo/Voyager\n2945\nPrefectHQ/marvin\n2761\nproject-baize/baize-chatbot\n2673\nhwchase17/chat-langchain\n2589\nwhitead/paper-qa\n2572\nAzure-Samples/azure-search-openai-demo\n2366\nGerevAI/gerev\n2330\nOpenGVLab/InternGPT\n2289\nParisNeo/gpt4all-ui\n2159\nOpenBMB/BMTools\n2158\nguangzhengli/ChatFiles\n2005\nh2oai/h2ogpt\n1939\nFarama-Foundation/PettingZoo\n1845\nOpenGVLab/Ask-Anything\n1749\nIntelligenzaArtificiale/Free-Auto-GPT\n1740\nUnstructured-IO/unstructured\n1628\nhwchase17/notion-qa\n1607\nNVIDIA/NeMo-Guardrails\n1544\nSamurAIGPT/privateGPT\n1543\npaulpierre/RasaGPT\n1526\nyanqiangmiffy/Chinese-LangChain\n1485\nKav-K/GPTDiscord\n1402\nvocodedev/vocode-python\n1387\nChainlit/chainlit\n1336\nlunasec-io/lunasec\n1323\npsychic-api/psychic\n1248\nagiresearch/OpenAGI\n1208\njina-ai/thinkgpt\n1193\nthomas-yanxin/LangChain-ChatGLM-Webui\n1182", "source": "https://python.langchain.com/en/latest/dependents.html"}
+{"id": "af52fc0a2ee1-2", "text": "thomas-yanxin/LangChain-ChatGLM-Webui\n1182\nttengwang/Caption-Anything\n1137\njina-ai/dev-gpt\n1135\ngreshake/llm-security\n1086\nkeephq/keep\n1063\njuncongmoo/chatllama\n1037\nrichardyc/Chrome-GPT\n1035\nvisual-openllm/visual-openllm\n997\nmmz-001/knowledge_gpt\n995\njina-ai/langchain-serve\n949\nirgolic/AutoPR\n936\nmicrosoft/X-Decoder\n908\npoe-platform/api-bot-tutorial\n902\npeterw/Chat-with-Github-Repo\n875\ncirediatpl/FigmaChain\n822\nhomanp/superagent\n806\nseanpixel/Teenage-AGI\n800\nchatarena/chatarena\n796\nhashintel/hash\n795\nSamurAIGPT/Camel-AutoGPT\n786\nrlancemartin/auto-evaluator\n770\ncorca-ai/EVAL\n769\n101dotxyz/GPTeam\n755\nnoahshinn024/reflexion\n706\neyurtsev/kor\n695\ncheshire-cat-ai/core\n681\ne-johnstonn/BriefGPT\n656\nrun-llama/llama-lab\n635\ngriptape-ai/griptape\n583\nnamuan/dr-doc-search\n555\ngetmetal/motorhead\n550\nkreneskyp/ix\n543\nhwchase17/chat-your-data\n510\nAnil-matcha/ChatPDF\n501\nwhyiyhw/chatgpt-wechat\n497\nSamurAIGPT/ChatGPT-Developer-Plugins\n496\nmicrosoft/PodcastCopilot\n492\ndebanjum/khoj", "source": "https://python.langchain.com/en/latest/dependents.html"}
+{"id": "af52fc0a2ee1-3", "text": "496\nmicrosoft/PodcastCopilot\n492\ndebanjum/khoj\n485\nakshata29/chatpdf\n485\nlangchain-ai/langchain-aiplugin\n462\njina-ai/agentchain\n460\nalexanderatallah/window.ai\n457\nyeagerai/yeagerai-agent\n451\nmckaywrigley/repo-chat\n446\nmichaelthwan/searchGPT\n446\nmpaepper/content-chatbot\n441\nfreddyaboulton/gradio-tools\n439\nruoccofabrizio/azure-open-ai-embeddings-qna\n429\nStevenGrove/GPT4Tools\n422\njonra1993/fastapi-alembic-sqlmodel-async\n407\nmsoedov/langcorn\n405\namosjyng/langchain-visualizer\n395\najndkr/lanarky\n384\nmtenenholtz/chat-twitter\n376\nsteamship-core/steamship-langchain\n371\nlangchain-ai/auto-evaluator\n365\nxuwenhao/geektime-ai-course\n358\ncontinuum-llms/chatgpt-memory\n357\nopentensor/bittensor\n347\nshowlab/VLog\n345\ndaodao97/chatdoc\n345\nlogan-markewich/llama_index_starter_pack\n332\npoe-platform/poe-protocol\n320\nexplosion/spacy-llm\n312\nandylokandy/gpt-4-search\n311\nalejandro-ao/langchain-ask-pdf\n310\njupyterlab/jupyter-ai\n294\nBlackHC/llm-strategy\n283\nitamargol/openai\n281\nmomegas/megabots\n279\npersonoids/personoids-lite\n277\nyvann-hub/Robby-chatbot\n267\nAnil-matcha/Website-to-Chatbot", "source": "https://python.langchain.com/en/latest/dependents.html"}
+{"id": "af52fc0a2ee1-4", "text": "267\nAnil-matcha/Website-to-Chatbot\n266\nCheems-Seminar/grounded-segment-any-parts\n260\nsullivan-sean/chat-langchainjs\n248\nbborn/howdoi.ai\n245\ndaveebbelaar/langchain-experiments\n240\nMagnivOrg/prompt-layer-library\n237\nur-whitelab/exmol\n234\nconceptofmind/toolformer\n234\nrecalign/RecAlign\n226\nOpenBMB/AgentVerse\n220\nalvarosevilla95/autolang\n219\nJohnSnowLabs/nlptest\n216\nkaleido-lab/dolphin\n215\ntruera/trulens\n208\nNimbleBoxAI/ChainFury\n208\nairobotlab/KoChatGPT\n207\nmonarch-initiative/ontogpt\n200\npaolorechia/learn-langchain\n195\nshaman-ai/agent-actors\n185\nHaste171/langchain-chatbot\n184\nplchld/InsightFlow\n182\nsu77ungr/CASALIOY\n180\njbrukh/gpt-jargon\n177\nbenthecoder/ClassGPT\n174\nbillxbf/ReWOO\n170\nfilip-michalsky/SalesGPT\n168\nhwchase17/langchain-streamlit-template\n168\nradi-cho/datasetGPT\n164\nhardbyte/qabot\n164\ngia-guar/JARVIS-ChatGPT\n158\nplastic-labs/tutor-gpt\n154\nyasyf/compress-gpt\n154\nfengyuli-dev/multimedia-gpt\n154\nethanyanjiali/minChatGPT\n153\nhwchase17/chroma-langchain\n153\nedreisMD/plugnplai\n148\nchakkaradeep/pyCodeAGI\n145", "source": "https://python.langchain.com/en/latest/dependents.html"}
+{"id": "af52fc0a2ee1-5", "text": "148\nchakkaradeep/pyCodeAGI\n145\nccurme/yolopandas\n145\nshamspias/customizable-gpt-chatbot\n144\nrealminchoi/babyagi-ui\n143\nPradipNichite/Youtube-Tutorials\n140\ngustavz/DataChad\n140\nKlingefjord/chatgpt-telegram\n140\nJaseci-Labs/jaseci\n139\nhandrew/browserpilot\n137\njmpaz/promptlib\n137\nSamPink/dev-gpt\n135\nmenloparklab/langchain-cohere-qdrant-doc-retrieval\n135\nhirokidaichi/wanna\n135\nsteamship-core/vercel-examples\n134\npablomarin/GPT-Azure-Search-Engine\n133\nibiscp/LLM-IMDB\n133\nshauryr/S2QA\n133\njerlendds/osintbuddy\n132\nyuanjie-ai/ChatLLM\n132\nyasyf/summ\n132\nWongSaang/chatgpt-ui-server\n130\npeterw/StoryStorm\n127\nTeahouse-Studios/akari-bot\n126\nvaibkumr/prompt-optimizer\n125\npreset-io/promptimize\n124\nhomanp/vercel-langchain\n124\npetehunt/langchain-github-bot\n123\neunomia-bpf/GPTtrace\n118\nnicknochnack/LangchainDocuments\n116\njiran214/GPT-vup\n112\nrsaryev/talk-codebase\n112\nzenml-io/zenml-projects\n112\nmicrosoft/azure-openai-in-a-day-workshop\n112\ndavila7/file-gpt\n112\nprof-frink-lab/slangchain\n111\naurelio-labs/arxiv-bot\n110", "source": "https://python.langchain.com/en/latest/dependents.html"}
+{"id": "af52fc0a2ee1-6", "text": "111\naurelio-labs/arxiv-bot\n110\nfixie-ai/fixie-examples\n108\nmiaoshouai/miaoshouai-assistant\n105\nflurb18/AgentOoba\n103\nsolana-labs/chatgpt-plugin\n102\nSignificant-Gravitas/Auto-GPT-Benchmarks\n102\nkaarthik108/snowChat\n100\nGenerated by github-dependents-info\ngithub-dependents-info --repo hwchase17/langchain --markdownfile dependents.md --minstars 100 --sort stars\nprevious\nZilliz\nnext\nDeployments\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/dependents.html"}
+{"id": "46379634f13f-0", "text": "Source code for langchain.requests\n\"\"\"Lightweight wrapper around requests library, with async support.\"\"\"\nfrom contextlib import asynccontextmanager\nfrom typing import Any, AsyncGenerator, Dict, Optional\nimport aiohttp\nimport requests\nfrom pydantic import BaseModel, Extra\nclass Requests(BaseModel):\n \"\"\"Wrapper around requests to handle auth and async.\n The main purpose of this wrapper is to handle authentication (by saving\n headers) and enable easy async methods on the same base object.\n \"\"\"\n headers: Optional[Dict[str, str]] = None\n aiosession: Optional[aiohttp.ClientSession] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n def get(self, url: str, **kwargs: Any) -> requests.Response:\n \"\"\"GET the URL and return the text.\"\"\"\n return requests.get(url, headers=self.headers, **kwargs)\n def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:\n \"\"\"POST to the URL and return the text.\"\"\"\n return requests.post(url, json=data, headers=self.headers, **kwargs)\n def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:\n \"\"\"PATCH the URL and return the text.\"\"\"\n return requests.patch(url, json=data, headers=self.headers, **kwargs)\n def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:\n \"\"\"PUT the URL and return the text.\"\"\"\n return requests.put(url, json=data, headers=self.headers, **kwargs)\n def delete(self, url: str, **kwargs: Any) -> requests.Response:", "source": "https://python.langchain.com/en/latest/_modules/langchain/requests.html"}
+{"id": "46379634f13f-1", "text": "def delete(self, url: str, **kwargs: Any) -> requests.Response:\n \"\"\"DELETE the URL and return the text.\"\"\"\n return requests.delete(url, headers=self.headers, **kwargs)\n @asynccontextmanager\n async def _arequest(\n self, method: str, url: str, **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"Make an async request.\"\"\"\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:\n async with session.request(\n method, url, headers=self.headers, **kwargs\n ) as response:\n yield response\n else:\n async with self.aiosession.request(\n method, url, headers=self.headers, **kwargs\n ) as response:\n yield response\n @asynccontextmanager\n async def aget(\n self, url: str, **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"GET the URL and return the text asynchronously.\"\"\"\n async with self._arequest(\"GET\", url, **kwargs) as response:\n yield response\n @asynccontextmanager\n async def apost(\n self, url: str, data: Dict[str, Any], **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"POST to the URL and return the text asynchronously.\"\"\"\n async with self._arequest(\"POST\", url, json=data, **kwargs) as response:\n yield response\n @asynccontextmanager\n async def apatch(\n self, url: str, data: Dict[str, Any], **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"PATCH the URL and return the text asynchronously.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/requests.html"}
+{"id": "46379634f13f-2", "text": "\"\"\"PATCH the URL and return the text asynchronously.\"\"\"\n async with self._arequest(\"PATCH\", url, json=data, **kwargs) as response:\n yield response\n @asynccontextmanager\n async def aput(\n self, url: str, data: Dict[str, Any], **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"PUT the URL and return the text asynchronously.\"\"\"\n async with self._arequest(\"PUT\", url, json=data, **kwargs) as response:\n yield response\n @asynccontextmanager\n async def adelete(\n self, url: str, **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"DELETE the URL and return the text asynchronously.\"\"\"\n async with self._arequest(\"DELETE\", url, **kwargs) as response:\n yield response\n[docs]class TextRequestsWrapper(BaseModel):\n \"\"\"Lightweight wrapper around requests library.\n The main purpose of this wrapper is to always return a text output.\n \"\"\"\n headers: Optional[Dict[str, str]] = None\n aiosession: Optional[aiohttp.ClientSession] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def requests(self) -> Requests:\n return Requests(headers=self.headers, aiosession=self.aiosession)\n[docs] def get(self, url: str, **kwargs: Any) -> str:\n \"\"\"GET the URL and return the text.\"\"\"\n return self.requests.get(url, **kwargs).text\n[docs] def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:", "source": "https://python.langchain.com/en/latest/_modules/langchain/requests.html"}
+{"id": "46379634f13f-3", "text": "\"\"\"POST to the URL and return the text.\"\"\"\n return self.requests.post(url, data, **kwargs).text\n[docs] def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:\n \"\"\"PATCH the URL and return the text.\"\"\"\n return self.requests.patch(url, data, **kwargs).text\n[docs] def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:\n \"\"\"PUT the URL and return the text.\"\"\"\n return self.requests.put(url, data, **kwargs).text\n[docs] def delete(self, url: str, **kwargs: Any) -> str:\n \"\"\"DELETE the URL and return the text.\"\"\"\n return self.requests.delete(url, **kwargs).text\n[docs] async def aget(self, url: str, **kwargs: Any) -> str:\n \"\"\"GET the URL and return the text asynchronously.\"\"\"\n async with self.requests.aget(url, **kwargs) as response:\n return await response.text()\n[docs] async def apost(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:\n \"\"\"POST to the URL and return the text asynchronously.\"\"\"\n async with self.requests.apost(url, data, **kwargs) as response:\n return await response.text()\n[docs] async def apatch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:\n \"\"\"PATCH the URL and return the text asynchronously.\"\"\"\n async with self.requests.apatch(url, data, **kwargs) as response:\n return await response.text()\n[docs] async def aput(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:", "source": "https://python.langchain.com/en/latest/_modules/langchain/requests.html"}
+{"id": "46379634f13f-4", "text": "\"\"\"PUT the URL and return the text asynchronously.\"\"\"\n async with self.requests.aput(url, data, **kwargs) as response:\n return await response.text()\n[docs] async def adelete(self, url: str, **kwargs: Any) -> str:\n \"\"\"DELETE the URL and return the text asynchronously.\"\"\"\n async with self.requests.adelete(url, **kwargs) as response:\n return await response.text()\n# For backwards compatibility\nRequestsWrapper = TextRequestsWrapper\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/requests.html"}
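+{"id": "editorial-example-requests-0", "text": "Usage sketch for the wrapper above (editorial addition, not part of the scraped page; the URL and bearer token are placeholders, and the requests/aiohttp packages must be installed):\nimport asyncio\nfrom langchain.requests import TextRequestsWrapper\n# Headers supplied once are attached to every call, sync or async.\nwrapper = TextRequestsWrapper(headers={\"Authorization\": \"Bearer <placeholder-token>\"})\n# Synchronous GET returns the response body as text.\nbody = wrapper.get(\"https://example.com/api\")\n# The async variants mirror the sync API on top of aiohttp.\nasync def fetch() -> str:\n    return await wrapper.aget(\"https://example.com/api\")\nprint(asyncio.run(fetch()))", "source": "https://python.langchain.com/en/latest/_modules/langchain/requests.html"}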
+{"id": "e990308dc849-0", "text": "Source code for langchain.text_splitter\n\"\"\"Functionality for splitting text.\"\"\"\nfrom __future__ import annotations\nimport copy\nimport logging\nimport re\nfrom abc import ABC, abstractmethod\nfrom dataclasses import dataclass\nfrom enum import Enum\nfrom typing import (\n AbstractSet,\n Any,\n Callable,\n Collection,\n Iterable,\n List,\n Literal,\n Optional,\n Sequence,\n Type,\n TypeVar,\n Union,\n cast,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.schema import BaseDocumentTransformer\nlogger = logging.getLogger(__name__)\nTS = TypeVar(\"TS\", bound=\"TextSplitter\")\ndef _split_text_with_regex(\n text: str, separator: str, keep_separator: bool\n) -> List[str]:\n # Now that we have the separator, split the text\n if separator:\n if keep_separator:\n # The parentheses in the pattern keep the delimiters in the result.\n _splits = re.split(f\"({separator})\", text)\n splits = [_splits[i] + _splits[i + 1] for i in range(1, len(_splits), 2)]\n if len(_splits) % 2 == 0:\n splits += _splits[-1:]\n splits = [_splits[0]] + splits\n else:\n splits = text.split(separator)\n else:\n splits = list(text)\n return [s for s in splits if s != \"\"]\n[docs]class TextSplitter(BaseDocumentTransformer, ABC):\n \"\"\"Interface for splitting text into chunks.\"\"\"\n def __init__(\n self,\n chunk_size: int = 4000,\n chunk_overlap: int = 200,", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
+{"id": "e990308dc849-1", "text": "chunk_overlap: int = 200,\n length_function: Callable[[str], int] = len,\n keep_separator: bool = False,\n add_start_index: bool = False,\n ) -> None:\n \"\"\"Create a new TextSplitter.\n Args:\n chunk_size: Maximum size of chunks to return\n chunk_overlap: Overlap in characters between chunks\n length_function: Function that measures the length of given chunks\n keep_separator: Whether or not to keep the separator in the chunks\n add_start_index: If `True`, includes chunk's start index in metadata\n \"\"\"\n if chunk_overlap > chunk_size:\n raise ValueError(\n f\"Got a larger chunk overlap ({chunk_overlap}) than chunk size \"\n f\"({chunk_size}), should be smaller.\"\n )\n self._chunk_size = chunk_size\n self._chunk_overlap = chunk_overlap\n self._length_function = length_function\n self._keep_separator = keep_separator\n self._add_start_index = add_start_index\n[docs] @abstractmethod\n def split_text(self, text: str) -> List[str]:\n \"\"\"Split text into multiple components.\"\"\"\n[docs] def create_documents(\n self, texts: List[str], metadatas: Optional[List[dict]] = None\n ) -> List[Document]:\n \"\"\"Create documents from a list of texts.\"\"\"\n _metadatas = metadatas or [{}] * len(texts)\n documents = []\n for i, text in enumerate(texts):\n index = -1\n for chunk in self.split_text(text):\n metadata = copy.deepcopy(_metadatas[i])\n if self._add_start_index:\n index = text.find(chunk, index + 1)\n metadata[\"start_index\"] = index", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
+{"id": "e990308dc849-2", "text": "metadata[\"start_index\"] = index\n new_doc = Document(page_content=chunk, metadata=metadata)\n documents.append(new_doc)\n return documents\n[docs] def split_documents(self, documents: Iterable[Document]) -> List[Document]:\n \"\"\"Split documents.\"\"\"\n texts, metadatas = [], []\n for doc in documents:\n texts.append(doc.page_content)\n metadatas.append(doc.metadata)\n return self.create_documents(texts, metadatas=metadatas)\n def _join_docs(self, docs: List[str], separator: str) -> Optional[str]:\n text = separator.join(docs)\n text = text.strip()\n if text == \"\":\n return None\n else:\n return text\n def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]:\n # We now want to combine these smaller pieces into medium size\n # chunks to send to the LLM.\n separator_len = self._length_function(separator)\n docs = []\n current_doc: List[str] = []\n total = 0\n for d in splits:\n _len = self._length_function(d)\n if (\n total + _len + (separator_len if len(current_doc) > 0 else 0)\n > self._chunk_size\n ):\n if total > self._chunk_size:\n logger.warning(\n f\"Created a chunk of size {total}, \"\n f\"which is longer than the specified {self._chunk_size}\"\n )\n if len(current_doc) > 0:\n doc = self._join_docs(current_doc, separator)\n if doc is not None:\n docs.append(doc)\n # Keep on popping if:", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
+{"id": "e990308dc849-3", "text": "docs.append(doc)\n # Keep on popping if:\n # - we have a larger chunk than in the chunk overlap\n # - or if we still have any chunks and the length is long\n while total > self._chunk_overlap or (\n total + _len + (separator_len if len(current_doc) > 0 else 0)\n > self._chunk_size\n and total > 0\n ):\n total -= self._length_function(current_doc[0]) + (\n separator_len if len(current_doc) > 1 else 0\n )\n current_doc = current_doc[1:]\n current_doc.append(d)\n total += _len + (separator_len if len(current_doc) > 1 else 0)\n doc = self._join_docs(current_doc, separator)\n if doc is not None:\n docs.append(doc)\n return docs\n[docs] @classmethod\n def from_huggingface_tokenizer(cls, tokenizer: Any, **kwargs: Any) -> TextSplitter:\n \"\"\"Text splitter that uses HuggingFace tokenizer to count length.\"\"\"\n try:\n from transformers import PreTrainedTokenizerBase\n if not isinstance(tokenizer, PreTrainedTokenizerBase):\n raise ValueError(\n \"Tokenizer received was not an instance of PreTrainedTokenizerBase\"\n )\n def _huggingface_tokenizer_length(text: str) -> int:\n return len(tokenizer.encode(text))\n except ImportError:\n raise ValueError(\n \"Could not import transformers python package. \"\n \"Please install it with `pip install transformers`.\"\n )\n return cls(length_function=_huggingface_tokenizer_length, **kwargs)\n[docs] @classmethod\n def from_tiktoken_encoder(\n cls: Type[TS],", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
+{"id": "e990308dc849-4", "text": "def from_tiktoken_encoder(\n cls: Type[TS],\n encoding_name: str = \"gpt2\",\n model_name: Optional[str] = None,\n allowed_special: Union[Literal[\"all\"], AbstractSet[str]] = set(),\n disallowed_special: Union[Literal[\"all\"], Collection[str]] = \"all\",\n **kwargs: Any,\n ) -> TS:\n \"\"\"Text splitter that uses tiktoken encoder to count length.\"\"\"\n try:\n import tiktoken\n except ImportError:\n raise ImportError(\n \"Could not import tiktoken python package. \"\n \"This is needed in order to calculate max_tokens_for_prompt. \"\n \"Please install it with `pip install tiktoken`.\"\n )\n if model_name is not None:\n enc = tiktoken.encoding_for_model(model_name)\n else:\n enc = tiktoken.get_encoding(encoding_name)\n def _tiktoken_encoder(text: str) -> int:\n return len(\n enc.encode(\n text,\n allowed_special=allowed_special,\n disallowed_special=disallowed_special,\n )\n )\n if issubclass(cls, TokenTextSplitter):\n extra_kwargs = {\n \"encoding_name\": encoding_name,\n \"model_name\": model_name,\n \"allowed_special\": allowed_special,\n \"disallowed_special\": disallowed_special,\n }\n kwargs = {**kwargs, **extra_kwargs}\n return cls(length_function=_tiktoken_encoder, **kwargs)\n[docs] def transform_documents(\n self, documents: Sequence[Document], **kwargs: Any\n ) -> Sequence[Document]:\n \"\"\"Transform sequence of documents by splitting them.\"\"\"\n return self.split_documents(list(documents))", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
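+{"id": "editorial-example-text-splitter-0", "text": "Usage sketch for from_tiktoken_encoder above (editorial addition, not part of the scraped page; assumes tiktoken is installed, and the input string is a placeholder):\nfrom langchain.text_splitter import CharacterTextSplitter\n# Chunk length is measured in tiktoken tokens rather than characters;\n# chunk_size and chunk_overlap are forwarded to the TextSplitter constructor.\nsplitter = CharacterTextSplitter.from_tiktoken_encoder(\n    encoding_name=\"gpt2\", chunk_size=100, chunk_overlap=20\n)\nchunks = splitter.split_text(\"<some long text>\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}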
+{"id": "e990308dc849-5", "text": "return self.split_documents(list(documents))\n[docs] async def atransform_documents(\n self, documents: Sequence[Document], **kwargs: Any\n ) -> Sequence[Document]:\n \"\"\"Asynchronously transform a sequence of documents by splitting them.\"\"\"\n raise NotImplementedError\n[docs]class CharacterTextSplitter(TextSplitter):\n \"\"\"Implementation of splitting text that looks at characters.\"\"\"\n def __init__(self, separator: str = \"\\n\\n\", **kwargs: Any) -> None:\n \"\"\"Create a new TextSplitter.\"\"\"\n super().__init__(**kwargs)\n self._separator = separator\n[docs] def split_text(self, text: str) -> List[str]:\n \"\"\"Split incoming text and return chunks.\"\"\"\n # First we naively split the large input into a bunch of smaller ones.\n splits = _split_text_with_regex(text, self._separator, self._keep_separator)\n _separator = \"\" if self._keep_separator else self._separator\n return self._merge_splits(splits, _separator)\n# should be in newer Python versions (3.10+)\n# @dataclass(frozen=True, kw_only=True, slots=True)\n[docs]@dataclass(frozen=True)\nclass Tokenizer:\n chunk_overlap: int\n tokens_per_chunk: int\n decode: Callable[[list[int]], str]\n encode: Callable[[str], List[int]]\n[docs]def split_text_on_tokens(*, text: str, tokenizer: Tokenizer) -> List[str]:\n \"\"\"Split incoming text and return chunks.\"\"\"\n splits: List[str] = []\n input_ids = tokenizer.encode(text)\n start_idx = 0\n cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))\n chunk_ids = input_ids[start_idx:cur_idx]", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
+{"id": "e990308dc849-6", "text": "chunk_ids = input_ids[start_idx:cur_idx]\n while start_idx < len(input_ids):\n splits.append(tokenizer.decode(chunk_ids))\n start_idx += tokenizer.tokens_per_chunk - tokenizer.chunk_overlap\n cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))\n chunk_ids = input_ids[start_idx:cur_idx]\n return splits\n[docs]class TokenTextSplitter(TextSplitter):\n \"\"\"Implementation of splitting text that looks at tokens.\"\"\"\n def __init__(\n self,\n encoding_name: str = \"gpt2\",\n model_name: Optional[str] = None,\n allowed_special: Union[Literal[\"all\"], AbstractSet[str]] = set(),\n disallowed_special: Union[Literal[\"all\"], Collection[str]] = \"all\",\n **kwargs: Any,\n ) -> None:\n \"\"\"Create a new TextSplitter.\"\"\"\n super().__init__(**kwargs)\n try:\n import tiktoken\n except ImportError:\n raise ImportError(\n \"Could not import tiktoken python package. \"\n \"This is needed for TokenTextSplitter. \"\n \"Please install it with `pip install tiktoken`.\"\n )\n if model_name is not None:\n enc = tiktoken.encoding_for_model(model_name)\n else:\n enc = tiktoken.get_encoding(encoding_name)\n self._tokenizer = enc\n self._allowed_special = allowed_special\n self._disallowed_special = disallowed_special\n[docs] def split_text(self, text: str) -> List[str]:\n def _encode(_text: str) -> List[int]:\n return self._tokenizer.encode(\n _text,\n allowed_special=self._allowed_special,\n disallowed_special=self._disallowed_special,", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
+{"id": "e990308dc849-7", "text": "allowed_special=self._allowed_special,\n disallowed_special=self._disallowed_special,\n )\n tokenizer = Tokenizer(\n chunk_overlap=self._chunk_overlap,\n tokens_per_chunk=self._chunk_size,\n decode=self._tokenizer.decode,\n encode=_encode,\n )\n return split_text_on_tokens(text=text, tokenizer=tokenizer)\n[docs]class SentenceTransformersTokenTextSplitter(TextSplitter):\n \"\"\"Implementation of splitting text that looks at tokens.\"\"\"\n def __init__(\n self,\n chunk_overlap: int = 50,\n model_name: str = \"sentence-transformers/all-mpnet-base-v2\",\n tokens_per_chunk: Optional[int] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Create a new TextSplitter.\"\"\"\n super().__init__(**kwargs, chunk_overlap=chunk_overlap)\n try:\n from sentence_transformers import SentenceTransformer\n except ImportError:\n raise ImportError(\n \"Could not import sentence_transformer python package. \"\n \"This is needed for SentenceTransformersTokenTextSplitter. \"\n \"Please install it with `pip install sentence-transformers`.\"\n )\n self.model_name = model_name\n self._model = SentenceTransformer(self.model_name)\n self.tokenizer = self._model.tokenizer\n self._initialize_chunk_configuration(tokens_per_chunk=tokens_per_chunk)\n def _initialize_chunk_configuration(\n self, *, tokens_per_chunk: Optional[int]\n ) -> None:\n self.maximum_tokens_per_chunk = cast(int, self._model.max_seq_length)\n if tokens_per_chunk is None:\n self.tokens_per_chunk = self.maximum_tokens_per_chunk\n else:\n self.tokens_per_chunk = tokens_per_chunk", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
+{"id": "e990308dc849-8", "text": "else:\n self.tokens_per_chunk = tokens_per_chunk\n if self.tokens_per_chunk > self.maximum_tokens_per_chunk:\n raise ValueError(\n f\"The token limit of the models '{self.model_name}'\"\n f\" is: {self.maximum_tokens_per_chunk}.\"\n f\" Argument tokens_per_chunk={self.tokens_per_chunk}\"\n f\" > maximum token limit.\"\n )\n[docs] def split_text(self, text: str) -> List[str]:\n def encode_strip_start_and_stop_token_ids(text: str) -> List[int]:\n return self._encode(text)[1:-1]\n tokenizer = Tokenizer(\n chunk_overlap=self._chunk_overlap,\n tokens_per_chunk=self.tokens_per_chunk,\n decode=self.tokenizer.decode,\n encode=encode_strip_start_and_stop_token_ids,\n )\n return split_text_on_tokens(text=text, tokenizer=tokenizer)\n[docs] def count_tokens(self, *, text: str) -> int:\n return len(self._encode(text))\n _max_length_equal_32_bit_integer = 2**32\n def _encode(self, text: str) -> List[int]:\n token_ids_with_start_and_end_token_ids = self.tokenizer.encode(\n text,\n max_length=self._max_length_equal_32_bit_integer,\n truncation=\"do_not_truncate\",\n )\n return token_ids_with_start_and_end_token_ids\n[docs]class Language(str, Enum):\n CPP = \"cpp\"\n GO = \"go\"\n JAVA = \"java\"\n JS = \"js\"\n PHP = \"php\"\n PROTO = \"proto\"\n PYTHON = \"python\"\n RST = \"rst\"\n RUBY = \"ruby\"\n RUST = \"rust\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
+{"id": "e990308dc849-9", "text": "RUBY = \"ruby\"\n RUST = \"rust\"\n SCALA = \"scala\"\n SWIFT = \"swift\"\n MARKDOWN = \"markdown\"\n LATEX = \"latex\"\n HTML = \"html\"\n[docs]class RecursiveCharacterTextSplitter(TextSplitter):\n \"\"\"Implementation of splitting text that looks at characters.\n Recursively tries to split by different characters to find one\n that works.\n \"\"\"\n def __init__(\n self,\n separators: Optional[List[str]] = None,\n keep_separator: bool = True,\n **kwargs: Any,\n ) -> None:\n \"\"\"Create a new TextSplitter.\"\"\"\n super().__init__(keep_separator=keep_separator, **kwargs)\n self._separators = separators or [\"\\n\\n\", \"\\n\", \" \", \"\"]\n def _split_text(self, text: str, separators: List[str]) -> List[str]:\n \"\"\"Split incoming text and return chunks.\"\"\"\n final_chunks = []\n # Get appropriate separator to use\n separator = separators[-1]\n new_separators = []\n for i, _s in enumerate(separators):\n if _s == \"\":\n separator = _s\n break\n if re.search(_s, text):\n separator = _s\n new_separators = separators[i + 1 :]\n break\n splits = _split_text_with_regex(text, separator, self._keep_separator)\n # Now go merging things, recursively splitting longer texts.\n _good_splits = []\n _separator = \"\" if self._keep_separator else separator\n for s in splits:\n if self._length_function(s) < self._chunk_size:\n _good_splits.append(s)", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
+{"id": "e990308dc849-10", "text": "_good_splits.append(s)\n else:\n if _good_splits:\n merged_text = self._merge_splits(_good_splits, _separator)\n final_chunks.extend(merged_text)\n _good_splits = []\n if not new_separators:\n final_chunks.append(s)\n else:\n other_info = self._split_text(s, new_separators)\n final_chunks.extend(other_info)\n if _good_splits:\n merged_text = self._merge_splits(_good_splits, _separator)\n final_chunks.extend(merged_text)\n return final_chunks\n[docs] def split_text(self, text: str) -> List[str]:\n return self._split_text(text, self._separators)\n[docs] @classmethod\n def from_language(\n cls, language: Language, **kwargs: Any\n ) -> RecursiveCharacterTextSplitter:\n separators = cls.get_separators_for_language(language)\n return cls(separators=separators, **kwargs)\n[docs] @staticmethod\n def get_separators_for_language(language: Language) -> List[str]:\n if language == Language.CPP:\n return [\n # Split along class definitions\n \"\\nclass \",\n # Split along function definitions\n \"\\nvoid \",\n \"\\nint \",\n \"\\nfloat \",\n \"\\ndouble \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nwhile \",\n \"\\nswitch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.GO:\n return [\n # Split along function definitions", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
+{"id": "e990308dc849-11", "text": "elif language == Language.GO:\n return [\n # Split along function definitions\n \"\\nfunc \",\n \"\\nvar \",\n \"\\nconst \",\n \"\\ntype \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nswitch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.JAVA:\n return [\n # Split along class definitions\n \"\\nclass \",\n # Split along method definitions\n \"\\npublic \",\n \"\\nprotected \",\n \"\\nprivate \",\n \"\\nstatic \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nwhile \",\n \"\\nswitch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.JS:\n return [\n # Split along function definitions\n \"\\nfunction \",\n \"\\nconst \",\n \"\\nlet \",\n \"\\nvar \",\n \"\\nclass \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nwhile \",\n \"\\nswitch \",\n \"\\ncase \",\n \"\\ndefault \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.PHP:\n return [\n # Split along function definitions\n \"\\nfunction \",\n # Split along class definitions\n \"\\nclass \",", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
+{"id": "e990308dc849-12", "text": "\"\\nfunction \",\n # Split along class definitions\n \"\\nclass \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nforeach \",\n \"\\nwhile \",\n \"\\ndo \",\n \"\\nswitch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.PROTO:\n return [\n # Split along message definitions\n \"\\nmessage \",\n # Split along service definitions\n \"\\nservice \",\n # Split along enum definitions\n \"\\nenum \",\n # Split along option definitions\n \"\\noption \",\n # Split along import statements\n \"\\nimport \",\n # Split along syntax declarations\n \"\\nsyntax \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.PYTHON:\n return [\n # First, try to split along class definitions\n \"\\nclass \",\n \"\\ndef \",\n \"\\n\\tdef \",\n # Now split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.RST:\n return [\n # Split along section titles\n \"\\n=+\\n\",\n \"\\n-+\\n\",\n \"\\n\\*+\\n\",\n # Split along directive markers\n \"\\n\\n.. *\\n\\n\",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.RUBY:\n return [", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
+{"id": "e990308dc849-13", "text": "\"\",\n ]\n elif language == Language.RUBY:\n return [\n # Split along method definitions\n \"\\ndef \",\n \"\\nclass \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nunless \",\n \"\\nwhile \",\n \"\\nfor \",\n \"\\ndo \",\n \"\\nbegin \",\n \"\\nrescue \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.RUST:\n return [\n # Split along function definitions\n \"\\nfn \",\n \"\\nconst \",\n \"\\nlet \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nwhile \",\n \"\\nfor \",\n \"\\nloop \",\n \"\\nmatch \",\n \"\\nconst \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.SCALA:\n return [\n # Split along class definitions\n \"\\nclass \",\n \"\\nobject \",\n # Split along method definitions\n \"\\ndef \",\n \"\\nval \",\n \"\\nvar \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nwhile \",\n \"\\nmatch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.SWIFT:\n return [\n # Split along function definitions\n \"\\nfunc \",\n # Split along class definitions\n \"\\nclass \",", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
+{"id": "e990308dc849-14", "text": "\"\\nfunc \",\n # Split along class definitions\n \"\\nclass \",\n \"\\nstruct \",\n \"\\nenum \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nwhile \",\n \"\\ndo \",\n \"\\nswitch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.MARKDOWN:\n return [\n # First, try to split along Markdown headings (starting with level 2)\n \"\\n#{1,6} \",\n # Note the alternative syntax for headings (below) is not handled here\n # Heading level 2\n # ---------------\n # End of code block\n \"```\\n\",\n # Horizontal lines\n \"\\n\\*\\*\\*+\\n\",\n \"\\n---+\\n\",\n \"\\n___+\\n\",\n # Note that this splitter doesn't handle horizontal lines defined\n # by *three or more* of ***, ---, or ___, but this is not handled\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.LATEX:\n return [\n # First, try to split along Latex sections\n \"\\n\\\\\\chapter{\",\n \"\\n\\\\\\section{\",\n \"\\n\\\\\\subsection{\",\n \"\\n\\\\\\subsubsection{\",\n # Now split by environments\n \"\\n\\\\\\begin{enumerate}\",\n \"\\n\\\\\\begin{itemize}\",\n \"\\n\\\\\\begin{description}\",\n \"\\n\\\\\\begin{list}\",\n \"\\n\\\\\\begin{quote}\",\n \"\\n\\\\\\begin{quotation}\",", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
+{"id": "e990308dc849-15", "text": "\"\\n\\\\\\begin{quote}\",\n \"\\n\\\\\\begin{quotation}\",\n \"\\n\\\\\\begin{verse}\",\n \"\\n\\\\\\begin{verbatim}\",\n ## Now split by math environments\n \"\\n\\\\\\begin{align}\",\n \"$$\",\n \"$\",\n # Now split by the normal type of lines\n \" \",\n \"\",\n ]\n elif language == Language.HTML:\n return [\n # First, try to split along HTML tags\n \"<body\",\n \"<div\",\n \"<p\",\n \"<br\",\n \"<li\",\n \"<h1\",\n \"<h2\",\n \"<h3\",\n \"<h4\",\n \"<h5\",\n \"<h6\",\n \"<span\",\n \"<table\",\n \"<tr\",\n \"<td\",\n \"<th\",\n \"<ul\",\n \"<ol\",\n \"<header\",\n \"<footer\",\n \"<nav\",\n # Head\n \"<head\",\n \"<style\",\n \"<script\",\n \"<meta\",\n \"<title\",\n \"\",\n ]\n[docs]class NLTKTextSplitter(TextSplitter):\n \"\"\"Implementation of splitting text that looks at sentences using NLTK.\"\"\"\n def __init__(self, separator: str = \"\\n\\n\", **kwargs: Any) -> None:\n \"\"\"Initialize the NLTK splitter.\"\"\"\n super().__init__(**kwargs)\n try:\n from nltk.tokenize import sent_tokenize\n self._tokenizer = sent_tokenize\n except ImportError:\n raise ImportError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
+{"id": "e990308dc849-16", "text": "self._tokenizer = sent_tokenize\n except ImportError:\n raise ImportError(\n \"NLTK is not installed, please install it with `pip install nltk`.\"\n )\n self._separator = separator\n[docs] def split_text(self, text: str) -> List[str]:\n \"\"\"Split incoming text and return chunks.\"\"\"\n # First we naively split the large input into a bunch of smaller ones.\n splits = self._tokenizer(text)\n return self._merge_splits(splits, self._separator)\n[docs]class SpacyTextSplitter(TextSplitter):\n \"\"\"Implementation of splitting text that looks at sentences using Spacy.\"\"\"\n def __init__(\n self, separator: str = \"\\n\\n\", pipeline: str = \"en_core_web_sm\", **kwargs: Any\n ) -> None:\n \"\"\"Initialize the spacy text splitter.\"\"\"\n super().__init__(**kwargs)\n try:\n import spacy\n except ImportError:\n raise ImportError(\n \"Spacy is not installed, please install it with `pip install spacy`.\"\n )\n self._tokenizer = spacy.load(pipeline)\n self._separator = separator\n[docs] def split_text(self, text: str) -> List[str]:\n \"\"\"Split incoming text and return chunks.\"\"\"\n splits = (str(s) for s in self._tokenizer(text).sents)\n return self._merge_splits(splits, self._separator)\n# For backwards compatibility\n[docs]class PythonCodeTextSplitter(RecursiveCharacterTextSplitter):\n \"\"\"Attempts to split the text along Python syntax.\"\"\"\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Initialize a PythonCodeTextSplitter.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
+{"id": "e990308dc849-17", "text": "\"\"\"Initialize a PythonCodeTextSplitter.\"\"\"\n separators = self.get_separators_for_language(Language.PYTHON)\n super().__init__(separators=separators, **kwargs)\n[docs]class MarkdownTextSplitter(RecursiveCharacterTextSplitter):\n \"\"\"Attempts to split the text along Markdown-formatted headings.\"\"\"\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Initialize a MarkdownTextSplitter.\"\"\"\n separators = self.get_separators_for_language(Language.MARKDOWN)\n super().__init__(separators=separators, **kwargs)\n[docs]class LatexTextSplitter(RecursiveCharacterTextSplitter):\n \"\"\"Attempts to split the text along Latex-formatted layout elements.\"\"\"\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Initialize a LatexTextSplitter.\"\"\"\n separators = self.get_separators_for_language(Language.LATEX)\n super().__init__(separators=separators, **kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html"}
+{"id": "e98f17c4398f-0", "text": "Source code for langchain.document_transformers\n\"\"\"Transform documents\"\"\"\nfrom typing import Any, Callable, List, Sequence\nimport numpy as np\nfrom pydantic import BaseModel, Field\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.math_utils import cosine_similarity\nfrom langchain.schema import BaseDocumentTransformer, Document\nclass _DocumentWithState(Document):\n \"\"\"Wrapper for a document that includes arbitrary state.\"\"\"\n state: dict = Field(default_factory=dict)\n \"\"\"State associated with the document.\"\"\"\n def to_document(self) -> Document:\n \"\"\"Convert the DocumentWithState to a Document.\"\"\"\n return Document(page_content=self.page_content, metadata=self.metadata)\n @classmethod\n def from_document(cls, doc: Document) -> \"_DocumentWithState\":\n \"\"\"Create a DocumentWithState from a Document.\"\"\"\n if isinstance(doc, cls):\n return doc\n return cls(page_content=doc.page_content, metadata=doc.metadata)\n[docs]def get_stateful_documents(\n documents: Sequence[Document],\n) -> Sequence[_DocumentWithState]:\n return [_DocumentWithState.from_document(doc) for doc in documents]\ndef _filter_similar_embeddings(\n embedded_documents: List[List[float]], similarity_fn: Callable, threshold: float\n) -> List[int]:\n \"\"\"Filter redundant documents based on the similarity of their embeddings.\"\"\"\n similarity = np.tril(similarity_fn(embedded_documents, embedded_documents), k=-1)\n redundant = np.where(similarity > threshold)\n redundant_stacked = np.column_stack(redundant)\n redundant_sorted = np.argsort(similarity[redundant])[::-1]\n included_idxs = set(range(len(embedded_documents)))\n for first_idx, second_idx in redundant_stacked[redundant_sorted]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_transformers.html"}
+{"id": "e98f17c4398f-1", "text": "for first_idx, second_idx in redundant_stacked[redundant_sorted]:\n if first_idx in included_idxs and second_idx in included_idxs:\n # Default to dropping the second document of any highly similar pair.\n included_idxs.remove(second_idx)\n return list(sorted(included_idxs))\ndef _get_embeddings_from_stateful_docs(\n embeddings: Embeddings, documents: Sequence[_DocumentWithState]\n) -> List[List[float]]:\n if len(documents) and \"embedded_doc\" in documents[0].state:\n embedded_documents = [doc.state[\"embedded_doc\"] for doc in documents]\n else:\n embedded_documents = embeddings.embed_documents(\n [d.page_content for d in documents]\n )\n for doc, embedding in zip(documents, embedded_documents):\n doc.state[\"embedded_doc\"] = embedding\n return embedded_documents\n[docs]class EmbeddingsRedundantFilter(BaseDocumentTransformer, BaseModel):\n \"\"\"Filter that drops redundant documents by comparing their embeddings.\"\"\"\n embeddings: Embeddings\n \"\"\"Embeddings to use for embedding document contents.\"\"\"\n similarity_fn: Callable = cosine_similarity\n \"\"\"Similarity function for comparing documents. Function expected to take as input\n two matrices (List[List[float]]) and return a matrix of scores where higher values\n indicate greater similarity.\"\"\"\n similarity_threshold: float = 0.95\n \"\"\"Threshold for determining when two documents are similar enough\n to be considered redundant.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def transform_documents(\n self, documents: Sequence[Document], **kwargs: Any\n ) -> Sequence[Document]:\n \"\"\"Filter down documents.\"\"\"\n stateful_documents = get_stateful_documents(documents)", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_transformers.html"}
+{"id": "e98f17c4398f-2", "text": "\"\"\"Filter down documents.\"\"\"\n stateful_documents = get_stateful_documents(documents)\n embedded_documents = _get_embeddings_from_stateful_docs(\n self.embeddings, stateful_documents\n )\n included_idxs = _filter_similar_embeddings(\n embedded_documents, self.similarity_fn, self.similarity_threshold\n )\n return [stateful_documents[i] for i in sorted(included_idxs)]\n[docs] async def atransform_documents(\n self, documents: Sequence[Document], **kwargs: Any\n ) -> Sequence[Document]:\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_transformers.html"}
+{"id": "e52585dff09b-0", "text": "Source code for langchain.output_parsers.retry\nfrom __future__ import annotations\nfrom typing import TypeVar\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import (\n BaseOutputParser,\n OutputParserException,\n PromptValue,\n)\nNAIVE_COMPLETION_RETRY = \"\"\"Prompt:\n{prompt}\nCompletion:\n{completion}\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nPlease try again:\"\"\"\nNAIVE_COMPLETION_RETRY_WITH_ERROR = \"\"\"Prompt:\n{prompt}\nCompletion:\n{completion}\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nDetails: {error}\nPlease try again:\"\"\"\nNAIVE_RETRY_PROMPT = PromptTemplate.from_template(NAIVE_COMPLETION_RETRY)\nNAIVE_RETRY_WITH_ERROR_PROMPT = PromptTemplate.from_template(\n NAIVE_COMPLETION_RETRY_WITH_ERROR\n)\nT = TypeVar(\"T\")\n[docs]class RetryOutputParser(BaseOutputParser[T]):\n \"\"\"Wraps a parser and tries to fix parsing errors.\n Does this by passing the original prompt and the completion to another\n LLM, and telling it the completion did not satisfy criteria in the prompt.\n \"\"\"\n parser: BaseOutputParser[T]\n retry_chain: LLMChain\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n parser: BaseOutputParser[T],\n prompt: BasePromptTemplate = NAIVE_RETRY_PROMPT,\n ) -> RetryOutputParser[T]:\n chain = LLMChain(llm=llm, prompt=prompt)", "source": "https://python.langchain.com/en/latest/_modules/langchain/output_parsers/retry.html"}
+{"id": "e52585dff09b-1", "text": "chain = LLMChain(llm=llm, prompt=prompt)\n return cls(parser=parser, retry_chain=chain)\n[docs] def parse_with_prompt(self, completion: str, prompt_value: PromptValue) -> T:\n try:\n parsed_completion = self.parser.parse(completion)\n except OutputParserException:\n new_completion = self.retry_chain.run(\n prompt=prompt_value.to_string(), completion=completion\n )\n parsed_completion = self.parser.parse(new_completion)\n return parsed_completion\n[docs] def parse(self, completion: str) -> T:\n raise NotImplementedError(\n \"This OutputParser can only be called by the `parse_with_prompt` method.\"\n )\n[docs] def get_format_instructions(self) -> str:\n return self.parser.get_format_instructions()\n @property\n def _type(self) -> str:\n return \"retry\"\n[docs]class RetryWithErrorOutputParser(BaseOutputParser[T]):\n \"\"\"Wraps a parser and tries to fix parsing errors.\n Does this by passing the original prompt, the completion, AND the error\n that was raised to another language model and telling it that the completion\n did not work, and raised the given error. Differs from RetryOutputParser\n in that this implementation provides the error that was raised back to the\n LLM, which in theory should give it more information on how to fix it.\n \"\"\"\n parser: BaseOutputParser[T]\n retry_chain: LLMChain\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n parser: BaseOutputParser[T],\n prompt: BasePromptTemplate = NAIVE_RETRY_WITH_ERROR_PROMPT,\n ) -> RetryWithErrorOutputParser[T]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/output_parsers/retry.html"}
+{"id": "e52585dff09b-2", "text": ") -> RetryWithErrorOutputParser[T]:\n chain = LLMChain(llm=llm, prompt=prompt)\n return cls(parser=parser, retry_chain=chain)\n[docs] def parse_with_prompt(self, completion: str, prompt_value: PromptValue) -> T:\n try:\n parsed_completion = self.parser.parse(completion)\n except OutputParserException as e:\n new_completion = self.retry_chain.run(\n prompt=prompt_value.to_string(), completion=completion, error=repr(e)\n )\n parsed_completion = self.parser.parse(new_completion)\n return parsed_completion\n[docs] def parse(self, completion: str) -> T:\n raise NotImplementedError(\n \"This OutputParser can only be called by the `parse_with_prompt` method.\"\n )\n[docs] def get_format_instructions(self) -> str:\n return self.parser.get_format_instructions()\n @property\n def _type(self) -> str:\n return \"retry_with_error\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/output_parsers/retry.html"}
+{"id": "ca6a23bf3350-0", "text": "Source code for langchain.output_parsers.pydantic\nimport json\nimport re\nfrom typing import Type, TypeVar\nfrom pydantic import BaseModel, ValidationError\nfrom langchain.output_parsers.format_instructions import PYDANTIC_FORMAT_INSTRUCTIONS\nfrom langchain.schema import BaseOutputParser, OutputParserException\nT = TypeVar(\"T\", bound=BaseModel)\n[docs]class PydanticOutputParser(BaseOutputParser[T]):\n pydantic_object: Type[T]\n[docs] def parse(self, text: str) -> T:\n try:\n # Greedy search for 1st json candidate.\n match = re.search(\n r\"\\{.*\\}\", text.strip(), re.MULTILINE | re.IGNORECASE | re.DOTALL\n )\n json_str = \"\"\n if match:\n json_str = match.group()\n json_object = json.loads(json_str, strict=False)\n return self.pydantic_object.parse_obj(json_object)\n except (json.JSONDecodeError, ValidationError) as e:\n name = self.pydantic_object.__name__\n msg = f\"Failed to parse {name} from completion {text}. Got: {e}\"\n raise OutputParserException(msg)\n[docs] def get_format_instructions(self) -> str:\n schema = self.pydantic_object.schema()\n # Remove extraneous fields.\n reduced_schema = schema\n if \"title\" in reduced_schema:\n del reduced_schema[\"title\"]\n if \"type\" in reduced_schema:\n del reduced_schema[\"type\"]\n # Ensure json in context is well-formed with double quotes.\n schema_str = json.dumps(reduced_schema)\n return PYDANTIC_FORMAT_INSTRUCTIONS.format(schema=schema_str)\n @property\n def _type(self) -> str:", "source": "https://python.langchain.com/en/latest/_modules/langchain/output_parsers/pydantic.html"}
+{"id": "ca6a23bf3350-1", "text": "@property\n def _type(self) -> str:\n return \"pydantic\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/output_parsers/pydantic.html"}
+{"id": "5370a30a3424-0", "text": "Source code for langchain.output_parsers.rail_parser\nfrom __future__ import annotations\nfrom typing import Any, Dict\nfrom langchain.schema import BaseOutputParser\n[docs]class GuardrailsOutputParser(BaseOutputParser):\n guard: Any\n @property\n def _type(self) -> str:\n return \"guardrails\"\n[docs] @classmethod\n def from_rail(cls, rail_file: str, num_reasks: int = 1) -> GuardrailsOutputParser:\n try:\n from guardrails import Guard\n except ImportError:\n raise ValueError(\n \"guardrails-ai package not installed. \"\n \"Install it by running `pip install guardrails-ai`.\"\n )\n return cls(guard=Guard.from_rail(rail_file, num_reasks=num_reasks))\n[docs] @classmethod\n def from_rail_string(\n cls, rail_str: str, num_reasks: int = 1\n ) -> GuardrailsOutputParser:\n try:\n from guardrails import Guard\n except ImportError:\n raise ValueError(\n \"guardrails-ai package not installed. \"\n \"Install it by running `pip install guardrails-ai`.\"\n )\n return cls(guard=Guard.from_rail_string(rail_str, num_reasks=num_reasks))\n[docs] def get_format_instructions(self) -> str:\n return self.guard.raw_prompt.format_instructions\n[docs] def parse(self, text: str) -> Dict:\n return self.guard.parse(text)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/output_parsers/rail_parser.html"}
+{"id": "8b82b3f2757f-0", "text": "Source code for langchain.output_parsers.regex\nfrom __future__ import annotations\nimport re\nfrom typing import Dict, List, Optional\nfrom langchain.schema import BaseOutputParser\n[docs]class RegexParser(BaseOutputParser):\n \"\"\"Class to parse the output into a dictionary.\"\"\"\n regex: str\n output_keys: List[str]\n default_output_key: Optional[str] = None\n @property\n def _type(self) -> str:\n \"\"\"Return the type key.\"\"\"\n return \"regex_parser\"\n[docs] def parse(self, text: str) -> Dict[str, str]:\n \"\"\"Parse the output of an LLM call.\"\"\"\n match = re.search(self.regex, text)\n if match:\n return {key: match.group(i + 1) for i, key in enumerate(self.output_keys)}\n else:\n if self.default_output_key is None:\n raise ValueError(f\"Could not parse output: {text}\")\n else:\n return {\n key: text if key == self.default_output_key else \"\"\n for key in self.output_keys\n }\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/output_parsers/regex.html"}
+{"id": "ea7943981c46-0", "text": "Source code for langchain.output_parsers.datetime\nimport random\nfrom datetime import datetime, timedelta\nfrom typing import List\nfrom langchain.schema import BaseOutputParser, OutputParserException\nfrom langchain.utils import comma_list\ndef _generate_random_datetime_strings(\n pattern: str,\n n: int = 3,\n start_date: datetime = datetime(1, 1, 1),\n end_date: datetime = datetime.now() + timedelta(days=3650),\n) -> List[str]:\n \"\"\"\n Generates n random datetime strings conforming to the\n given pattern within the specified date range.\n Pattern should be a string containing the desired format codes.\n start_date and end_date should be datetime objects representing\n the start and end of the date range.\n \"\"\"\n examples = []\n delta = end_date - start_date\n for i in range(n):\n random_delta = random.uniform(0, delta.total_seconds())\n dt = start_date + timedelta(seconds=random_delta)\n date_string = dt.strftime(pattern)\n examples.append(date_string)\n return examples\n[docs]class DatetimeOutputParser(BaseOutputParser[datetime]):\n format: str = \"%Y-%m-%dT%H:%M:%S.%fZ\"\n[docs] def get_format_instructions(self) -> str:\n examples = comma_list(_generate_random_datetime_strings(self.format))\n return f\"\"\"Write a datetime string that matches the \n following pattern: \"{self.format}\". Examples: {examples}\"\"\"\n[docs] def parse(self, response: str) -> datetime:\n try:\n return datetime.strptime(response.strip(), self.format)\n except ValueError as e:\n raise OutputParserException(\n f\"Could not parse datetime string: {response}\"\n ) from e\n @property", "source": "https://python.langchain.com/en/latest/_modules/langchain/output_parsers/datetime.html"}
+{"id": "ea7943981c46-1", "text": ") from e\n @property\n def _type(self) -> str:\n return \"datetime\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/output_parsers/datetime.html"}
+{"id": "0d268a24a711-0", "text": "Source code for langchain.output_parsers.list\nfrom __future__ import annotations\nfrom abc import abstractmethod\nfrom typing import List\nfrom langchain.schema import BaseOutputParser\n[docs]class ListOutputParser(BaseOutputParser):\n \"\"\"Class to parse the output of an LLM call to a list.\"\"\"\n @property\n def _type(self) -> str:\n return \"list\"\n[docs] @abstractmethod\n def parse(self, text: str) -> List[str]:\n \"\"\"Parse the output of an LLM call.\"\"\"\n[docs]class CommaSeparatedListOutputParser(ListOutputParser):\n \"\"\"Parse out comma separated lists.\"\"\"\n[docs] def get_format_instructions(self) -> str:\n return (\n \"Your response should be a list of comma separated values, \"\n \"eg: `foo, bar, baz`\"\n )\n[docs] def parse(self, text: str) -> List[str]:\n \"\"\"Parse the output of an LLM call.\"\"\"\n return text.strip().split(\", \")\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/output_parsers/list.html"}
+{"id": "5359cc1d3772-0", "text": "Source code for langchain.output_parsers.structured\nfrom __future__ import annotations\nfrom typing import Any, List\nfrom pydantic import BaseModel\nfrom langchain.output_parsers.format_instructions import STRUCTURED_FORMAT_INSTRUCTIONS\nfrom langchain.output_parsers.json import parse_and_check_json_markdown\nfrom langchain.schema import BaseOutputParser\nline_template = '\\t\"{name}\": {type} // {description}'\n[docs]class ResponseSchema(BaseModel):\n name: str\n description: str\n type: str = \"string\"\ndef _get_sub_string(schema: ResponseSchema) -> str:\n return line_template.format(\n name=schema.name, description=schema.description, type=schema.type\n )\n[docs]class StructuredOutputParser(BaseOutputParser):\n response_schemas: List[ResponseSchema]\n[docs] @classmethod\n def from_response_schemas(\n cls, response_schemas: List[ResponseSchema]\n ) -> StructuredOutputParser:\n return cls(response_schemas=response_schemas)\n[docs] def get_format_instructions(self) -> str:\n schema_str = \"\\n\".join(\n [_get_sub_string(schema) for schema in self.response_schemas]\n )\n return STRUCTURED_FORMAT_INSTRUCTIONS.format(format=schema_str)\n[docs] def parse(self, text: str) -> Any:\n expected_keys = [rs.name for rs in self.response_schemas]\n return parse_and_check_json_markdown(text, expected_keys)\n @property\n def _type(self) -> str:\n return \"structured\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/output_parsers/structured.html"}
+{"id": "b8734bee59e7-0", "text": "Source code for langchain.output_parsers.regex_dict\nfrom __future__ import annotations\nimport re\nfrom typing import Dict, Optional\nfrom langchain.schema import BaseOutputParser\n[docs]class RegexDictParser(BaseOutputParser):\n \"\"\"Class to parse the output into a dictionary.\"\"\"\n regex_pattern: str = r\"{}:\\s?([^.'\\n']*)\\.?\" # : :meta private:\n output_key_to_format: Dict[str, str]\n no_update_value: Optional[str] = None\n @property\n def _type(self) -> str:\n \"\"\"Return the type key.\"\"\"\n return \"regex_dict_parser\"\n[docs] def parse(self, text: str) -> Dict[str, str]:\n \"\"\"Parse the output of an LLM call.\"\"\"\n result = {}\n for output_key, expected_format in self.output_key_to_format.items():\n specific_regex = self.regex_pattern.format(re.escape(expected_format))\n matches = re.findall(specific_regex, text)\n if not matches:\n raise ValueError(\n f\"No match found for output key: {output_key} with expected format \\\n {expected_format} on text {text}\"\n )\n elif len(matches) > 1:\n raise ValueError(\n f\"Multiple matches found for output key: {output_key} with \\\n expected format {expected_format} on text {text}\"\n )\n elif (\n self.no_update_value is not None and matches[0] == self.no_update_value\n ):\n continue\n else:\n result[output_key] = matches[0]\n return result\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/output_parsers/regex_dict.html"}
+{"id": "35eec0c5b588-0", "text": "Source code for langchain.output_parsers.fix\nfrom __future__ import annotations\nfrom typing import TypeVar\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains.llm import LLMChain\nfrom langchain.output_parsers.prompts import NAIVE_FIX_PROMPT\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.schema import BaseOutputParser, OutputParserException\nT = TypeVar(\"T\")\n[docs]class OutputFixingParser(BaseOutputParser[T]):\n \"\"\"Wraps a parser and tries to fix parsing errors.\"\"\"\n parser: BaseOutputParser[T]\n retry_chain: LLMChain\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n parser: BaseOutputParser[T],\n prompt: BasePromptTemplate = NAIVE_FIX_PROMPT,\n ) -> OutputFixingParser[T]:\n chain = LLMChain(llm=llm, prompt=prompt)\n return cls(parser=parser, retry_chain=chain)\n[docs] def parse(self, completion: str) -> T:\n try:\n parsed_completion = self.parser.parse(completion)\n except OutputParserException as e:\n new_completion = self.retry_chain.run(\n instructions=self.parser.get_format_instructions(),\n completion=completion,\n error=repr(e),\n )\n parsed_completion = self.parser.parse(new_completion)\n return parsed_completion\n[docs] def get_format_instructions(self) -> str:\n return self.parser.get_format_instructions()\n @property\n def _type(self) -> str:\n return \"output_fixing\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/output_parsers/fix.html"}
+{"id": "af0860406392-0", "text": "Source code for langchain.embeddings.llamacpp\n\"\"\"Wrapper around llama.cpp embedding models.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, Field, root_validator\nfrom langchain.embeddings.base import Embeddings\n[docs]class LlamaCppEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around llama.cpp embedding models.\n To use, you should have the llama-cpp-python library installed, and provide the\n path to the Llama model as a named parameter to the constructor.\n Check out: https://github.com/abetlen/llama-cpp-python\n Example:\n .. code-block:: python\n from langchain.embeddings import LlamaCppEmbeddings\n llama = LlamaCppEmbeddings(model_path=\"/path/to/model.bin\")\n \"\"\"\n client: Any #: :meta private:\n model_path: str\n n_ctx: int = Field(512, alias=\"n_ctx\")\n \"\"\"Token context window.\"\"\"\n n_parts: int = Field(-1, alias=\"n_parts\")\n \"\"\"Number of parts to split the model into. \n If -1, the number of parts is automatically determined.\"\"\"\n seed: int = Field(-1, alias=\"seed\")\n \"\"\"Seed. If -1, a random seed is used.\"\"\"\n f16_kv: bool = Field(False, alias=\"f16_kv\")\n \"\"\"Use half-precision for key/value cache.\"\"\"\n logits_all: bool = Field(False, alias=\"logits_all\")\n \"\"\"Return logits for all tokens, not just the last token.\"\"\"\n vocab_only: bool = Field(False, alias=\"vocab_only\")\n \"\"\"Only load the vocabulary, no weights.\"\"\"\n use_mlock: bool = Field(False, alias=\"use_mlock\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/llamacpp.html"}
+{"id": "af0860406392-1", "text": "use_mlock: bool = Field(False, alias=\"use_mlock\")\n \"\"\"Force system to keep model in RAM.\"\"\"\n n_threads: Optional[int] = Field(None, alias=\"n_threads\")\n \"\"\"Number of threads to use. If None, the number \n of threads is automatically determined.\"\"\"\n n_batch: Optional[int] = Field(8, alias=\"n_batch\")\n \"\"\"Number of tokens to process in parallel.\n Should be a number between 1 and n_ctx.\"\"\"\n n_gpu_layers: Optional[int] = Field(None, alias=\"n_gpu_layers\")\n \"\"\"Number of layers to be loaded into gpu memory. Default None.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that llama-cpp-python library is installed.\"\"\"\n model_path = values[\"model_path\"]\n model_param_names = [\n \"n_ctx\",\n \"n_parts\",\n \"seed\",\n \"f16_kv\",\n \"logits_all\",\n \"vocab_only\",\n \"use_mlock\",\n \"n_threads\",\n \"n_batch\",\n ]\n model_params = {k: values[k] for k in model_param_names}\n # For backwards compatibility, only include if non-null.\n if values[\"n_gpu_layers\"] is not None:\n model_params[\"n_gpu_layers\"] = values[\"n_gpu_layers\"]\n try:\n from llama_cpp import Llama\n values[\"client\"] = Llama(model_path, embedding=True, **model_params)\n except ImportError:\n raise ModuleNotFoundError(\n \"Could not import llama-cpp-python library. \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/llamacpp.html"}
+{"id": "af0860406392-2", "text": "raise ModuleNotFoundError(\n \"Could not import llama-cpp-python library. \"\n \"Please install the llama-cpp-python library to \"\n \"use this embedding model: pip install llama-cpp-python\"\n )\n except Exception as e:\n raise ValueError(\n f\"Could not load Llama model from path: {model_path}. \"\n f\"Received error {e}\"\n )\n return values\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Embed a list of documents using the Llama model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n embeddings = [self.client.embed(text) for text in texts]\n return [list(map(float, e)) for e in embeddings]\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Embed a query using the Llama model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n embedding = self.client.embed(text)\n return list(map(float, embedding))\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/llamacpp.html"}
+{"id": "b6ecaf2575c8-0", "text": "Source code for langchain.embeddings.tensorflow_hub\n\"\"\"Wrapper around TensorflowHub embedding models.\"\"\"\nfrom typing import Any, List\nfrom pydantic import BaseModel, Extra\nfrom langchain.embeddings.base import Embeddings\nDEFAULT_MODEL_URL = \"https://tfhub.dev/google/universal-sentence-encoder-multilingual/3\"\n[docs]class TensorflowHubEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around tensorflow_hub embedding models.\n To use, you should have the ``tensorflow_text`` python package installed.\n Example:\n .. code-block:: python\n from langchain.embeddings import TensorflowHubEmbeddings\n url = \"https://tfhub.dev/google/universal-sentence-encoder-multilingual/3\"\n tf = TensorflowHubEmbeddings(model_url=url)\n \"\"\"\n embed: Any #: :meta private:\n model_url: str = DEFAULT_MODEL_URL\n \"\"\"Model name to use.\"\"\"\n def __init__(self, **kwargs: Any):\n \"\"\"Initialize the tensorflow_hub and tensorflow_text.\"\"\"\n super().__init__(**kwargs)\n try:\n import tensorflow_hub\n except ImportError:\n raise ImportError(\n \"Could not import tensorflow-hub python package. \"\n \"Please install it with `pip install tensorflow-hub``.\"\n )\n try:\n import tensorflow_text # noqa\n except ImportError:\n raise ImportError(\n \"Could not import tensorflow_text python package. \"\n \"Please install it with `pip install tensorflow_text``.\"\n )\n self.embed = tensorflow_hub.load(self.model_url)\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/tensorflow_hub.html"}
+{"id": "b6ecaf2575c8-1", "text": "\"\"\"Compute doc embeddings using a TensorflowHub embedding model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n texts = list(map(lambda x: x.replace(\"\\n\", \" \"), texts))\n embeddings = self.embed(texts).numpy()\n return embeddings.tolist()\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a TensorflowHub embedding model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n text = text.replace(\"\\n\", \" \")\n embedding = self.embed([text]).numpy()[0]\n return embedding.tolist()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/tensorflow_hub.html"}
+{"id": "7b11115d5237-0", "text": "Source code for langchain.embeddings.elasticsearch\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, List, Optional\nfrom langchain.utils import get_from_env\nif TYPE_CHECKING:\n from elasticsearch import Elasticsearch\n from elasticsearch.client import MlClient\nfrom langchain.embeddings.base import Embeddings\n[docs]class ElasticsearchEmbeddings(Embeddings):\n \"\"\"\n Wrapper around Elasticsearch embedding models.\n This class provides an interface to generate embeddings using a model deployed\n in an Elasticsearch cluster. It requires an Elasticsearch connection object\n and the model_id of the model deployed in the cluster.\n In Elasticsearch you need to have an embedding model loaded and deployed.\n - https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html\n - https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html\n \"\"\" # noqa: E501\n def __init__(\n self,\n client: MlClient,\n model_id: str,\n *,\n input_field: str = \"text_field\",\n ):\n \"\"\"\n Initialize the ElasticsearchEmbeddings instance.\n Args:\n client (MlClient): An Elasticsearch ML client object.\n model_id (str): The model_id of the model deployed in the Elasticsearch\n cluster.\n input_field (str): The name of the key for the input text field in the\n document. Defaults to 'text_field'.\n \"\"\"\n self.client = client\n self.model_id = model_id\n self.input_field = input_field\n[docs] @classmethod\n def from_credentials(\n cls,\n model_id: str,\n *,\n es_cloud_id: Optional[str] = None,\n es_user: Optional[str] = None,", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/elasticsearch.html"}
+{"id": "7b11115d5237-1", "text": "es_user: Optional[str] = None,\n es_password: Optional[str] = None,\n input_field: str = \"text_field\",\n ) -> ElasticsearchEmbeddings:\n \"\"\"Instantiate embeddings from Elasticsearch credentials.\n Args:\n model_id (str): The model_id of the model deployed in the Elasticsearch\n cluster.\n input_field (str): The name of the key for the input text field in the\n document. Defaults to 'text_field'.\n es_cloud_id: (str, optional): The Elasticsearch cloud ID to connect to.\n es_user: (str, optional): Elasticsearch username.\n es_password: (str, optional): Elasticsearch password.\n Example:\n .. code-block:: python\n from langchain.embeddings import ElasticsearchEmbeddings\n # Define the model ID and input field name (if different from default)\n model_id = \"your_model_id\"\n # Optional, only if different from 'text_field'\n input_field = \"your_input_field\"\n # Credentials can be passed in two ways. Either set the env vars\n # ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically\n # pulled in, or pass them in directly as kwargs.\n embeddings = ElasticsearchEmbeddings.from_credentials(\n model_id,\n input_field=input_field,\n # es_cloud_id=\"foo\",\n # es_user=\"bar\",\n # es_password=\"baz\",\n )\n documents = [\n \"This is an example document.\",\n \"Another example document to generate embeddings for.\",\n ]\n embeddings_generator.embed_documents(documents)\n \"\"\"\n try:\n from elasticsearch import Elasticsearch\n from elasticsearch.client import MlClient\n except ImportError:\n raise ImportError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/elasticsearch.html"}
+{"id": "7b11115d5237-2", "text": "from elasticsearch.client import MlClient\n except ImportError:\n raise ImportError(\n \"elasticsearch package not found, please install with 'pip install \"\n \"elasticsearch'\"\n )\n es_cloud_id = es_cloud_id or get_from_env(\"es_cloud_id\", \"ES_CLOUD_ID\")\n es_user = es_user or get_from_env(\"es_user\", \"ES_USER\")\n es_password = es_password or get_from_env(\"es_password\", \"ES_PASSWORD\")\n # Connect to Elasticsearch\n es_connection = Elasticsearch(\n cloud_id=es_cloud_id, basic_auth=(es_user, es_password)\n )\n client = MlClient(es_connection)\n return cls(client, model_id, input_field=input_field)\n[docs] @classmethod\n def from_es_connection(\n cls,\n model_id: str,\n es_connection: Elasticsearch,\n input_field: str = \"text_field\",\n ) -> ElasticsearchEmbeddings:\n \"\"\"\n Instantiate embeddings from an existing Elasticsearch connection.\n This method provides a way to create an instance of the ElasticsearchEmbeddings\n class using an existing Elasticsearch connection. The connection object is used\n to create an MlClient, which is then used to initialize the\n ElasticsearchEmbeddings instance.\n Args:\n model_id (str): The model_id of the model deployed in the Elasticsearch cluster.\n es_connection (elasticsearch.Elasticsearch): An existing Elasticsearch\n connection object. input_field (str, optional): The name of the key for the\n input text field in the document. Defaults to 'text_field'.\n Returns:\n ElasticsearchEmbeddings: An instance of the ElasticsearchEmbeddings class.\n Example:\n .. code-block:: python\n from elasticsearch import Elasticsearch", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/elasticsearch.html"}
+{"id": "7b11115d5237-3", "text": "Example:\n .. code-block:: python\n from elasticsearch import Elasticsearch\n from langchain.embeddings import ElasticsearchEmbeddings\n # Define the model ID and input field name (if different from default)\n model_id = \"your_model_id\"\n # Optional, only if different from 'text_field'\n input_field = \"your_input_field\"\n # Create Elasticsearch connection\n es_connection = Elasticsearch(\n hosts=[\"localhost:9200\"], http_auth=(\"user\", \"password\")\n )\n # Instantiate ElasticsearchEmbeddings using the existing connection\n embeddings = ElasticsearchEmbeddings.from_es_connection(\n model_id,\n es_connection,\n input_field=input_field,\n )\n documents = [\n \"This is an example document.\",\n \"Another example document to generate embeddings for.\",\n ]\n embeddings_generator.embed_documents(documents)\n \"\"\"\n # Importing MlClient from elasticsearch.client within the method to\n # avoid unnecessary import if the method is not used\n from elasticsearch.client import MlClient\n # Create an MlClient from the given Elasticsearch connection\n client = MlClient(es_connection)\n # Return a new instance of the ElasticsearchEmbeddings class with\n # the MlClient, model_id, and input_field\n return cls(client, model_id, input_field=input_field)\n def _embedding_func(self, texts: List[str]) -> List[List[float]]:\n \"\"\"\n Generate embeddings for the given texts using the Elasticsearch model.\n Args:\n texts (List[str]): A list of text strings to generate embeddings for.\n Returns:\n List[List[float]]: A list of embeddings, one for each text in the input\n list.\n \"\"\"\n response = self.client.infer_trained_model(", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/elasticsearch.html"}
+{"id": "7b11115d5237-4", "text": "list.\n \"\"\"\n response = self.client.infer_trained_model(\n model_id=self.model_id, docs=[{self.input_field: text} for text in texts]\n )\n embeddings = [doc[\"predicted_value\"] for doc in response[\"inference_results\"]]\n return embeddings\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"\n Generate embeddings for a list of documents.\n Args:\n texts (List[str]): A list of document text strings to generate embeddings\n for.\n Returns:\n List[List[float]]: A list of embeddings, one for each document in the input\n list.\n \"\"\"\n return self._embedding_func(texts)\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"\n Generate an embedding for a single query text.\n Args:\n text (str): The query text to generate an embedding for.\n Returns:\n List[float]: The embedding for the input query text.\n \"\"\"\n return self._embedding_func([text])[0]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/elasticsearch.html"}
+{"id": "47e5eeb4b1e9-0", "text": "Source code for langchain.embeddings.bedrock\nimport json\nimport os\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\n[docs]class BedrockEmbeddings(BaseModel, Embeddings):\n \"\"\"Embeddings provider to invoke Bedrock embedding models.\n To authenticate, the AWS client uses the following methods to\n automatically load credentials:\n https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n If a specific credential profile should be used, you must pass\n the name of the profile from the ~/.aws/credentials file that is to be used.\n Make sure the credentials / roles used have the required policies to\n access the Bedrock service.\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n from langchain.bedrock_embeddings import BedrockEmbeddings\n \n region_name =\"us-east-1\"\n credentials_profile_name = \"default\"\n model_id = \"amazon.titan-e1t-medium\"\n be = BedrockEmbeddings(\n credentials_profile_name=credentials_profile_name,\n region_name=region_name,\n model_id=model_id\n )\n \"\"\"\n client: Any #: :meta private:\n region_name: Optional[str] = None\n \"\"\"The aws region e.g., `us-west-2`. Fallsback to AWS_DEFAULT_REGION env variable\n or region specified in ~/.aws/config in case it is not provided here.\n \"\"\"\n credentials_profile_name: Optional[str] = None\n \"\"\"The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\n has either access keys or role information specified.\n If not specified, the default credential profile or, if on an EC2 instance,", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/bedrock.html"}
+{"id": "47e5eeb4b1e9-1", "text": "If not specified, the default credential profile or, if on an EC2 instance,\n credentials from IMDS will be used.\n See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n \"\"\"\n model_id: str = \"amazon.titan-e1t-medium\"\n \"\"\"Id of the model to call, e.g., amazon.titan-e1t-medium, this is\n equivalent to the modelId property in the list-foundation-models api\"\"\"\n model_kwargs: Optional[Dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that AWS credentials to and python package exists in environment.\"\"\"\n if values[\"client\"] is not None:\n return values\n try:\n import boto3\n if values[\"credentials_profile_name\"] is not None:\n session = boto3.Session(profile_name=values[\"credentials_profile_name\"])\n else:\n # use default credentials\n session = boto3.Session()\n client_params = {}\n if values[\"region_name\"]:\n client_params[\"region_name\"] = values[\"region_name\"]\n values[\"client\"] = session.client(\"bedrock\", **client_params)\n except ImportError:\n raise ModuleNotFoundError(\n \"Could not import boto3 python package. \"\n \"Please install it with `pip install boto3`.\"\n )\n except Exception as e:\n raise ValueError(\n \"Could not load credentials to authenticate with AWS client. \"\n \"Please check that credentials in the specified \"\n \"profile name are valid.\"\n ) from e\n return values", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/bedrock.html"}
+{"id": "47e5eeb4b1e9-2", "text": "\"profile name are valid.\"\n ) from e\n return values\n def _embedding_func(self, text: str) -> List[float]:\n \"\"\"Call out to Bedrock embedding endpoint.\"\"\"\n # replace newlines, which can negatively affect performance.\n text = text.replace(os.linesep, \" \")\n _model_kwargs = self.model_kwargs or {}\n input_body = {**_model_kwargs}\n input_body[\"inputText\"] = text\n body = json.dumps(input_body)\n content_type = \"application/json\"\n accepts = \"application/json\"\n embeddings = []\n try:\n response = self.client.invoke_model(\n body=body,\n modelId=self.model_id,\n accept=accepts,\n contentType=content_type,\n )\n response_body = json.loads(response.get(\"body\").read())\n embeddings = response_body.get(\"embedding\")\n except Exception as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n return embeddings\n[docs] def embed_documents(\n self, texts: List[str], chunk_size: int = 1\n ) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a Bedrock model.\n Args:\n texts: The list of texts to embed.\n chunk_size: Bedrock currently only allows single string\n inputs, so chunk size is always 1. This input is here\n only for compatibility with the embeddings interface.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n results = []\n for text in texts:\n response = self._embedding_func(text)\n results.append(response)\n return results\n[docs] def embed_query(self, text: str) -> List[float]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/bedrock.html"}
+{"id": "47e5eeb4b1e9-3", "text": "[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a Bedrock model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n return self._embedding_func(text)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/bedrock.html"}
+{"id": "38c1e1d83a2e-0", "text": "Source code for langchain.embeddings.self_hosted_hugging_face\n\"\"\"Wrapper around HuggingFace embedding models for self-hosted remote hardware.\"\"\"\nimport importlib\nimport logging\nfrom typing import Any, Callable, List, Optional\nfrom langchain.embeddings.self_hosted import SelfHostedEmbeddings\nDEFAULT_MODEL_NAME = \"sentence-transformers/all-mpnet-base-v2\"\nDEFAULT_INSTRUCT_MODEL = \"hkunlp/instructor-large\"\nDEFAULT_EMBED_INSTRUCTION = \"Represent the document for retrieval: \"\nDEFAULT_QUERY_INSTRUCTION = (\n \"Represent the question for retrieving supporting documents: \"\n)\nlogger = logging.getLogger(__name__)\ndef _embed_documents(client: Any, *args: Any, **kwargs: Any) -> List[List[float]]:\n \"\"\"Inference function to send to the remote hardware.\n Accepts a sentence_transformer model_id and\n returns a list of embeddings for each document in the batch.\n \"\"\"\n return client.encode(*args, **kwargs)\ndef load_embedding_model(model_id: str, instruct: bool = False, device: int = 0) -> Any:\n \"\"\"Load the embedding model.\"\"\"\n if not instruct:\n import sentence_transformers\n client = sentence_transformers.SentenceTransformer(model_id)\n else:\n from InstructorEmbedding import INSTRUCTOR\n client = INSTRUCTOR(model_id)\n if importlib.util.find_spec(\"torch\") is not None:\n import torch\n cuda_device_count = torch.cuda.device_count()\n if device < -1 or (device >= cuda_device_count):\n raise ValueError(\n f\"Got device=={device}, \"\n f\"device is required to be within [-1, {cuda_device_count})\"\n )\n if device < 0 and cuda_device_count > 0:\n logger.warning(", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted_hugging_face.html"}
+{"id": "38c1e1d83a2e-1", "text": "if device < 0 and cuda_device_count > 0:\n logger.warning(\n \"Device has %d GPUs available. \"\n \"Provide device={deviceId} to `from_model_id` to use available\"\n \"GPUs for execution. deviceId is -1 for CPU and \"\n \"can be a positive integer associated with CUDA device id.\",\n cuda_device_count,\n )\n client = client.to(device)\n return client\n[docs]class SelfHostedHuggingFaceEmbeddings(SelfHostedEmbeddings):\n \"\"\"Runs sentence_transformers embedding models on self-hosted remote hardware.\n Supported hardware includes auto-launched instances on AWS, GCP, Azure,\n and Lambda, as well as servers specified\n by IP address and SSH credentials (such as on-prem, or another cloud\n like Paperspace, Coreweave, etc.).\n To use, you should have the ``runhouse`` python package installed.\n Example:\n .. code-block:: python\n from langchain.embeddings import SelfHostedHuggingFaceEmbeddings\n import runhouse as rh\n model_name = \"sentence-transformers/all-mpnet-base-v2\"\n gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)\n \"\"\"\n client: Any #: :meta private:\n model_id: str = DEFAULT_MODEL_NAME\n \"\"\"Model name to use.\"\"\"\n model_reqs: List[str] = [\"./\", \"sentence_transformers\", \"torch\"]\n \"\"\"Requirements to install on hardware to inference the model.\"\"\"\n hardware: Any\n \"\"\"Remote hardware to send the inference function to.\"\"\"\n model_load_fn: Callable = load_embedding_model", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted_hugging_face.html"}
+{"id": "38c1e1d83a2e-2", "text": "model_load_fn: Callable = load_embedding_model\n \"\"\"Function to load the model remotely on the server.\"\"\"\n load_fn_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model load function.\"\"\"\n inference_fn: Callable = _embed_documents\n \"\"\"Inference function to extract the embeddings.\"\"\"\n def __init__(self, **kwargs: Any):\n \"\"\"Initialize the remote inference function.\"\"\"\n load_fn_kwargs = kwargs.pop(\"load_fn_kwargs\", {})\n load_fn_kwargs[\"model_id\"] = load_fn_kwargs.get(\"model_id\", DEFAULT_MODEL_NAME)\n load_fn_kwargs[\"instruct\"] = load_fn_kwargs.get(\"instruct\", False)\n load_fn_kwargs[\"device\"] = load_fn_kwargs.get(\"device\", 0)\n super().__init__(load_fn_kwargs=load_fn_kwargs, **kwargs)\n[docs]class SelfHostedHuggingFaceInstructEmbeddings(SelfHostedHuggingFaceEmbeddings):\n \"\"\"Runs InstructorEmbedding embedding models on self-hosted remote hardware.\n Supported hardware includes auto-launched instances on AWS, GCP, Azure,\n and Lambda, as well as servers specified\n by IP address and SSH credentials (such as on-prem, or another\n cloud like Paperspace, Coreweave, etc.).\n To use, you should have the ``runhouse`` python package installed.\n Example:\n .. code-block:: python\n from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings\n import runhouse as rh\n model_name = \"hkunlp/instructor-large\"\n gpu = rh.cluster(name='rh-a10x', instance_type='A100:1')\n hf = SelfHostedHuggingFaceInstructEmbeddings(\n model_name=model_name, hardware=gpu)\n \"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted_hugging_face.html"}
+{"id": "38c1e1d83a2e-3", "text": "model_name=model_name, hardware=gpu)\n \"\"\"\n model_id: str = DEFAULT_INSTRUCT_MODEL\n \"\"\"Model name to use.\"\"\"\n embed_instruction: str = DEFAULT_EMBED_INSTRUCTION\n \"\"\"Instruction to use for embedding documents.\"\"\"\n query_instruction: str = DEFAULT_QUERY_INSTRUCTION\n \"\"\"Instruction to use for embedding query.\"\"\"\n model_reqs: List[str] = [\"./\", \"InstructorEmbedding\", \"torch\"]\n \"\"\"Requirements to install on hardware to inference the model.\"\"\"\n def __init__(self, **kwargs: Any):\n \"\"\"Initialize the remote inference function.\"\"\"\n load_fn_kwargs = kwargs.pop(\"load_fn_kwargs\", {})\n load_fn_kwargs[\"model_id\"] = load_fn_kwargs.get(\n \"model_id\", DEFAULT_INSTRUCT_MODEL\n )\n load_fn_kwargs[\"instruct\"] = load_fn_kwargs.get(\"instruct\", True)\n load_fn_kwargs[\"device\"] = load_fn_kwargs.get(\"device\", 0)\n super().__init__(load_fn_kwargs=load_fn_kwargs, **kwargs)\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a HuggingFace instruct model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n instruction_pairs = []\n for text in texts:\n instruction_pairs.append([self.embed_instruction, text])\n embeddings = self.client(self.pipeline_ref, instruction_pairs)\n return embeddings.tolist()\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a HuggingFace instruct model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted_hugging_face.html"}
+{"id": "38c1e1d83a2e-4", "text": "Returns:\n Embeddings for the text.\n \"\"\"\n instruction_pair = [self.query_instruction, text]\n embedding = self.client(self.pipeline_ref, [instruction_pair])[0]\n return embedding.tolist()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted_hugging_face.html"}
+{"id": "64ef7e8ab507-0", "text": "Source code for langchain.embeddings.huggingface_hub\n\"\"\"Wrapper around HuggingFace Hub embedding models.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nDEFAULT_REPO_ID = \"sentence-transformers/all-mpnet-base-v2\"\nVALID_TASKS = (\"feature-extraction\",)\n[docs]class HuggingFaceHubEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around HuggingFaceHub embedding models.\n To use, you should have the ``huggingface_hub`` python package installed, and the\n environment variable ``HUGGINGFACEHUB_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.embeddings import HuggingFaceHubEmbeddings\n repo_id = \"sentence-transformers/all-mpnet-base-v2\"\n hf = HuggingFaceHubEmbeddings(\n repo_id=repo_id,\n task=\"feature-extraction\",\n huggingfacehub_api_token=\"my-api-key\",\n )\n \"\"\"\n client: Any #: :meta private:\n repo_id: str = DEFAULT_REPO_ID\n \"\"\"Model name to use.\"\"\"\n task: Optional[str] = \"feature-extraction\"\n \"\"\"Task to call the model with.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n huggingfacehub_api_token: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface_hub.html"}
+{"id": "64ef7e8ab507-1", "text": "@root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n huggingfacehub_api_token = get_from_dict_or_env(\n values, \"huggingfacehub_api_token\", \"HUGGINGFACEHUB_API_TOKEN\"\n )\n try:\n from huggingface_hub.inference_api import InferenceApi\n repo_id = values[\"repo_id\"]\n if not repo_id.startswith(\"sentence-transformers\"):\n raise ValueError(\n \"Currently only 'sentence-transformers' embedding models \"\n f\"are supported. Got invalid 'repo_id' {repo_id}.\"\n )\n client = InferenceApi(\n repo_id=repo_id,\n token=huggingfacehub_api_token,\n task=values.get(\"task\"),\n )\n if client.task not in VALID_TASKS:\n raise ValueError(\n f\"Got invalid task {client.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n values[\"client\"] = client\n except ImportError:\n raise ValueError(\n \"Could not import huggingface_hub python package. \"\n \"Please install it with `pip install huggingface_hub`.\"\n )\n return values\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to HuggingFaceHub's embedding endpoint for embedding search docs.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n # replace newlines, which can negatively affect performance.\n texts = [text.replace(\"\\n\", \" \") for text in texts]", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface_hub.html"}
+{"id": "64ef7e8ab507-2", "text": "texts = [text.replace(\"\\n\", \" \") for text in texts]\n _model_kwargs = self.model_kwargs or {}\n responses = self.client(inputs=texts, params=_model_kwargs)\n return responses\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to HuggingFaceHub's embedding endpoint for embedding query text.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n response = self.embed_documents([text])[0]\n return response\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface_hub.html"}
+{"id": "094599aa4c01-0", "text": "Source code for langchain.embeddings.aleph_alpha\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\n[docs]class AlephAlphaAsymmetricSemanticEmbedding(BaseModel, Embeddings):\n \"\"\"\n Wrapper for Aleph Alpha's Asymmetric Embeddings\n AA provides you with an endpoint to embed a document and a query.\n The models were optimized to make the embeddings of documents and\n the query for a document as similar as possible.\n To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/\n Example:\n .. code-block:: python\n from aleph_alpha import AlephAlphaAsymmetricSemanticEmbedding\n embeddings = AlephAlphaSymmetricSemanticEmbedding()\n document = \"This is a content of the document\"\n query = \"What is the content of the document?\"\n doc_result = embeddings.embed_documents([document])\n query_result = embeddings.embed_query(query)\n \"\"\"\n client: Any #: :meta private:\n model: Optional[str] = \"luminous-base\"\n \"\"\"Model name to use.\"\"\"\n hosting: Optional[str] = \"https://api.aleph-alpha.com\"\n \"\"\"Optional parameter that specifies which datacenters may process the request.\"\"\"\n normalize: Optional[bool] = True\n \"\"\"Should returned embeddings be normalized\"\"\"\n compress_to_size: Optional[int] = 128\n \"\"\"Should the returned embeddings come back as an original 5120-dim vector, \n or should it be compressed to 128-dim.\"\"\"\n contextual_control_threshold: Optional[int] = None\n \"\"\"Attention control parameters only apply to those tokens that have", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/aleph_alpha.html"}
+{"id": "094599aa4c01-1", "text": "\"\"\"Attention control parameters only apply to those tokens that have \n explicitly been set in the request.\"\"\"\n control_log_additive: Optional[bool] = True\n \"\"\"Apply controls on prompt items by adding the log(control_factor) \n to attention scores.\"\"\"\n aleph_alpha_api_key: Optional[str] = None\n \"\"\"API key for Aleph Alpha API.\"\"\"\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n aleph_alpha_api_key = get_from_dict_or_env(\n values, \"aleph_alpha_api_key\", \"ALEPH_ALPHA_API_KEY\"\n )\n try:\n from aleph_alpha_client import Client\n except ImportError:\n raise ValueError(\n \"Could not import aleph_alpha_client python package. \"\n \"Please install it with `pip install aleph_alpha_client`.\"\n )\n values[\"client\"] = Client(token=aleph_alpha_api_key)\n return values\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to Aleph Alpha's asymmetric Document endpoint.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n try:\n from aleph_alpha_client import (\n Prompt,\n SemanticEmbeddingRequest,\n SemanticRepresentation,\n )\n except ImportError:\n raise ValueError(\n \"Could not import aleph_alpha_client python package. \"\n \"Please install it with `pip install aleph_alpha_client`.\"\n )\n document_embeddings = []\n for text in texts:\n document_params = {\n \"prompt\": Prompt.from_text(text),", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/aleph_alpha.html"}
+{"id": "094599aa4c01-2", "text": "document_params = {\n \"prompt\": Prompt.from_text(text),\n \"representation\": SemanticRepresentation.Document,\n \"compress_to_size\": self.compress_to_size,\n \"normalize\": self.normalize,\n \"contextual_control_threshold\": self.contextual_control_threshold,\n \"control_log_additive\": self.control_log_additive,\n }\n document_request = SemanticEmbeddingRequest(**document_params)\n document_response = self.client.semantic_embed(\n request=document_request, model=self.model\n )\n document_embeddings.append(document_response.embedding)\n return document_embeddings\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to Aleph Alpha's asymmetric, query embedding endpoint\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n try:\n from aleph_alpha_client import (\n Prompt,\n SemanticEmbeddingRequest,\n SemanticRepresentation,\n )\n except ImportError:\n raise ValueError(\n \"Could not import aleph_alpha_client python package. \"\n \"Please install it with `pip install aleph_alpha_client`.\"\n )\n symmetric_params = {\n \"prompt\": Prompt.from_text(text),\n \"representation\": SemanticRepresentation.Query,\n \"compress_to_size\": self.compress_to_size,\n \"normalize\": self.normalize,\n \"contextual_control_threshold\": self.contextual_control_threshold,\n \"control_log_additive\": self.control_log_additive,\n }\n symmetric_request = SemanticEmbeddingRequest(**symmetric_params)\n symmetric_response = self.client.semantic_embed(\n request=symmetric_request, model=self.model\n )\n return symmetric_response.embedding", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/aleph_alpha.html"}
+{"id": "094599aa4c01-3", "text": "request=symmetric_request, model=self.model\n )\n return symmetric_response.embedding\n[docs]class AlephAlphaSymmetricSemanticEmbedding(AlephAlphaAsymmetricSemanticEmbedding):\n \"\"\"The symmetric version of the Aleph Alpha's semantic embeddings.\n The main difference is that here, both the documents and\n queries are embedded with a SemanticRepresentation.Symmetric\n Example:\n .. code-block:: python\n from aleph_alpha import AlephAlphaSymmetricSemanticEmbedding\n embeddings = AlephAlphaAsymmetricSemanticEmbedding()\n text = \"This is a test text\"\n doc_result = embeddings.embed_documents([text])\n query_result = embeddings.embed_query(text)\n \"\"\"\n def _embed(self, text: str) -> List[float]:\n try:\n from aleph_alpha_client import (\n Prompt,\n SemanticEmbeddingRequest,\n SemanticRepresentation,\n )\n except ImportError:\n raise ValueError(\n \"Could not import aleph_alpha_client python package. \"\n \"Please install it with `pip install aleph_alpha_client`.\"\n )\n query_params = {\n \"prompt\": Prompt.from_text(text),\n \"representation\": SemanticRepresentation.Symmetric,\n \"compress_to_size\": self.compress_to_size,\n \"normalize\": self.normalize,\n \"contextual_control_threshold\": self.contextual_control_threshold,\n \"control_log_additive\": self.control_log_additive,\n }\n query_request = SemanticEmbeddingRequest(**query_params)\n query_response = self.client.semantic_embed(\n request=query_request, model=self.model\n )\n return query_response.embedding\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to Aleph Alpha's Document endpoint.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/aleph_alpha.html"}
+{"id": "094599aa4c01-4", "text": "\"\"\"Call out to Aleph Alpha's Document endpoint.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n document_embeddings = []\n for text in texts:\n document_embeddings.append(self._embed(text))\n return document_embeddings\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to Aleph Alpha's asymmetric, query embedding endpoint\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n return self._embed(text)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/aleph_alpha.html"}
+{"id": "eedc1f895dcc-0", "text": "Source code for langchain.embeddings.fake\nfrom typing import List\nimport numpy as np\nfrom pydantic import BaseModel\nfrom langchain.embeddings.base import Embeddings\n[docs]class FakeEmbeddings(Embeddings, BaseModel):\n size: int\n def _get_embedding(self) -> List[float]:\n return list(np.random.normal(size=self.size))\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n return [self._get_embedding() for _ in texts]\n[docs] def embed_query(self, text: str) -> List[float]:\n return self._get_embedding()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/fake.html"}
+{"id": "1533f89840fa-0", "text": "Source code for langchain.embeddings.cohere\n\"\"\"Wrapper around Cohere embedding models.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\n[docs]class CohereEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around Cohere embedding models.\n To use, you should have the ``cohere`` python package installed, and the\n environment variable ``COHERE_API_KEY`` set with your API key or pass it\n as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.embeddings import CohereEmbeddings\n cohere = CohereEmbeddings(\n model=\"embed-english-light-v2.0\", cohere_api_key=\"my-api-key\"\n )\n \"\"\"\n client: Any #: :meta private:\n model: str = \"embed-english-v2.0\"\n \"\"\"Model name to use.\"\"\"\n truncate: Optional[str] = None\n \"\"\"Truncate embeddings that are too long from start or end (\"NONE\"|\"START\"|\"END\")\"\"\"\n cohere_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n cohere_api_key = get_from_dict_or_env(\n values, \"cohere_api_key\", \"COHERE_API_KEY\"\n )\n try:\n import cohere\n values[\"client\"] = cohere.Client(cohere_api_key)\n except ImportError:\n raise ValueError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/cohere.html"}
+{"id": "1533f89840fa-1", "text": "except ImportError:\n raise ValueError(\n \"Could not import cohere python package. \"\n \"Please install it with `pip install cohere`.\"\n )\n return values\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to Cohere's embedding endpoint.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n embeddings = self.client.embed(\n model=self.model, texts=texts, truncate=self.truncate\n ).embeddings\n return [list(map(float, e)) for e in embeddings]\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to Cohere's embedding endpoint.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n embedding = self.client.embed(\n model=self.model, texts=[text], truncate=self.truncate\n ).embeddings[0]\n return list(map(float, embedding))\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/cohere.html"}
+{"id": "00bc5482a026-0", "text": "Source code for langchain.embeddings.modelscope_hub\n\"\"\"Wrapper around ModelScopeHub embedding models.\"\"\"\nfrom typing import Any, List\nfrom pydantic import BaseModel, Extra\nfrom langchain.embeddings.base import Embeddings\n[docs]class ModelScopeEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around modelscope_hub embedding models.\n To use, you should have the ``modelscope`` python package installed.\n Example:\n .. code-block:: python\n from langchain.embeddings import ModelScopeEmbeddings\n model_id = \"damo/nlp_corom_sentence-embedding_english-base\"\n embed = ModelScopeEmbeddings(model_id=model_id)\n \"\"\"\n embed: Any\n model_id: str = \"damo/nlp_corom_sentence-embedding_english-base\"\n \"\"\"Model name to use.\"\"\"\n def __init__(self, **kwargs: Any):\n \"\"\"Initialize the modelscope\"\"\"\n super().__init__(**kwargs)\n try:\n from modelscope.pipelines import pipeline\n from modelscope.utils.constant import Tasks\n self.embed = pipeline(Tasks.sentence_embedding, model=self.model_id)\n except ImportError as e:\n raise ImportError(\n \"Could not import some python packages.\"\n \"Please install it with `pip install modelscope`.\"\n ) from e\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a modelscope embedding model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n texts = list(map(lambda x: x.replace(\"\\n\", \" \"), texts))", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/modelscope_hub.html"}
+{"id": "00bc5482a026-1", "text": "texts = list(map(lambda x: x.replace(\"\\n\", \" \"), texts))\n inputs = {\"source_sentence\": texts}\n embeddings = self.embed(input=inputs)[\"text_embedding\"]\n return embeddings.tolist()\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a modelscope embedding model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n text = text.replace(\"\\n\", \" \")\n inputs = {\"source_sentence\": [text]}\n embedding = self.embed(input=inputs)[\"text_embedding\"][0]\n return embedding.tolist()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/modelscope_hub.html"}
+{"id": "682a003d0ef2-0", "text": "Source code for langchain.embeddings.huggingface\n\"\"\"Wrapper around HuggingFace embedding models.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, Field\nfrom langchain.embeddings.base import Embeddings\nDEFAULT_MODEL_NAME = \"sentence-transformers/all-mpnet-base-v2\"\nDEFAULT_INSTRUCT_MODEL = \"hkunlp/instructor-large\"\nDEFAULT_EMBED_INSTRUCTION = \"Represent the document for retrieval: \"\nDEFAULT_QUERY_INSTRUCTION = (\n \"Represent the question for retrieving supporting documents: \"\n)\n[docs]class HuggingFaceEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around sentence_transformers embedding models.\n To use, you should have the ``sentence_transformers`` python package installed.\n Example:\n .. code-block:: python\n from langchain.embeddings import HuggingFaceEmbeddings\n model_name = \"sentence-transformers/all-mpnet-base-v2\"\n model_kwargs = {'device': 'cpu'}\n encode_kwargs = {'normalize_embeddings': False}\n hf = HuggingFaceEmbeddings(\n model_name=model_name,\n model_kwargs=model_kwargs,\n encode_kwargs=encode_kwargs\n )\n \"\"\"\n client: Any #: :meta private:\n model_name: str = DEFAULT_MODEL_NAME\n \"\"\"Model name to use.\"\"\"\n cache_folder: Optional[str] = None\n \"\"\"Path to store models. \n Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Key word arguments to pass to the model.\"\"\"\n encode_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Key word arguments to pass when calling the `encode` method of the model.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface.html"}
+{"id": "682a003d0ef2-1", "text": "\"\"\"Key word arguments to pass when calling the `encode` method of the model.\"\"\"\n def __init__(self, **kwargs: Any):\n \"\"\"Initialize the sentence_transformer.\"\"\"\n super().__init__(**kwargs)\n try:\n import sentence_transformers\n except ImportError as exc:\n raise ImportError(\n \"Could not import sentence_transformers python package. \"\n \"Please install it with `pip install sentence_transformers`.\"\n ) from exc\n self.client = sentence_transformers.SentenceTransformer(\n self.model_name, cache_folder=self.cache_folder, **self.model_kwargs\n )\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a HuggingFace transformer model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n texts = list(map(lambda x: x.replace(\"\\n\", \" \"), texts))\n embeddings = self.client.encode(texts, **self.encode_kwargs)\n return embeddings.tolist()\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a HuggingFace transformer model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n text = text.replace(\"\\n\", \" \")\n embedding = self.client.encode(text, **self.encode_kwargs)\n return embedding.tolist()\n[docs]class HuggingFaceInstructEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around sentence_transformers embedding models.\n To use, you should have the ``sentence_transformers``", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface.html"}
+{"id": "682a003d0ef2-2", "text": "To use, you should have the ``sentence_transformers``\n and ``InstructorEmbedding`` python packages installed.\n Example:\n .. code-block:: python\n from langchain.embeddings import HuggingFaceInstructEmbeddings\n model_name = \"hkunlp/instructor-large\"\n model_kwargs = {'device': 'cpu'}\n encode_kwargs = {'normalize_embeddings': True}\n hf = HuggingFaceInstructEmbeddings(\n model_name=model_name,\n model_kwargs=model_kwargs,\n encode_kwargs=encode_kwargs\n )\n \"\"\"\n client: Any #: :meta private:\n model_name: str = DEFAULT_INSTRUCT_MODEL\n \"\"\"Model name to use.\"\"\"\n cache_folder: Optional[str] = None\n \"\"\"Path to store models. \n Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Key word arguments to pass to the model.\"\"\"\n encode_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Key word arguments to pass when calling the `encode` method of the model.\"\"\"\n embed_instruction: str = DEFAULT_EMBED_INSTRUCTION\n \"\"\"Instruction to use for embedding documents.\"\"\"\n query_instruction: str = DEFAULT_QUERY_INSTRUCTION\n \"\"\"Instruction to use for embedding query.\"\"\"\n def __init__(self, **kwargs: Any):\n \"\"\"Initialize the sentence_transformer.\"\"\"\n super().__init__(**kwargs)\n try:\n from InstructorEmbedding import INSTRUCTOR\n self.client = INSTRUCTOR(\n self.model_name, cache_folder=self.cache_folder, **self.model_kwargs\n )\n except ImportError as e:\n raise ValueError(\"Dependencies for InstructorEmbedding not found.\") from e\n class Config:", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface.html"}
+{"id": "682a003d0ef2-3", "text": "raise ValueError(\"Dependencies for InstructorEmbedding not found.\") from e\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a HuggingFace instruct model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n instruction_pairs = [[self.embed_instruction, text] for text in texts]\n embeddings = self.client.encode(instruction_pairs, **self.encode_kwargs)\n return embeddings.tolist()\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a HuggingFace instruct model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n instruction_pair = [self.query_instruction, text]\n embedding = self.client.encode([instruction_pair], **self.encode_kwargs)[0]\n return embedding.tolist()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface.html"}
+{"id": "efe14e94b84b-0", "text": "Source code for langchain.embeddings.deepinfra\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nDEFAULT_MODEL_ID = \"sentence-transformers/clip-ViT-B-32\"\n[docs]class DeepInfraEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around Deep Infra's embedding inference service.\n To use, you should have the\n environment variable ``DEEPINFRA_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n There are multiple embeddings models available,\n see https://deepinfra.com/models?type=embeddings.\n Example:\n .. code-block:: python\n from langchain.embeddings import DeepInfraEmbeddings\n deepinfra_emb = DeepInfraEmbeddings(\n model_id=\"sentence-transformers/clip-ViT-B-32\",\n deepinfra_api_token=\"my-api-key\"\n )\n r1 = deepinfra_emb.embed_documents(\n [\n \"Alpha is the first letter of Greek alphabet\",\n \"Beta is the second letter of Greek alphabet\",\n ]\n )\n r2 = deepinfra_emb.embed_query(\n \"What is the second letter of Greek alphabet\"\n )\n \"\"\"\n model_id: str = DEFAULT_MODEL_ID\n \"\"\"Embeddings model to use.\"\"\"\n normalize: bool = False\n \"\"\"whether to normalize the computed embeddings\"\"\"\n embed_instruction: str = \"passage: \"\n \"\"\"Instruction used to embed documents.\"\"\"\n query_instruction: str = \"query: \"\n \"\"\"Instruction used to embed the query.\"\"\"\n model_kwargs: Optional[dict] = None", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/deepinfra.html"}
+{"id": "efe14e94b84b-1", "text": "model_kwargs: Optional[dict] = None\n \"\"\"Other model keyword args\"\"\"\n deepinfra_api_token: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n deepinfra_api_token = get_from_dict_or_env(\n values, \"deepinfra_api_token\", \"DEEPINFRA_API_TOKEN\"\n )\n values[\"deepinfra_api_token\"] = deepinfra_api_token\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\"model_id\": self.model_id}\n def _embed(self, input: List[str]) -> List[List[float]]:\n _model_kwargs = self.model_kwargs or {}\n # HTTP headers for authorization\n headers = {\n \"Authorization\": f\"bearer {self.deepinfra_api_token}\",\n \"Content-Type\": \"application/json\",\n }\n # send request\n try:\n res = requests.post(\n f\"https://api.deepinfra.com/v1/inference/{self.model_id}\",\n headers=headers,\n json={\"inputs\": input, \"normalize\": self.normalize, **_model_kwargs},\n )\n except requests.exceptions.RequestException as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n if res.status_code != 200:\n raise ValueError(\n \"Error raised by inference API HTTP code: %s, %s\"\n % (res.status_code, res.text)\n )\n try:\n t = res.json()\n embeddings = t[\"embeddings\"]", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/deepinfra.html"}
+{"id": "efe14e94b84b-2", "text": "try:\n t = res.json()\n embeddings = t[\"embeddings\"]\n except requests.exceptions.JSONDecodeError as e:\n raise ValueError(\n f\"Error raised by inference API: {e}.\\nResponse: {res.text}\"\n )\n return embeddings\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Embed documents using a Deep Infra deployed embedding model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n instruction_pairs = [f\"{self.query_instruction}{text}\" for text in texts]\n embeddings = self._embed(instruction_pairs)\n return embeddings\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Embed a query using a Deep Infra deployed embedding model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n instruction_pair = f\"{self.query_instruction}{text}\"\n embedding = self._embed([instruction_pair])[0]\n return embedding\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/deepinfra.html"}
+{"id": "e39fff581ace-0", "text": "Source code for langchain.embeddings.self_hosted\n\"\"\"Running custom embedding models on self-hosted remote hardware.\"\"\"\nfrom typing import Any, Callable, List\nfrom pydantic import Extra\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.llms import SelfHostedPipeline\ndef _embed_documents(pipeline: Any, *args: Any, **kwargs: Any) -> List[List[float]]:\n \"\"\"Inference function to send to the remote hardware.\n Accepts a sentence_transformer model_id and\n returns a list of embeddings for each document in the batch.\n \"\"\"\n return pipeline(*args, **kwargs)\n[docs]class SelfHostedEmbeddings(SelfHostedPipeline, Embeddings):\n \"\"\"Runs custom embedding models on self-hosted remote hardware.\n Supported hardware includes auto-launched instances on AWS, GCP, Azure,\n and Lambda, as well as servers specified\n by IP address and SSH credentials (such as on-prem, or another\n cloud like Paperspace, Coreweave, etc.).\n To use, you should have the ``runhouse`` python package installed.\n Example using a model load function:\n .. code-block:: python\n from langchain.embeddings import SelfHostedEmbeddings\n from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n import runhouse as rh\n gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n def get_pipeline():\n model_id = \"facebook/bart-large\"\n tokenizer = AutoTokenizer.from_pretrained(model_id)\n model = AutoModelForCausalLM.from_pretrained(model_id)\n return pipeline(\"feature-extraction\", model=model, tokenizer=tokenizer)\n embeddings = SelfHostedEmbeddings(\n model_load_fn=get_pipeline,\n hardware=gpu", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted.html"}
+{"id": "e39fff581ace-1", "text": "model_load_fn=get_pipeline,\n hardware=gpu\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n )\n Example passing in a pipeline path:\n .. code-block:: python\n from langchain.embeddings import SelfHostedHFEmbeddings\n import runhouse as rh\n from transformers import pipeline\n gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n pipeline = pipeline(model=\"bert-base-uncased\", task=\"feature-extraction\")\n rh.blob(pickle.dumps(pipeline),\n path=\"models/pipeline.pkl\").save().to(gpu, path=\"models\")\n embeddings = SelfHostedHFEmbeddings.from_pipeline(\n pipeline=\"models/pipeline.pkl\",\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n )\n \"\"\"\n inference_fn: Callable = _embed_documents\n \"\"\"Inference function to extract the embeddings on the remote hardware.\"\"\"\n inference_kwargs: Any = None\n \"\"\"Any kwargs to pass to the model's inference function.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a HuggingFace transformer model.\n Args:\n texts: The list of texts to embed.s\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n texts = list(map(lambda x: x.replace(\"\\n\", \" \"), texts))\n embeddings = self.client(self.pipeline_ref, texts)\n if not isinstance(embeddings, list):\n return embeddings.tolist()\n return embeddings\n[docs] def embed_query(self, text: str) -> List[float]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted.html"}
+{"id": "e39fff581ace-2", "text": "[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a HuggingFace transformer model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n text = text.replace(\"\\n\", \" \")\n embeddings = self.client(self.pipeline_ref, text)\n if not isinstance(embeddings, list):\n return embeddings.tolist()\n return embeddings\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted.html"}
+{"id": "d49ec90c0f3f-0", "text": "Source code for langchain.embeddings.minimax\n\"\"\"Wrapper around MiniMax APIs.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Callable, Dict, List, Optional\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef _create_retry_decorator() -> Callable[[Any], Any]:\n \"\"\"Returns a tenacity retry decorator.\"\"\"\n multiplier = 1\n min_seconds = 1\n max_seconds = 4\n max_retries = 6\n return retry(\n reraise=True,\n stop=stop_after_attempt(max_retries),\n wait=wait_exponential(multiplier=multiplier, min=min_seconds, max=max_seconds),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\ndef embed_with_retry(embeddings: MiniMaxEmbeddings, *args: Any, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the completion call.\"\"\"\n retry_decorator = _create_retry_decorator()\n @retry_decorator\n def _embed_with_retry(*args: Any, **kwargs: Any) -> Any:\n return embeddings.embed(*args, **kwargs)\n return _embed_with_retry(*args, **kwargs)\n[docs]class MiniMaxEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around MiniMax's embedding inference service.\n To use, you should have the environment variable ``MINIMAX_GROUP_ID`` and\n ``MINIMAX_API_KEY`` set with your API token, or pass it as a named parameter to\n the constructor.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/minimax.html"}
+{"id": "d49ec90c0f3f-1", "text": "the constructor.\n Example:\n .. code-block:: python\n from langchain.embeddings import MiniMaxEmbeddings\n embeddings = MiniMaxEmbeddings()\n query_text = \"This is a test query.\"\n query_result = embeddings.embed_query(query_text)\n document_text = \"This is a test document.\"\n document_result = embeddings.embed_documents([document_text])\n \"\"\"\n endpoint_url: str = \"https://api.minimax.chat/v1/embeddings\"\n \"\"\"Endpoint URL to use.\"\"\"\n model: str = \"embo-01\"\n \"\"\"Embeddings model name to use.\"\"\"\n embed_type_db: str = \"db\"\n \"\"\"For embed_documents\"\"\"\n embed_type_query: str = \"query\"\n \"\"\"For embed_query\"\"\"\n minimax_group_id: Optional[str] = None\n \"\"\"Group ID for MiniMax API.\"\"\"\n minimax_api_key: Optional[str] = None\n \"\"\"API Key for MiniMax API.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that group id and api key exists in environment.\"\"\"\n minimax_group_id = get_from_dict_or_env(\n values, \"minimax_group_id\", \"MINIMAX_GROUP_ID\"\n )\n minimax_api_key = get_from_dict_or_env(\n values, \"minimax_api_key\", \"MINIMAX_API_KEY\"\n )\n values[\"minimax_group_id\"] = minimax_group_id\n values[\"minimax_api_key\"] = minimax_api_key\n return values\n def embed(\n self,\n texts: List[str],\n embed_type: str,", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/minimax.html"}
+{"id": "d49ec90c0f3f-2", "text": "self,\n texts: List[str],\n embed_type: str,\n ) -> List[List[float]]:\n payload = {\n \"model\": self.model,\n \"type\": embed_type,\n \"texts\": texts,\n }\n # HTTP headers for authorization\n headers = {\n \"Authorization\": f\"Bearer {self.minimax_api_key}\",\n \"Content-Type\": \"application/json\",\n }\n params = {\n \"GroupId\": self.minimax_group_id,\n }\n # send request\n response = requests.post(\n self.endpoint_url, params=params, headers=headers, json=payload\n )\n parsed_response = response.json()\n # check for errors\n if parsed_response[\"base_resp\"][\"status_code\"] != 0:\n raise ValueError(\n f\"MiniMax API returned an error: {parsed_response['base_resp']}\"\n )\n embeddings = parsed_response[\"vectors\"]\n return embeddings\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Embed documents using a MiniMax embedding endpoint.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n embeddings = embed_with_retry(self, texts=texts, embed_type=self.embed_type_db)\n return embeddings\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Embed a query using a MiniMax embedding endpoint.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n embeddings = embed_with_retry(\n self, texts=[text], embed_type=self.embed_type_query\n )\n return embeddings[0]", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/minimax.html"}
+{"id": "d49ec90c0f3f-3", "text": ")\n return embeddings[0]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/minimax.html"}
+{"id": "c81b90f88854-0", "text": "Source code for langchain.embeddings.openai\n\"\"\"Wrapper around OpenAI embedding models.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import (\n Any,\n Callable,\n Dict,\n List,\n Literal,\n Optional,\n Sequence,\n Set,\n Tuple,\n Union,\n)\nimport numpy as np\nfrom pydantic import BaseModel, Extra, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef _create_retry_decorator(embeddings: OpenAIEmbeddings) -> Callable[[Any], Any]:\n import openai\n min_seconds = 4\n max_seconds = 10\n # Wait 2^x * 1 second between each retry starting with\n # 4 seconds, then up to 10 seconds, then 10 seconds afterwards\n return retry(\n reraise=True,\n stop=stop_after_attempt(embeddings.max_retries),\n wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(openai.error.Timeout)\n | retry_if_exception_type(openai.error.APIError)\n | retry_if_exception_type(openai.error.APIConnectionError)\n | retry_if_exception_type(openai.error.RateLimitError)\n | retry_if_exception_type(openai.error.ServiceUnavailableError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\ndef embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) -> Any:", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"}
+{"id": "c81b90f88854-1", "text": "\"\"\"Use tenacity to retry the embedding call.\"\"\"\n retry_decorator = _create_retry_decorator(embeddings)\n @retry_decorator\n def _embed_with_retry(**kwargs: Any) -> Any:\n return embeddings.client.create(**kwargs)\n return _embed_with_retry(**kwargs)\n[docs]class OpenAIEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around OpenAI embedding models.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``OPENAI_API_KEY`` set with your API key or pass it\n as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.embeddings import OpenAIEmbeddings\n openai = OpenAIEmbeddings(openai_api_key=\"my-api-key\")\n In order to use the library with Microsoft Azure endpoints, you need to set\n the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION.\n The OPENAI_API_TYPE must be set to 'azure' and the others correspond to\n the properties of your endpoint.\n In addition, the deployment name must be passed as the model parameter.\n Example:\n .. code-block:: python\n import os\n os.environ[\"OPENAI_API_TYPE\"] = \"azure\"\n os.environ[\"OPENAI_API_BASE\"] = \"https:// Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n values[\"openai_api_key\"] = get_from_dict_or_env(\n values, \"openai_api_key\", \"OPENAI_API_KEY\"\n )\n values[\"openai_api_base\"] = get_from_dict_or_env(\n values,\n \"openai_api_base\",\n \"OPENAI_API_BASE\",\n default=\"\",\n )\n values[\"openai_api_type\"] = get_from_dict_or_env(\n values,\n \"openai_api_type\",\n \"OPENAI_API_TYPE\",\n default=\"\",\n )\n values[\"openai_proxy\"] = get_from_dict_or_env(\n values,\n \"openai_proxy\",\n \"OPENAI_PROXY\",\n default=\"\",\n )\n if values[\"openai_api_type\"] in (\"azure\", \"azure_ad\", \"azuread\"):\n default_api_version = \"2022-12-01\"\n else:\n default_api_version = \"\"\n values[\"openai_api_version\"] = get_from_dict_or_env(\n values,\n \"openai_api_version\",\n \"OPENAI_API_VERSION\",\n default=default_api_version,\n )\n values[\"openai_organization\"] = get_from_dict_or_env(\n values,\n \"openai_organization\",\n \"OPENAI_ORGANIZATION\",\n default=\"\",\n )\n try:\n import openai\n values[\"client\"] = openai.Embedding\n except ImportError:\n raise ImportError(\n \"Could not import openai python package. \"\n \"Please install it with `pip install openai`.\"\n )\n return values", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"}
+{"id": "c81b90f88854-4", "text": ")\n return values\n @property\n def _invocation_params(self) -> Dict:\n openai_args = {\n \"engine\": self.deployment,\n \"request_timeout\": self.request_timeout,\n \"headers\": self.headers,\n \"api_key\": self.openai_api_key,\n \"organization\": self.openai_organization,\n \"api_base\": self.openai_api_base,\n \"api_type\": self.openai_api_type,\n \"api_version\": self.openai_api_version,\n }\n if self.openai_proxy:\n import openai\n openai.proxy = {\n \"http\": self.openai_proxy,\n \"https\": self.openai_proxy,\n } # type: ignore[assignment] # noqa: E501\n return openai_args\n # please refer to\n # https://github.com/openai/openai-cookbook/blob/main/examples/Embedding_long_inputs.ipynb\n def _get_len_safe_embeddings(\n self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None\n ) -> List[List[float]]:\n embeddings: List[List[float]] = [[] for _ in range(len(texts))]\n try:\n import tiktoken\n except ImportError:\n raise ImportError(\n \"Could not import tiktoken python package. \"\n \"This is needed in order to for OpenAIEmbeddings. \"\n \"Please install it with `pip install tiktoken`.\"\n )\n tokens = []\n indices = []\n encoding = tiktoken.model.encoding_for_model(self.model)\n for i, text in enumerate(texts):\n if self.model.endswith(\"001\"):", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"}
+{"id": "c81b90f88854-5", "text": "for i, text in enumerate(texts):\n if self.model.endswith(\"001\"):\n # See: https://github.com/openai/openai-python/issues/418#issuecomment-1525939500\n # replace newlines, which can negatively affect performance.\n text = text.replace(\"\\n\", \" \")\n token = encoding.encode(\n text,\n allowed_special=self.allowed_special,\n disallowed_special=self.disallowed_special,\n )\n for j in range(0, len(token), self.embedding_ctx_length):\n tokens += [token[j : j + self.embedding_ctx_length]]\n indices += [i]\n batched_embeddings = []\n _chunk_size = chunk_size or self.chunk_size\n for i in range(0, len(tokens), _chunk_size):\n response = embed_with_retry(\n self,\n input=tokens[i : i + _chunk_size],\n **self._invocation_params,\n )\n batched_embeddings += [r[\"embedding\"] for r in response[\"data\"]]\n results: List[List[List[float]]] = [[] for _ in range(len(texts))]\n num_tokens_in_batch: List[List[int]] = [[] for _ in range(len(texts))]\n for i in range(len(indices)):\n results[indices[i]].append(batched_embeddings[i])\n num_tokens_in_batch[indices[i]].append(len(tokens[i]))\n for i in range(len(texts)):\n _result = results[i]\n if len(_result) == 0:\n average = embed_with_retry(\n self,\n input=\"\",\n **self._invocation_params,\n )[\n \"data\"\n ][0][\"embedding\"]\n else:", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"}
+{"id": "c81b90f88854-6", "text": ")[\n \"data\"\n ][0][\"embedding\"]\n else:\n average = np.average(_result, axis=0, weights=num_tokens_in_batch[i])\n embeddings[i] = (average / np.linalg.norm(average)).tolist()\n return embeddings\n def _embedding_func(self, text: str, *, engine: str) -> List[float]:\n \"\"\"Call out to OpenAI's embedding endpoint.\"\"\"\n # handle large input text\n if len(text) > self.embedding_ctx_length:\n return self._get_len_safe_embeddings([text], engine=engine)[0]\n else:\n if self.model.endswith(\"001\"):\n # See: https://github.com/openai/openai-python/issues/418#issuecomment-1525939500\n # replace newlines, which can negatively affect performance.\n text = text.replace(\"\\n\", \" \")\n return embed_with_retry(\n self,\n input=[text],\n **self._invocation_params,\n )[\n \"data\"\n ][0][\"embedding\"]\n[docs] def embed_documents(\n self, texts: List[str], chunk_size: Optional[int] = 0\n ) -> List[List[float]]:\n \"\"\"Call out to OpenAI's embedding endpoint for embedding search docs.\n Args:\n texts: The list of texts to embed.\n chunk_size: The chunk size of embeddings. If None, will use the chunk size\n specified by the class.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n # NOTE: to keep things simple, we assume the list may contain texts longer\n # than the maximum context and use length-safe embedding function.\n return self._get_len_safe_embeddings(texts, engine=self.deployment)", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"}
+{"id": "c81b90f88854-7", "text": "return self._get_len_safe_embeddings(texts, engine=self.deployment)\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to OpenAI's embedding endpoint for embedding query text.\n Args:\n text: The text to embed.\n Returns:\n Embedding for the text.\n \"\"\"\n embedding = self._embedding_func(text, engine=self.deployment)\n return embedding\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"}
+{"id": "4d6dc72553be-0", "text": "Source code for langchain.embeddings.mosaicml\n\"\"\"Wrapper around MosaicML APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional, Tuple\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\n[docs]class MosaicMLInstructorEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around MosaicML's embedding inference service.\n To use, you should have the\n environment variable ``MOSAICML_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.llms import MosaicMLInstructorEmbeddings\n endpoint_url = (\n \"https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict\"\n )\n mosaic_llm = MosaicMLInstructorEmbeddings(\n endpoint_url=endpoint_url,\n mosaicml_api_token=\"my-api-key\"\n )\n \"\"\"\n endpoint_url: str = (\n \"https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict\"\n )\n \"\"\"Endpoint URL to use.\"\"\"\n embed_instruction: str = \"Represent the document for retrieval: \"\n \"\"\"Instruction used to embed documents.\"\"\"\n query_instruction: str = (\n \"Represent the question for retrieving supporting documents: \"\n )\n \"\"\"Instruction used to embed the query.\"\"\"\n retry_sleep: float = 1.0\n \"\"\"How long to try sleeping for if a rate limit is encountered\"\"\"\n mosaicml_api_token: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/mosaicml.html"}
+{"id": "4d6dc72553be-1", "text": "\"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n mosaicml_api_token = get_from_dict_or_env(\n values, \"mosaicml_api_token\", \"MOSAICML_API_TOKEN\"\n )\n values[\"mosaicml_api_token\"] = mosaicml_api_token\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\"endpoint_url\": self.endpoint_url}\n def _embed(\n self, input: List[Tuple[str, str]], is_retry: bool = False\n ) -> List[List[float]]:\n payload = {\"input_strings\": input}\n # HTTP headers for authorization\n headers = {\n \"Authorization\": f\"{self.mosaicml_api_token}\",\n \"Content-Type\": \"application/json\",\n }\n # send request\n try:\n response = requests.post(self.endpoint_url, headers=headers, json=payload)\n except requests.exceptions.RequestException as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n try:\n parsed_response = response.json()\n if \"error\" in parsed_response:\n # if we get rate limited, try sleeping for 1 second\n if (\n not is_retry\n and \"rate limit exceeded\" in parsed_response[\"error\"].lower()\n ):\n import time\n time.sleep(self.retry_sleep)\n return self._embed(input, is_retry=True)\n raise ValueError(\n f\"Error raised by inference API: {parsed_response['error']}\"\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/mosaicml.html"}
+{"id": "4d6dc72553be-2", "text": "f\"Error raised by inference API: {parsed_response['error']}\"\n )\n if \"data\" not in parsed_response:\n raise ValueError(\n f\"Error raised by inference API, no key data: {parsed_response}\"\n )\n embeddings = parsed_response[\"data\"]\n except requests.exceptions.JSONDecodeError as e:\n raise ValueError(\n f\"Error raised by inference API: {e}.\\nResponse: {response.text}\"\n )\n return embeddings\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Embed documents using a MosaicML deployed instructor embedding model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n instruction_pairs = [(self.embed_instruction, text) for text in texts]\n embeddings = self._embed(instruction_pairs)\n return embeddings\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Embed a query using a MosaicML deployed instructor embedding model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n instruction_pair = (self.query_instruction, text)\n embedding = self._embed([instruction_pair])[0]\n return embedding\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/mosaicml.html"}
+{"id": "f28621e5e35c-0", "text": "Source code for langchain.embeddings.sagemaker_endpoint\n\"\"\"Wrapper around Sagemaker InvokeEndpoint API.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.llms.sagemaker_endpoint import ContentHandlerBase\nclass EmbeddingsContentHandler(ContentHandlerBase[List[str], List[List[float]]]):\n \"\"\"Content handler for LLM class.\"\"\"\n[docs]class SagemakerEndpointEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around custom Sagemaker Inference Endpoints.\n To use, you must supply the endpoint name from your deployed\n Sagemaker model & the region where it is deployed.\n To authenticate, the AWS client uses the following methods to\n automatically load credentials:\n https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n If a specific credential profile should be used, you must pass\n the name of the profile from the ~/.aws/credentials file that is to be used.\n Make sure the credentials / roles used have the required policies to\n access the Sagemaker endpoint.\n See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n from langchain.embeddings import SagemakerEndpointEmbeddings\n endpoint_name = (\n \"my-endpoint-name\"\n )\n region_name = (\n \"us-west-2\"\n )\n credentials_profile_name = (\n \"default\"\n )\n se = SagemakerEndpointEmbeddings(\n endpoint_name=endpoint_name,\n region_name=region_name,\n credentials_profile_name=credentials_profile_name\n )\n \"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/sagemaker_endpoint.html"}
+{"id": "f28621e5e35c-1", "text": "credentials_profile_name=credentials_profile_name\n )\n \"\"\"\n client: Any #: :meta private:\n endpoint_name: str = \"\"\n \"\"\"The name of the endpoint from the deployed Sagemaker model.\n Must be unique within an AWS Region.\"\"\"\n region_name: str = \"\"\n \"\"\"The aws region where the Sagemaker model is deployed, eg. `us-west-2`.\"\"\"\n credentials_profile_name: Optional[str] = None\n \"\"\"The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\n has either access keys or role information specified.\n If not specified, the default credential profile or, if on an EC2 instance,\n credentials from IMDS will be used.\n See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n \"\"\"\n content_handler: EmbeddingsContentHandler\n \"\"\"The content handler class that provides an input and\n output transform functions to handle formats between LLM\n and the endpoint.\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler\n class ContentHandler(EmbeddingsContentHandler):\n content_type = \"application/json\"\n accepts = \"application/json\"\n def transform_input(self, prompts: List[str], model_kwargs: Dict) -> bytes:\n input_str = json.dumps({prompts: prompts, **model_kwargs})\n return input_str.encode('utf-8')\n def transform_output(self, output: bytes) -> List[List[float]]:\n response_json = json.loads(output.read().decode(\"utf-8\"))\n return response_json[\"vectors\"]\n \"\"\" # noqa: E501\n model_kwargs: Optional[Dict] = None", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/sagemaker_endpoint.html"}
+{"id": "f28621e5e35c-2", "text": "\"\"\" # noqa: E501\n model_kwargs: Optional[Dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n endpoint_kwargs: Optional[Dict] = None\n \"\"\"Optional attributes passed to the invoke_endpoint\n function. See `boto3`_. docs for more info.\n .. _boto3: \n \"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that AWS credentials to and python package exists in environment.\"\"\"\n try:\n import boto3\n try:\n if values[\"credentials_profile_name\"] is not None:\n session = boto3.Session(\n profile_name=values[\"credentials_profile_name\"]\n )\n else:\n # use default credentials\n session = boto3.Session()\n values[\"client\"] = session.client(\n \"sagemaker-runtime\", region_name=values[\"region_name\"]\n )\n except Exception as e:\n raise ValueError(\n \"Could not load credentials to authenticate with AWS client. \"\n \"Please check that credentials in the specified \"\n \"profile name are valid.\"\n ) from e\n except ImportError:\n raise ValueError(\n \"Could not import boto3 python package. \"\n \"Please install it with `pip install boto3`.\"\n )\n return values\n def _embedding_func(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to SageMaker Inference embedding endpoint.\"\"\"\n # replace newlines, which can negatively affect performance.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/sagemaker_endpoint.html"}
+{"id": "f28621e5e35c-3", "text": "# replace newlines, which can negatively affect performance.\n texts = list(map(lambda x: x.replace(\"\\n\", \" \"), texts))\n _model_kwargs = self.model_kwargs or {}\n _endpoint_kwargs = self.endpoint_kwargs or {}\n body = self.content_handler.transform_input(texts, _model_kwargs)\n content_type = self.content_handler.content_type\n accepts = self.content_handler.accepts\n # send request\n try:\n response = self.client.invoke_endpoint(\n EndpointName=self.endpoint_name,\n Body=body,\n ContentType=content_type,\n Accept=accepts,\n **_endpoint_kwargs,\n )\n except Exception as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n return self.content_handler.transform_output(response[\"Body\"])\n[docs] def embed_documents(\n self, texts: List[str], chunk_size: int = 64\n ) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a SageMaker Inference Endpoint.\n Args:\n texts: The list of texts to embed.\n chunk_size: The chunk size defines how many input texts will\n be grouped together as request. If None, will use the\n chunk size specified by the class.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n results = []\n _chunk_size = len(texts) if chunk_size > len(texts) else chunk_size\n for i in range(0, len(texts), _chunk_size):\n response = self._embedding_func(texts[i : i + _chunk_size])\n results.extend(response)\n return results\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a SageMaker inference endpoint.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/sagemaker_endpoint.html"}
+{"id": "f28621e5e35c-4", "text": "\"\"\"Compute query embeddings using a SageMaker inference endpoint.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n return self._embedding_func([text])[0]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/embeddings/sagemaker_endpoint.html"}
+{"id": "eaf6e26ef177-0", "text": "Source code for langchain.document_loaders.sitemap\n\"\"\"Loader that fetches a sitemap and loads those URLs.\"\"\"\nimport itertools\nimport re\nfrom typing import Any, Callable, Generator, Iterable, List, Optional\nfrom langchain.document_loaders.web_base import WebBaseLoader\nfrom langchain.schema import Document\ndef _default_parsing_function(content: Any) -> str:\n return str(content.get_text())\ndef _default_meta_function(meta: dict, _content: Any) -> dict:\n return {\"source\": meta[\"loc\"], **meta}\ndef _batch_block(iterable: Iterable, size: int) -> Generator[List[dict], None, None]:\n it = iter(iterable)\n while item := list(itertools.islice(it, size)):\n yield item\n[docs]class SitemapLoader(WebBaseLoader):\n \"\"\"Loader that fetches a sitemap and loads those URLs.\"\"\"\n def __init__(\n self,\n web_path: str,\n filter_urls: Optional[List[str]] = None,\n parsing_function: Optional[Callable] = None,\n blocksize: Optional[int] = None,\n blocknum: int = 0,\n meta_function: Optional[Callable] = None,\n is_local: bool = False,\n ):\n \"\"\"Initialize with webpage path and optional filter URLs.\n Args:\n web_path: url of the sitemap. can also be a local path\n filter_urls: list of strings or regexes that will be applied to filter the\n urls that are parsed and loaded\n parsing_function: Function to parse bs4.Soup output\n blocksize: number of sitemap locations per block\n blocknum: the number of the block that should be loaded - zero indexed\n meta_function: Function to parse bs4.Soup output for metadata", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/sitemap.html"}
+{"id": "eaf6e26ef177-1", "text": "meta_function: Function to parse bs4.Soup output for metadata\n remember when setting this method to also copy metadata[\"loc\"]\n to metadata[\"source\"] if you are using this field\n is_local: whether the sitemap is a local file\n \"\"\"\n if blocksize is not None and blocksize < 1:\n raise ValueError(\"Sitemap blocksize should be at least 1\")\n if blocknum < 0:\n raise ValueError(\"Sitemap blocknum can not be lower then 0\")\n try:\n import lxml # noqa:F401\n except ImportError:\n raise ImportError(\n \"lxml package not found, please install it with \" \"`pip install lxml`\"\n )\n super().__init__(web_path)\n self.filter_urls = filter_urls\n self.parsing_function = parsing_function or _default_parsing_function\n self.meta_function = meta_function or _default_meta_function\n self.blocksize = blocksize\n self.blocknum = blocknum\n self.is_local = is_local\n[docs] def parse_sitemap(self, soup: Any) -> List[dict]:\n \"\"\"Parse sitemap xml and load into a list of dicts.\"\"\"\n els = []\n for url in soup.find_all(\"url\"):\n loc = url.find(\"loc\")\n if not loc:\n continue\n # Strip leading and trailing whitespace and newlines\n loc_text = loc.text.strip()\n if self.filter_urls and not any(\n re.match(r, loc_text) for r in self.filter_urls\n ):\n continue\n els.append(\n {\n tag: prop.text\n for tag in [\"loc\", \"lastmod\", \"changefreq\", \"priority\"]\n if (prop := url.find(tag))\n }\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/sitemap.html"}
+{"id": "eaf6e26ef177-2", "text": "if (prop := url.find(tag))\n }\n )\n for sitemap in soup.find_all(\"sitemap\"):\n loc = sitemap.find(\"loc\")\n if not loc:\n continue\n soup_child = self.scrape_all([loc.text], \"xml\")[0]\n els.extend(self.parse_sitemap(soup_child))\n return els\n[docs] def load(self) -> List[Document]:\n \"\"\"Load sitemap.\"\"\"\n if self.is_local:\n try:\n import bs4\n except ImportError:\n raise ImportError(\n \"beautifulsoup4 package not found, please install it\"\n \" with `pip install beautifulsoup4`\"\n )\n fp = open(self.web_path)\n soup = bs4.BeautifulSoup(fp, \"xml\")\n else:\n soup = self.scrape(\"xml\")\n els = self.parse_sitemap(soup)\n if self.blocksize is not None:\n elblocks = list(_batch_block(els, self.blocksize))\n blockcount = len(elblocks)\n if blockcount - 1 < self.blocknum:\n raise ValueError(\n \"Selected sitemap does not contain enough blocks for given blocknum\"\n )\n else:\n els = elblocks[self.blocknum]\n results = self.scrape_all([el[\"loc\"].strip() for el in els if \"loc\" in el])\n return [\n Document(\n page_content=self.parsing_function(results[i]),\n metadata=self.meta_function(els[i], results[i]),\n )\n for i in range(len(results))\n ]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/sitemap.html"}
+{"id": "6138d69df50f-0", "text": "Source code for langchain.document_loaders.image\n\"\"\"Loader that loads image files.\"\"\"\nfrom typing import List\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class UnstructuredImageLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load image files, such as PNGs and JPGs.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.partition.image import partition_image\n return partition_image(filename=self.file_path, **self.unstructured_kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/image.html"}
+{"id": "df3c2a58ad24-0", "text": "Source code for langchain.document_loaders.modern_treasury\n\"\"\"Loader that fetches data from Modern Treasury\"\"\"\nimport json\nimport urllib.request\nfrom base64 import b64encode\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import get_from_env, stringify_value\nMODERN_TREASURY_ENDPOINTS = {\n \"payment_orders\": \"https://app.moderntreasury.com/api/payment_orders\",\n \"expected_payments\": \"https://app.moderntreasury.com/api/expected_payments\",\n \"returns\": \"https://app.moderntreasury.com/api/returns\",\n \"incoming_payment_details\": \"https://app.moderntreasury.com/api/\\\nincoming_payment_details\",\n \"counterparties\": \"https://app.moderntreasury.com/api/counterparties\",\n \"internal_accounts\": \"https://app.moderntreasury.com/api/internal_accounts\",\n \"external_accounts\": \"https://app.moderntreasury.com/api/external_accounts\",\n \"transactions\": \"https://app.moderntreasury.com/api/transactions\",\n \"ledgers\": \"https://app.moderntreasury.com/api/ledgers\",\n \"ledger_accounts\": \"https://app.moderntreasury.com/api/ledger_accounts\",\n \"ledger_transactions\": \"https://app.moderntreasury.com/api/ledger_transactions\",\n \"events\": \"https://app.moderntreasury.com/api/events\",\n \"invoices\": \"https://app.moderntreasury.com/api/invoices\",\n}\n[docs]class ModernTreasuryLoader(BaseLoader):\n def __init__(\n self,\n resource: str,\n organization_id: Optional[str] = None,", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/modern_treasury.html"}
+{"id": "df3c2a58ad24-1", "text": "resource: str,\n organization_id: Optional[str] = None,\n api_key: Optional[str] = None,\n ) -> None:\n self.resource = resource\n organization_id = organization_id or get_from_env(\n \"organization_id\", \"MODERN_TREASURY_ORGANIZATION_ID\"\n )\n api_key = api_key or get_from_env(\"api_key\", \"MODERN_TREASURY_API_KEY\")\n credentials = f\"{organization_id}:{api_key}\".encode(\"utf-8\")\n basic_auth_token = b64encode(credentials).decode(\"utf-8\")\n self.headers = {\"Authorization\": f\"Basic {basic_auth_token}\"}\n def _make_request(self, url: str) -> List[Document]:\n request = urllib.request.Request(url, headers=self.headers)\n with urllib.request.urlopen(request) as response:\n json_data = json.loads(response.read().decode())\n text = stringify_value(json_data)\n metadata = {\"source\": url}\n return [Document(page_content=text, metadata=metadata)]\n def _get_resource(self) -> List[Document]:\n endpoint = MODERN_TREASURY_ENDPOINTS.get(self.resource)\n if endpoint is None:\n return []\n return self._make_request(endpoint)\n[docs] def load(self) -> List[Document]:\n return self._get_resource()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/modern_treasury.html"}
+{"id": "0f79fd7fa21a-0", "text": "Source code for langchain.document_loaders.odt\n\"\"\"Loader that loads Open Office ODT files.\"\"\"\nfrom typing import Any, List\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n validate_unstructured_version,\n)\n[docs]class UnstructuredODTLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load open office ODT files.\"\"\"\n def __init__(\n self, file_path: str, mode: str = \"single\", **unstructured_kwargs: Any\n ):\n validate_unstructured_version(min_unstructured_version=\"0.6.3\")\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.odt import partition_odt\n return partition_odt(filename=self.file_path, **self.unstructured_kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/odt.html"}
+{"id": "848403781281-0", "text": "Source code for langchain.document_loaders.slack_directory\n\"\"\"Loader for documents from a Slack export.\"\"\"\nimport json\nimport zipfile\nfrom pathlib import Path\nfrom typing import Dict, List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class SlackDirectoryLoader(BaseLoader):\n \"\"\"Loader for loading documents from a Slack directory dump.\"\"\"\n def __init__(self, zip_path: str, workspace_url: Optional[str] = None):\n \"\"\"Initialize the SlackDirectoryLoader.\n Args:\n zip_path (str): The path to the Slack directory dump zip file.\n workspace_url (Optional[str]): The Slack workspace URL.\n Including the URL will turn\n sources into links. Defaults to None.\n \"\"\"\n self.zip_path = Path(zip_path)\n self.workspace_url = workspace_url\n self.channel_id_map = self._get_channel_id_map(self.zip_path)\n @staticmethod\n def _get_channel_id_map(zip_path: Path) -> Dict[str, str]:\n \"\"\"Get a dictionary mapping channel names to their respective IDs.\"\"\"\n with zipfile.ZipFile(zip_path, \"r\") as zip_file:\n try:\n with zip_file.open(\"channels.json\", \"r\") as f:\n channels = json.load(f)\n return {channel[\"name\"]: channel[\"id\"] for channel in channels}\n except KeyError:\n return {}\n[docs] def load(self) -> List[Document]:\n \"\"\"Load and return documents from the Slack directory dump.\"\"\"\n docs = []\n with zipfile.ZipFile(self.zip_path, \"r\") as zip_file:\n for channel_path in zip_file.namelist():\n channel_name = Path(channel_path).parent.name\n if not channel_name:\n continue", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/slack_directory.html"}
+{"id": "848403781281-1", "text": "if not channel_name:\n continue\n if channel_path.endswith(\".json\"):\n messages = self._read_json(zip_file, channel_path)\n for message in messages:\n document = self._convert_message_to_document(\n message, channel_name\n )\n docs.append(document)\n return docs\n def _read_json(self, zip_file: zipfile.ZipFile, file_path: str) -> List[dict]:\n \"\"\"Read JSON data from a zip subfile.\"\"\"\n with zip_file.open(file_path, \"r\") as f:\n data = json.load(f)\n return data\n def _convert_message_to_document(\n self, message: dict, channel_name: str\n ) -> Document:\n \"\"\"\n Convert a message to a Document object.\n Args:\n message (dict): A message in the form of a dictionary.\n channel_name (str): The name of the channel the message belongs to.\n Returns:\n Document: A Document object representing the message.\n \"\"\"\n text = message.get(\"text\", \"\")\n metadata = self._get_message_metadata(message, channel_name)\n return Document(\n page_content=text,\n metadata=metadata,\n )\n def _get_message_metadata(self, message: dict, channel_name: str) -> dict:\n \"\"\"Create and return metadata for a given message and channel.\"\"\"\n timestamp = message.get(\"ts\", \"\")\n user = message.get(\"user\", \"\")\n source = self._get_message_source(channel_name, user, timestamp)\n return {\n \"source\": source,\n \"channel\": channel_name,\n \"timestamp\": timestamp,\n \"user\": user,\n }", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/slack_directory.html"}
+{"id": "848403781281-2", "text": "\"timestamp\": timestamp,\n \"user\": user,\n }\n def _get_message_source(self, channel_name: str, user: str, timestamp: str) -> str:\n \"\"\"\n Get the message source as a string.\n Args:\n channel_name (str): The name of the channel the message belongs to.\n user (str): The user ID who sent the message.\n timestamp (str): The timestamp of the message.\n Returns:\n str: The message source.\n \"\"\"\n if self.workspace_url:\n channel_id = self.channel_id_map.get(channel_name, \"\")\n return (\n f\"{self.workspace_url}/archives/{channel_id}\"\n + f\"/p{timestamp.replace('.', '')}\"\n )\n else:\n return f\"{channel_name} - {user} - {timestamp}\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/slack_directory.html"}
+{"id": "48bf3cfd44ae-0", "text": "Source code for langchain.document_loaders.max_compute\nfrom __future__ import annotations\nfrom typing import Any, Iterator, List, Optional, Sequence\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utilities.max_compute import MaxComputeAPIWrapper\n[docs]class MaxComputeLoader(BaseLoader):\n \"\"\"Loads a query result from Alibaba Cloud MaxCompute table into documents.\"\"\"\n def __init__(\n self,\n query: str,\n api_wrapper: MaxComputeAPIWrapper,\n *,\n page_content_columns: Optional[Sequence[str]] = None,\n metadata_columns: Optional[Sequence[str]] = None,\n ):\n \"\"\"Initialize Alibaba Cloud MaxCompute document loader.\n Args:\n query: SQL query to execute.\n api_wrapper: MaxCompute API wrapper.\n page_content_columns: The columns to write into the `page_content` of the\n Document. If unspecified, all columns will be written to `page_content`.\n metadata_columns: The columns to write into the `metadata` of the Document.\n If unspecified, all columns not added to `page_content` will be written.\n \"\"\"\n self.query = query\n self.api_wrapper = api_wrapper\n self.page_content_columns = page_content_columns\n self.metadata_columns = metadata_columns\n[docs] @classmethod\n def from_params(\n cls,\n query: str,\n endpoint: str,\n project: str,\n *,\n access_id: Optional[str] = None,\n secret_access_key: Optional[str] = None,\n **kwargs: Any,\n ) -> MaxComputeLoader:\n \"\"\"Convenience constructor that builds the MaxCompute API wrapper from\n given parameters.\n Args:\n query: SQL query to execute.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/max_compute.html"}
+{"id": "48bf3cfd44ae-1", "text": "given parameters.\n Args:\n query: SQL query to execute.\n endpoint: MaxCompute endpoint.\n project: A project is a basic organizational unit of MaxCompute, which is\n similar to a database.\n access_id: MaxCompute access ID. Should be passed in directly or set as the\n environment variable `MAX_COMPUTE_ACCESS_ID`.\n secret_access_key: MaxCompute secret access key. Should be passed in\n directly or set as the environment variable\n `MAX_COMPUTE_SECRET_ACCESS_KEY`.\n \"\"\"\n api_wrapper = MaxComputeAPIWrapper.from_params(\n endpoint, project, access_id=access_id, secret_access_key=secret_access_key\n )\n return cls(query, api_wrapper, **kwargs)\n[docs] def lazy_load(self) -> Iterator[Document]:\n for row in self.api_wrapper.query(self.query):\n if self.page_content_columns:\n page_content_data = {\n k: v for k, v in row.items() if k in self.page_content_columns\n }\n else:\n page_content_data = row\n page_content = \"\\n\".join(f\"{k}: {v}\" for k, v in page_content_data.items())\n if self.metadata_columns:\n metadata = {k: v for k, v in row.items() if k in self.metadata_columns}\n else:\n metadata = {k: v for k, v in row.items() if k not in page_content_data}\n yield Document(page_content=page_content, metadata=metadata)\n[docs] def load(self) -> List[Document]:\n return list(self.lazy_load())\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/max_compute.html"}
+{"id": "29670375e236-0", "text": "Source code for langchain.document_loaders.toml\nimport json\nfrom pathlib import Path\nfrom typing import Iterator, List, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class TomlLoader(BaseLoader):\n \"\"\"\n A TOML document loader that inherits from the BaseLoader class.\n This class can be initialized with either a single source file or a source\n directory containing TOML files.\n \"\"\"\n def __init__(self, source: Union[str, Path]):\n \"\"\"Initialize the TomlLoader with a source file or directory.\"\"\"\n self.source = Path(source)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load and return all documents.\"\"\"\n return list(self.lazy_load())\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Lazily load the TOML documents from the source file or directory.\"\"\"\n import tomli\n if self.source.is_file() and self.source.suffix == \".toml\":\n files = [self.source]\n elif self.source.is_dir():\n files = list(self.source.glob(\"**/*.toml\"))\n else:\n raise ValueError(\"Invalid source path or file type\")\n for file_path in files:\n with file_path.open(\"r\", encoding=\"utf-8\") as file:\n content = file.read()\n try:\n data = tomli.loads(content)\n doc = Document(\n page_content=json.dumps(data),\n metadata={\"source\": str(file_path)},\n )\n yield doc\n except tomli.TOMLDecodeError as e:\n print(f\"Error parsing TOML file {file_path}: {e}\")\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/toml.html"}
+{"id": "29670375e236-1", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/toml.html"}
+{"id": "c5c5d132028a-0", "text": "Source code for langchain.document_loaders.epub\n\"\"\"Loader that loads EPub files.\"\"\"\nfrom typing import List\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n satisfies_min_unstructured_version,\n)\n[docs]class UnstructuredEPubLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load epub files.\"\"\"\n def _get_elements(self) -> List:\n min_unstructured_version = \"0.5.4\"\n if not satisfies_min_unstructured_version(min_unstructured_version):\n raise ValueError(\n \"Partitioning epub files is only supported in \"\n f\"unstructured>={min_unstructured_version}.\"\n )\n from unstructured.partition.epub import partition_epub\n return partition_epub(filename=self.file_path, **self.unstructured_kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/epub.html"}
+{"id": "3eaba7880439-0", "text": "Source code for langchain.document_loaders.xml\n\"\"\"Loader that loads Microsoft Excel files.\"\"\"\nfrom typing import Any, List\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n validate_unstructured_version,\n)\n[docs]class UnstructuredXMLLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load XML files.\"\"\"\n def __init__(\n self, file_path: str, mode: str = \"single\", **unstructured_kwargs: Any\n ):\n validate_unstructured_version(min_unstructured_version=\"0.6.7\")\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.xml import partition_xml\n return partition_xml(filename=self.file_path, **self.unstructured_kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/xml.html"}
+{"id": "5dd27cd67ba1-0", "text": "Source code for langchain.document_loaders.notiondb\n\"\"\"Notion DB loader for langchain\"\"\"\nfrom typing import Any, Dict, List, Optional\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nNOTION_BASE_URL = \"https://api.notion.com/v1\"\nDATABASE_URL = NOTION_BASE_URL + \"/databases/{database_id}/query\"\nPAGE_URL = NOTION_BASE_URL + \"/pages/{page_id}\"\nBLOCK_URL = NOTION_BASE_URL + \"/blocks/{block_id}/children\"\n[docs]class NotionDBLoader(BaseLoader):\n \"\"\"Notion DB Loader.\n Reads content from pages within a Noton Database.\n Args:\n integration_token (str): Notion integration token.\n database_id (str): Notion database id.\n request_timeout_sec (int): Timeout for Notion requests in seconds.\n \"\"\"\n def __init__(\n self,\n integration_token: str,\n database_id: str,\n request_timeout_sec: Optional[int] = 10,\n ) -> None:\n \"\"\"Initialize with parameters.\"\"\"\n if not integration_token:\n raise ValueError(\"integration_token must be provided\")\n if not database_id:\n raise ValueError(\"database_id must be provided\")\n self.token = integration_token\n self.database_id = database_id\n self.headers = {\n \"Authorization\": \"Bearer \" + self.token,\n \"Content-Type\": \"application/json\",\n \"Notion-Version\": \"2022-06-28\",\n }\n self.request_timeout_sec = request_timeout_sec\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents from the Notion database.\n Returns:\n List[Document]: List of documents.\n \"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/notiondb.html"}
+{"id": "5dd27cd67ba1-1", "text": "Returns:\n List[Document]: List of documents.\n \"\"\"\n page_ids = self._retrieve_page_ids()\n return list(self.load_page(page_id) for page_id in page_ids)\n def _retrieve_page_ids(\n self, query_dict: Dict[str, Any] = {\"page_size\": 100}\n ) -> List[str]:\n \"\"\"Get all the pages from a Notion database.\"\"\"\n pages: List[Dict[str, Any]] = []\n while True:\n data = self._request(\n DATABASE_URL.format(database_id=self.database_id),\n method=\"POST\",\n query_dict=query_dict,\n )\n pages.extend(data.get(\"results\"))\n if not data.get(\"has_more\"):\n break\n query_dict[\"start_cursor\"] = data.get(\"next_cursor\")\n page_ids = [page[\"id\"] for page in pages]\n return page_ids\n[docs] def load_page(self, page_id: str) -> Document:\n \"\"\"Read a page.\"\"\"\n data = self._request(PAGE_URL.format(page_id=page_id))\n # load properties as metadata\n metadata: Dict[str, Any] = {}\n for prop_name, prop_data in data[\"properties\"].items():\n prop_type = prop_data[\"type\"]\n if prop_type == \"rich_text\":\n value = (\n prop_data[\"rich_text\"][0][\"plain_text\"]\n if prop_data[\"rich_text\"]\n else None\n )\n elif prop_type == \"title\":\n value = (\n prop_data[\"title\"][0][\"plain_text\"] if prop_data[\"title\"] else None\n )\n elif prop_type == \"multi_select\":\n value = (\n [item[\"name\"] for item in prop_data[\"multi_select\"]]", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/notiondb.html"}
+{"id": "5dd27cd67ba1-2", "text": "[item[\"name\"] for item in prop_data[\"multi_select\"]]\n if prop_data[\"multi_select\"]\n else []\n )\n elif prop_type == \"url\":\n value = prop_data[\"url\"]\n else:\n value = None\n metadata[prop_name.lower()] = value\n metadata[\"id\"] = page_id\n return Document(page_content=self._load_blocks(page_id), metadata=metadata)\n def _load_blocks(self, block_id: str, num_tabs: int = 0) -> str:\n \"\"\"Read a block and its children.\"\"\"\n result_lines_arr: List[str] = []\n cur_block_id: str = block_id\n while cur_block_id:\n data = self._request(BLOCK_URL.format(block_id=cur_block_id))\n for result in data[\"results\"]:\n result_obj = result[result[\"type\"]]\n if \"rich_text\" not in result_obj:\n continue\n cur_result_text_arr: List[str] = []\n for rich_text in result_obj[\"rich_text\"]:\n if \"text\" in rich_text:\n cur_result_text_arr.append(\n \"\\t\" * num_tabs + rich_text[\"text\"][\"content\"]\n )\n if result[\"has_children\"]:\n children_text = self._load_blocks(\n result[\"id\"], num_tabs=num_tabs + 1\n )\n cur_result_text_arr.append(children_text)\n result_lines_arr.append(\"\\n\".join(cur_result_text_arr))\n cur_block_id = data.get(\"next_cursor\")\n return \"\\n\".join(result_lines_arr)\n def _request(\n self, url: str, method: str = \"GET\", query_dict: Dict[str, Any] = {}\n ) -> Any:\n res = requests.request(\n method,", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/notiondb.html"}
+{"id": "5dd27cd67ba1-3", "text": ") -> Any:\n res = requests.request(\n method,\n url,\n headers=self.headers,\n json=query_dict,\n timeout=self.request_timeout_sec,\n )\n res.raise_for_status()\n return res.json()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/notiondb.html"}
+{"id": "7672107d393e-0", "text": "Source code for langchain.document_loaders.diffbot\n\"\"\"Loader that uses Diffbot to load webpages in text format.\"\"\"\nimport logging\nfrom typing import Any, List\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\n[docs]class DiffbotLoader(BaseLoader):\n \"\"\"Loader that loads Diffbot file json.\"\"\"\n def __init__(\n self, api_token: str, urls: List[str], continue_on_failure: bool = True\n ):\n \"\"\"Initialize with API token, ids, and key.\"\"\"\n self.api_token = api_token\n self.urls = urls\n self.continue_on_failure = continue_on_failure\n def _diffbot_api_url(self, diffbot_api: str) -> str:\n return f\"https://api.diffbot.com/v3/{diffbot_api}\"\n def _get_diffbot_data(self, url: str) -> Any:\n \"\"\"Get Diffbot file from Diffbot REST API.\"\"\"\n # TODO: Add support for other Diffbot APIs\n diffbot_url = self._diffbot_api_url(\"article\")\n params = {\n \"token\": self.api_token,\n \"url\": url,\n }\n response = requests.get(diffbot_url, params=params, timeout=10)\n # TODO: handle non-ok errors\n return response.json() if response.ok else {}\n[docs] def load(self) -> List[Document]:\n \"\"\"Extract text from Diffbot on all the URLs and return Document instances\"\"\"\n docs: List[Document] = list()\n for url in self.urls:\n try:\n data = self._get_diffbot_data(url)\n text = data[\"objects\"][0][\"text\"] if \"objects\" in data else \"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/diffbot.html"}
+{"id": "7672107d393e-1", "text": "text = data[\"objects\"][0][\"text\"] if \"objects\" in data else \"\"\n metadata = {\"source\": url}\n docs.append(Document(page_content=text, metadata=metadata))\n except Exception as e:\n if self.continue_on_failure:\n logger.error(f\"Error fetching or processing {url}, exception: {e}\")\n else:\n raise e\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/diffbot.html"}
+{"id": "caedc7bd3cf2-0", "text": "Source code for langchain.document_loaders.git\nimport os\nfrom typing import Callable, List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class GitLoader(BaseLoader):\n \"\"\"Loads files from a Git repository into a list of documents.\n Repository can be local on disk available at `repo_path`,\n or remote at `clone_url` that will be cloned to `repo_path`.\n Currently supports only text files.\n Each document represents one file in the repository. The `path` points to\n the local Git repository, and the `branch` specifies the branch to load\n files from. By default, it loads from the `main` branch.\n \"\"\"\n def __init__(\n self,\n repo_path: str,\n clone_url: Optional[str] = None,\n branch: Optional[str] = \"main\",\n file_filter: Optional[Callable[[str], bool]] = None,\n ):\n self.repo_path = repo_path\n self.clone_url = clone_url\n self.branch = branch\n self.file_filter = file_filter\n[docs] def load(self) -> List[Document]:\n try:\n from git import Blob, Repo # type: ignore\n except ImportError as ex:\n raise ImportError(\n \"Could not import git python package. \"\n \"Please install it with `pip install GitPython`.\"\n ) from ex\n if not os.path.exists(self.repo_path) and self.clone_url is None:\n raise ValueError(f\"Path {self.repo_path} does not exist\")\n elif self.clone_url:\n repo = Repo.clone_from(self.clone_url, self.repo_path)\n repo.git.checkout(self.branch)\n else:\n repo = Repo(self.repo_path)", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/git.html"}
+{"id": "caedc7bd3cf2-1", "text": "else:\n repo = Repo(self.repo_path)\n repo.git.checkout(self.branch)\n docs: List[Document] = []\n for item in repo.tree().traverse():\n if not isinstance(item, Blob):\n continue\n file_path = os.path.join(self.repo_path, item.path)\n ignored_files = repo.ignored([file_path]) # type: ignore\n if len(ignored_files):\n continue\n # uses filter to skip files\n if self.file_filter and not self.file_filter(file_path):\n continue\n rel_file_path = os.path.relpath(file_path, self.repo_path)\n try:\n with open(file_path, \"rb\") as f:\n content = f.read()\n file_type = os.path.splitext(item.name)[1]\n # loads only text files\n try:\n text_content = content.decode(\"utf-8\")\n except UnicodeDecodeError:\n continue\n metadata = {\n \"source\": rel_file_path,\n \"file_path\": rel_file_path,\n \"file_name\": item.name,\n \"file_type\": file_type,\n }\n doc = Document(page_content=text_content, metadata=metadata)\n docs.append(doc)\n except Exception as e:\n print(f\"Error reading file {file_path}: {e}\")\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/git.html"}
+{"id": "bb937521acca-0", "text": "Source code for langchain.document_loaders.image_captions\n\"\"\"\nLoader that loads image captions\nBy default, the loader utilizes the pre-trained BLIP image captioning model.\nhttps://huggingface.co/Salesforce/blip-image-captioning-base\n\"\"\"\nfrom typing import Any, List, Tuple, Union\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class ImageCaptionLoader(BaseLoader):\n \"\"\"Loader that loads the captions of an image\"\"\"\n def __init__(\n self,\n path_images: Union[str, List[str]],\n blip_processor: str = \"Salesforce/blip-image-captioning-base\",\n blip_model: str = \"Salesforce/blip-image-captioning-base\",\n ):\n \"\"\"\n Initialize with a list of image paths\n \"\"\"\n if isinstance(path_images, str):\n self.image_paths = [path_images]\n else:\n self.image_paths = path_images\n self.blip_processor = blip_processor\n self.blip_model = blip_model\n[docs] def load(self) -> List[Document]:\n \"\"\"\n Load from a list of image files\n \"\"\"\n try:\n from transformers import BlipForConditionalGeneration, BlipProcessor\n except ImportError:\n raise ImportError(\n \"`transformers` package not found, please install with \"\n \"`pip install transformers`.\"\n )\n processor = BlipProcessor.from_pretrained(self.blip_processor)\n model = BlipForConditionalGeneration.from_pretrained(self.blip_model)\n results = []\n for path_image in self.image_paths:\n caption, metadata = self._get_captions_and_metadata(\n model=model, processor=processor, path_image=path_image\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/image_captions.html"}
+{"id": "bb937521acca-1", "text": "model=model, processor=processor, path_image=path_image\n )\n doc = Document(page_content=caption, metadata=metadata)\n results.append(doc)\n return results\n def _get_captions_and_metadata(\n self, model: Any, processor: Any, path_image: str\n ) -> Tuple[str, dict]:\n \"\"\"\n Helper function for getting the captions and metadata of an image\n \"\"\"\n try:\n from PIL import Image\n except ImportError:\n raise ImportError(\n \"`PIL` package not found, please install with `pip install pillow`\"\n )\n try:\n if path_image.startswith(\"http://\") or path_image.startswith(\"https://\"):\n image = Image.open(requests.get(path_image, stream=True).raw).convert(\n \"RGB\"\n )\n else:\n image = Image.open(path_image).convert(\"RGB\")\n except Exception:\n raise ValueError(f\"Could not get image data for {path_image}\")\n inputs = processor(image, \"an image of\", return_tensors=\"pt\")\n output = model.generate(**inputs)\n caption: str = processor.decode(output[0])\n metadata: dict = {\"image_path\": path_image}\n return caption, metadata\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/image_captions.html"}
+{"id": "4c02ba3c14cb-0", "text": "Source code for langchain.document_loaders.weather\n\"\"\"Simple reader that reads weather data from OpenWeatherMap API\"\"\"\nfrom __future__ import annotations\nfrom datetime import datetime\nfrom typing import Iterator, List, Optional, Sequence\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper\n[docs]class WeatherDataLoader(BaseLoader):\n \"\"\"Weather Reader.\n Reads the forecast & current weather of any location using OpenWeatherMap's free\n API. Checkout 'https://openweathermap.org/appid' for more on how to generate a free\n OpenWeatherMap API.\n \"\"\"\n def __init__(\n self,\n client: OpenWeatherMapAPIWrapper,\n places: Sequence[str],\n ) -> None:\n \"\"\"Initialize with parameters.\"\"\"\n super().__init__()\n self.client = client\n self.places = places\n[docs] @classmethod\n def from_params(\n cls, places: Sequence[str], *, openweathermap_api_key: Optional[str] = None\n ) -> WeatherDataLoader:\n client = OpenWeatherMapAPIWrapper(openweathermap_api_key=openweathermap_api_key)\n return cls(client, places)\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Lazily load weather data for the given locations.\"\"\"\n for place in self.places:\n metadata = {\"queried_at\": datetime.now()}\n content = self.client.run(place)\n yield Document(page_content=content, metadata=metadata)\n[docs] def load(\n self,\n ) -> List[Document]:\n \"\"\"Load weather data for the given locations.\"\"\"\n return list(self.lazy_load())\nBy Harrison Chase", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/weather.html"}
+{"id": "4c02ba3c14cb-1", "text": "return list(self.lazy_load())\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/weather.html"}
+{"id": "93920e4b1a96-0", "text": "Source code for langchain.document_loaders.gitbook\n\"\"\"Loader that loads GitBook.\"\"\"\nfrom typing import Any, List, Optional\nfrom urllib.parse import urljoin, urlparse\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.web_base import WebBaseLoader\n[docs]class GitbookLoader(WebBaseLoader):\n \"\"\"Load GitBook data.\n 1. load from either a single page, or\n 2. load all (relative) paths in the navbar.\n \"\"\"\n def __init__(\n self,\n web_page: str,\n load_all_paths: bool = False,\n base_url: Optional[str] = None,\n content_selector: str = \"main\",\n ):\n \"\"\"Initialize with web page and whether to load all paths.\n Args:\n web_page: The web page to load or the starting point from where\n relative paths are discovered.\n load_all_paths: If set to True, all relative paths in the navbar\n are loaded instead of only `web_page`.\n base_url: If `load_all_paths` is True, the relative paths are\n appended to this base url. Defaults to `web_page` if not set.\n \"\"\"\n self.base_url = base_url or web_page\n if self.base_url.endswith(\"/\"):\n self.base_url = self.base_url[:-1]\n if load_all_paths:\n # set web_path to the sitemap if we want to crawl all paths\n web_paths = f\"{self.base_url}/sitemap.xml\"\n else:\n web_paths = web_page\n super().__init__(web_paths)\n self.load_all_paths = load_all_paths\n self.content_selector = content_selector\n[docs] def load(self) -> List[Document]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/gitbook.html"}
+{"id": "93920e4b1a96-1", "text": "[docs] def load(self) -> List[Document]:\n \"\"\"Fetch text from one single GitBook page.\"\"\"\n if self.load_all_paths:\n soup_info = self.scrape()\n relative_paths = self._get_paths(soup_info)\n documents = []\n for path in relative_paths:\n url = urljoin(self.base_url, path)\n print(f\"Fetching text from {url}\")\n soup_info = self._scrape(url)\n documents.append(self._get_document(soup_info, url))\n return [d for d in documents if d]\n else:\n soup_info = self.scrape()\n documents = [self._get_document(soup_info, self.web_path)]\n return [d for d in documents if d]\n def _get_document(\n self, soup: Any, custom_url: Optional[str] = None\n ) -> Optional[Document]:\n \"\"\"Fetch content from page and return Document.\"\"\"\n page_content_raw = soup.find(self.content_selector)\n if not page_content_raw:\n return None\n content = page_content_raw.get_text(separator=\"\\n\").strip()\n title_if_exists = page_content_raw.find(\"h1\")\n title = title_if_exists.text if title_if_exists else \"\"\n metadata = {\"source\": custom_url or self.web_path, \"title\": title}\n return Document(page_content=content, metadata=metadata)\n def _get_paths(self, soup: Any) -> List[str]:\n \"\"\"Fetch all relative paths in the navbar.\"\"\"\n return [urlparse(loc.text).path for loc in soup.find_all(\"loc\")]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/gitbook.html"}
+{"id": "6dc3135f9bac-0", "text": "Source code for langchain.document_loaders.fauna\nfrom typing import Iterator, List, Optional, Sequence\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class FaunaLoader(BaseLoader):\n \"\"\"\n Attributes:\n query (str): The FQL query string to execute.\n page_content_field (str): The field that contains the content of each page.\n secret (str): The secret key for authenticating to FaunaDB.\n metadata_fields (Optional[Sequence[str]]):\n Optional list of field names to include in metadata.\n \"\"\"\n def __init__(\n self,\n query: str,\n page_content_field: str,\n secret: str,\n metadata_fields: Optional[Sequence[str]] = None,\n ):\n self.query = query\n self.page_content_field = page_content_field\n self.secret = secret\n self.metadata_fields = metadata_fields\n[docs] def load(self) -> List[Document]:\n return list(self.lazy_load())\n[docs] def lazy_load(self) -> Iterator[Document]:\n try:\n from fauna import Page, fql\n from fauna.client import Client\n from fauna.encoding import QuerySuccess\n except ImportError:\n raise ImportError(\n \"Could not import fauna python package. \"\n \"Please install it with `pip install fauna`.\"\n )\n # Create Fauna Client\n client = Client(secret=self.secret)\n # Run FQL Query\n response: QuerySuccess = client.query(fql(self.query))\n page: Page = response.data\n for result in page:\n if result is not None:\n document_dict = dict(result.items())\n page_content = \"\"\n for key, value in document_dict.items():", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/fauna.html"}
+{"id": "6dc3135f9bac-1", "text": "page_content = \"\"\n for key, value in document_dict.items():\n if key == self.page_content_field:\n page_content = value\n document: Document = Document(\n page_content=page_content,\n metadata={\"id\": result.id, \"ts\": result.ts},\n )\n yield document\n if page.after is not None:\n yield Document(\n page_content=\"Next Page Exists\",\n metadata={\"after\": page.after},\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/fauna.html"}
+{"id": "16d971fe4631-0", "text": "Source code for langchain.document_loaders.text\nimport logging\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.helpers import detect_file_encodings\nlogger = logging.getLogger(__name__)\n[docs]class TextLoader(BaseLoader):\n \"\"\"Load text files.\n Args:\n file_path: Path to the file to load.\n encoding: File encoding to use. If `None`, the file will be loaded\n with the default system encoding.\n autodetect_encoding: Whether to try to autodetect the file encoding\n if the specified encoding fails.\n \"\"\"\n def __init__(\n self,\n file_path: str,\n encoding: Optional[str] = None,\n autodetect_encoding: bool = False,\n ):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n self.encoding = encoding\n self.autodetect_encoding = autodetect_encoding\n[docs] def load(self) -> List[Document]:\n \"\"\"Load from file path.\"\"\"\n text = \"\"\n try:\n with open(self.file_path, encoding=self.encoding) as f:\n text = f.read()\n except UnicodeDecodeError as e:\n if self.autodetect_encoding:\n detected_encodings = detect_file_encodings(self.file_path)\n for encoding in detected_encodings:\n logger.debug(\"Trying encoding: \", encoding.encoding)\n try:\n with open(self.file_path, encoding=encoding.encoding) as f:\n text = f.read()\n break\n except UnicodeDecodeError:\n continue\n else:\n raise RuntimeError(f\"Error loading {self.file_path}\") from e\n except Exception as e:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/text.html"}
+{"id": "16d971fe4631-1", "text": "except Exception as e:\n raise RuntimeError(f\"Error loading {self.file_path}\") from e\n metadata = {\"source\": self.file_path}\n return [Document(page_content=text, metadata=metadata)]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/text.html"}
+{"id": "bc9622e975f5-0", "text": "Source code for langchain.document_loaders.apify_dataset\n\"\"\"Logic for loading documents from Apify datasets.\"\"\"\nfrom typing import Any, Callable, Dict, List\nfrom pydantic import BaseModel, root_validator\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class ApifyDatasetLoader(BaseLoader, BaseModel):\n \"\"\"Logic for loading documents from Apify datasets.\"\"\"\n apify_client: Any\n dataset_id: str\n \"\"\"The ID of the dataset on the Apify platform.\"\"\"\n dataset_mapping_function: Callable[[Dict], Document]\n \"\"\"A custom function that takes a single dictionary (an Apify dataset item)\n and converts it to an instance of the Document class.\"\"\"\n def __init__(\n self, dataset_id: str, dataset_mapping_function: Callable[[Dict], Document]\n ):\n \"\"\"Initialize the loader with an Apify dataset ID and a mapping function.\n Args:\n dataset_id (str): The ID of the dataset on the Apify platform.\n dataset_mapping_function (Callable): A function that takes a single\n dictionary (an Apify dataset item) and converts it to an instance\n of the Document class.\n \"\"\"\n super().__init__(\n dataset_id=dataset_id, dataset_mapping_function=dataset_mapping_function\n )\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate environment.\"\"\"\n try:\n from apify_client import ApifyClient\n values[\"apify_client\"] = ApifyClient()\n except ImportError:\n raise ImportError(\n \"Could not import apify-client Python package. \"\n \"Please install it with `pip install apify-client`.\"\n )\n return values", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/apify_dataset.html"}
+{"id": "bc9622e975f5-1", "text": ")\n return values\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n dataset_items = self.apify_client.dataset(self.dataset_id).list_items().items\n return list(map(self.dataset_mapping_function, dataset_items))\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/apify_dataset.html"}
+{"id": "170a12b28bed-0", "text": "Source code for langchain.document_loaders.stripe\n\"\"\"Loader that fetches data from Stripe\"\"\"\nimport json\nimport urllib.request\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import get_from_env, stringify_dict\nSTRIPE_ENDPOINTS = {\n \"balance_transactions\": \"https://api.stripe.com/v1/balance_transactions\",\n \"charges\": \"https://api.stripe.com/v1/charges\",\n \"customers\": \"https://api.stripe.com/v1/customers\",\n \"events\": \"https://api.stripe.com/v1/events\",\n \"refunds\": \"https://api.stripe.com/v1/refunds\",\n \"disputes\": \"https://api.stripe.com/v1/disputes\",\n}\n[docs]class StripeLoader(BaseLoader):\n def __init__(self, resource: str, access_token: Optional[str] = None) -> None:\n self.resource = resource\n access_token = access_token or get_from_env(\n \"access_token\", \"STRIPE_ACCESS_TOKEN\"\n )\n self.headers = {\"Authorization\": f\"Bearer {access_token}\"}\n def _make_request(self, url: str) -> List[Document]:\n request = urllib.request.Request(url, headers=self.headers)\n with urllib.request.urlopen(request) as response:\n json_data = json.loads(response.read().decode())\n text = stringify_dict(json_data)\n metadata = {\"source\": url}\n return [Document(page_content=text, metadata=metadata)]\n def _get_resource(self) -> List[Document]:\n endpoint = STRIPE_ENDPOINTS.get(self.resource)\n if endpoint is None:\n return []\n return self._make_request(endpoint)", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/stripe.html"}
+{"id": "170a12b28bed-1", "text": "if endpoint is None:\n return []\n return self._make_request(endpoint)\n[docs] def load(self) -> List[Document]:\n return self._get_resource()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/stripe.html"}
+{"id": "8dd1a45e48e6-0", "text": "Source code for langchain.document_loaders.airbyte_json\n\"\"\"Loader that loads local airbyte json files.\"\"\"\nimport json\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import stringify_dict\n[docs]class AirbyteJSONLoader(BaseLoader):\n \"\"\"Loader that loads local airbyte json files.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path. This should start with '/tmp/airbyte_local/'.\"\"\"\n self.file_path = file_path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n text = \"\"\n for line in open(self.file_path, \"r\"):\n data = json.loads(line)[\"_airbyte_data\"]\n text += stringify_dict(data)\n metadata = {\"source\": self.file_path}\n return [Document(page_content=text, metadata=metadata)]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/airbyte_json.html"}
+{"id": "33e8c73efc98-0", "text": "Source code for langchain.document_loaders.blackboard\n\"\"\"Loader that loads all documents from a blackboard course.\"\"\"\nimport contextlib\nimport re\nfrom pathlib import Path\nfrom typing import Any, List, Optional, Tuple\nfrom urllib.parse import unquote\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.directory import DirectoryLoader\nfrom langchain.document_loaders.pdf import PyPDFLoader\nfrom langchain.document_loaders.web_base import WebBaseLoader\n[docs]class BlackboardLoader(WebBaseLoader):\n \"\"\"Loader that loads all documents from a Blackboard course.\n This loader is not compatible with all Blackboard courses. It is only\n compatible with courses that use the new Blackboard interface.\n To use this loader, you must have the BbRouter cookie. You can get this\n cookie by logging into the course and then copying the value of the\n BbRouter cookie from the browser's developer tools.\n Example:\n .. code-block:: python\n from langchain.document_loaders import BlackboardLoader\n loader = BlackboardLoader(\n blackboard_course_url=\"https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1\",\n bbrouter=\"expires:12345...\",\n )\n documents = loader.load()\n \"\"\"\n base_url: str\n folder_path: str\n load_all_recursively: bool\n def __init__(\n self,\n blackboard_course_url: str,\n bbrouter: str,\n load_all_recursively: bool = True,\n basic_auth: Optional[Tuple[str, str]] = None,\n cookies: Optional[dict] = None,\n ):\n \"\"\"Initialize with blackboard course url.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"}
+{"id": "33e8c73efc98-1", "text": "):\n \"\"\"Initialize with blackboard course url.\n The BbRouter cookie is required for most blackboard courses.\n Args:\n blackboard_course_url: Blackboard course url.\n bbrouter: BbRouter cookie.\n load_all_recursively: If True, load all documents recursively.\n basic_auth: Basic auth credentials.\n cookies: Cookies.\n Raises:\n ValueError: If blackboard course url is invalid.\n \"\"\"\n super().__init__(blackboard_course_url)\n # Get base url\n try:\n self.base_url = blackboard_course_url.split(\"/webapps/blackboard\")[0]\n except IndexError:\n raise ValueError(\n \"Invalid blackboard course url. \"\n \"Please provide a url that starts with \"\n \"https:///webapps/blackboard\"\n )\n if basic_auth is not None:\n self.session.auth = basic_auth\n # Combine cookies\n if cookies is None:\n cookies = {}\n cookies.update({\"BbRouter\": bbrouter})\n self.session.cookies.update(cookies)\n self.load_all_recursively = load_all_recursively\n self.check_bs4()\n[docs] def check_bs4(self) -> None:\n \"\"\"Check if BeautifulSoup4 is installed.\n Raises:\n ImportError: If BeautifulSoup4 is not installed.\n \"\"\"\n try:\n import bs4 # noqa: F401\n except ImportError:\n raise ImportError(\n \"BeautifulSoup4 is required for BlackboardLoader. \"\n \"Please install it with `pip install beautifulsoup4`.\"\n )\n[docs] def load(self) -> List[Document]:\n \"\"\"Load data into document objects.\n Returns:\n List of documents.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"}
+{"id": "33e8c73efc98-2", "text": "\"\"\"Load data into document objects.\n Returns:\n List of documents.\n \"\"\"\n if self.load_all_recursively:\n soup_info = self.scrape()\n self.folder_path = self._get_folder_path(soup_info)\n relative_paths = self._get_paths(soup_info)\n documents = []\n for path in relative_paths:\n url = self.base_url + path\n print(f\"Fetching documents from {url}\")\n soup_info = self._scrape(url)\n with contextlib.suppress(ValueError):\n documents.extend(self._get_documents(soup_info))\n return documents\n else:\n print(f\"Fetching documents from {self.web_path}\")\n soup_info = self.scrape()\n self.folder_path = self._get_folder_path(soup_info)\n return self._get_documents(soup_info)\n def _get_folder_path(self, soup: Any) -> str:\n \"\"\"Get the folder path to save the documents in.\n Args:\n soup: BeautifulSoup4 soup object.\n Returns:\n Folder path.\n \"\"\"\n # Get the course name\n course_name = soup.find(\"span\", {\"id\": \"crumb_1\"})\n if course_name is None:\n raise ValueError(\"No course name found.\")\n course_name = course_name.text.strip()\n # Prepare the folder path\n course_name_clean = (\n unquote(course_name)\n .replace(\" \", \"_\")\n .replace(\"/\", \"_\")\n .replace(\":\", \"_\")\n .replace(\",\", \"_\")\n .replace(\"?\", \"_\")\n .replace(\"'\", \"_\")\n .replace(\"!\", \"_\")\n .replace('\"', \"_\")\n )\n # Get the folder path\n folder_path = Path(\".\") / course_name_clean", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"}
+{"id": "33e8c73efc98-3", "text": "# Get the folder path\n folder_path = Path(\".\") / course_name_clean\n return str(folder_path)\n def _get_documents(self, soup: Any) -> List[Document]:\n \"\"\"Fetch content from page and return Documents.\n Args:\n soup: BeautifulSoup4 soup object.\n Returns:\n List of documents.\n \"\"\"\n attachments = self._get_attachments(soup)\n self._download_attachments(attachments)\n documents = self._load_documents()\n return documents\n def _get_attachments(self, soup: Any) -> List[str]:\n \"\"\"Get all attachments from a page.\n Args:\n soup: BeautifulSoup4 soup object.\n Returns:\n List of attachments.\n \"\"\"\n from bs4 import BeautifulSoup, Tag\n # Get content list\n content_list = soup.find(\"ul\", {\"class\": \"contentList\"})\n if content_list is None:\n raise ValueError(\"No content list found.\")\n content_list: BeautifulSoup # type: ignore\n # Get all attachments\n attachments = []\n for attachment in content_list.find_all(\"ul\", {\"class\": \"attachments\"}):\n attachment: Tag # type: ignore\n for link in attachment.find_all(\"a\"):\n link: Tag # type: ignore\n href = link.get(\"href\")\n # Only add if href is not None and does not start with #\n if href is not None and not href.startswith(\"#\"):\n attachments.append(href)\n return attachments\n def _download_attachments(self, attachments: List[str]) -> None:\n \"\"\"Download all attachments.\n Args:\n attachments: List of attachments.\n \"\"\"\n # Make sure the folder exists\n Path(self.folder_path).mkdir(parents=True, exist_ok=True)", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"}
+{"id": "33e8c73efc98-4", "text": "Path(self.folder_path).mkdir(parents=True, exist_ok=True)\n # Download all attachments\n for attachment in attachments:\n self.download(attachment)\n def _load_documents(self) -> List[Document]:\n \"\"\"Load all documents in the folder.\n Returns:\n List of documents.\n \"\"\"\n # Create the document loader\n loader = DirectoryLoader(\n path=self.folder_path, glob=\"*.pdf\", loader_cls=PyPDFLoader # type: ignore\n )\n # Load the documents\n documents = loader.load()\n # Return all documents\n return documents\n def _get_paths(self, soup: Any) -> List[str]:\n \"\"\"Get all relative paths in the navbar.\"\"\"\n relative_paths = []\n course_menu = soup.find(\"ul\", {\"class\": \"courseMenu\"})\n if course_menu is None:\n raise ValueError(\"No course menu found.\")\n for link in course_menu.find_all(\"a\"):\n href = link.get(\"href\")\n if href is not None and href.startswith(\"/\"):\n relative_paths.append(href)\n return relative_paths\n[docs] def download(self, path: str) -> None:\n \"\"\"Download a file from a url.\n Args:\n path: Path to the file.\n \"\"\"\n # Get the file content\n response = self.session.get(self.base_url + path, allow_redirects=True)\n # Get the filename\n filename = self.parse_filename(response.url)\n # Write the file to disk\n with open(Path(self.folder_path) / filename, \"wb\") as f:\n f.write(response.content)\n[docs] def parse_filename(self, url: str) -> str:\n \"\"\"Parse the filename from a url.\n Args:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"}
+{"id": "33e8c73efc98-5", "text": "\"\"\"Parse the filename from a url.\n Args:\n url: Url to parse the filename from.\n Returns:\n The filename.\n \"\"\"\n if (url_path := Path(url)) and url_path.suffix == \".pdf\":\n return url_path.name\n else:\n return self._parse_filename_from_url(url)\n def _parse_filename_from_url(self, url: str) -> str:\n \"\"\"Parse the filename from a url.\n Args:\n url: Url to parse the filename from.\n Returns:\n The filename.\n Raises:\n ValueError: If the filename could not be parsed.\n \"\"\"\n filename_matches = re.search(r\"filename%2A%3DUTF-8%27%27(.+)\", url)\n if filename_matches:\n filename = filename_matches.group(1)\n else:\n raise ValueError(f\"Could not parse filename from {url}\")\n if \".pdf\" not in filename:\n raise ValueError(f\"Incorrect file type: {filename}\")\n filename = filename.split(\".pdf\")[0] + \".pdf\"\n filename = unquote(filename)\n filename = filename.replace(\"%20\", \" \")\n return filename\nif __name__ == \"__main__\":\n loader = BlackboardLoader(\n \"https:///webapps/blackboard/content/listContent.jsp?course_id=__1&content_id=__1&mode=reset\",\n \"\",\n load_all_recursively=True,\n )\n documents = loader.load()\n print(f\"Loaded {len(documents)} pages of PDFs from {loader.web_path}\")\nBy Harrison Chase", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"}
+{"id": "33e8c73efc98-6", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"}
+{"id": "bc36f5400b5c-0", "text": "Source code for langchain.document_loaders.url\n\"\"\"Loader that uses unstructured to load HTML files.\"\"\"\nimport logging\nfrom typing import Any, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\n[docs]class UnstructuredURLLoader(BaseLoader):\n \"\"\"Loader that uses unstructured to load HTML files.\"\"\"\n def __init__(\n self,\n urls: List[str],\n continue_on_failure: bool = True,\n mode: str = \"single\",\n **unstructured_kwargs: Any,\n ):\n \"\"\"Initialize with file path.\"\"\"\n try:\n import unstructured # noqa:F401\n from unstructured.__version__ import __version__ as __unstructured_version__\n self.__version = __unstructured_version__\n except ImportError:\n raise ValueError(\n \"unstructured package not found, please install it with \"\n \"`pip install unstructured`\"\n )\n self._validate_mode(mode)\n self.mode = mode\n headers = unstructured_kwargs.pop(\"headers\", {})\n if len(headers.keys()) != 0:\n warn_about_headers = False\n if self.__is_non_html_available():\n warn_about_headers = not self.__is_headers_available_for_non_html()\n else:\n warn_about_headers = not self.__is_headers_available_for_html()\n if warn_about_headers:\n logger.warning(\n \"You are using an old version of unstructured. \"\n \"The headers parameter is ignored\"\n )\n self.urls = urls\n self.continue_on_failure = continue_on_failure\n self.headers = headers\n self.unstructured_kwargs = unstructured_kwargs\n def _validate_mode(self, mode: str) -> None:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/url.html"}
+{"id": "bc36f5400b5c-1", "text": "def _validate_mode(self, mode: str) -> None:\n _valid_modes = {\"single\", \"elements\"}\n if mode not in _valid_modes:\n raise ValueError(\n f\"Got {mode} for `mode`, but should be one of `{_valid_modes}`\"\n )\n def __is_headers_available_for_html(self) -> bool:\n _unstructured_version = self.__version.split(\"-\")[0]\n unstructured_version = tuple([int(x) for x in _unstructured_version.split(\".\")])\n return unstructured_version >= (0, 5, 7)\n def __is_headers_available_for_non_html(self) -> bool:\n _unstructured_version = self.__version.split(\"-\")[0]\n unstructured_version = tuple([int(x) for x in _unstructured_version.split(\".\")])\n return unstructured_version >= (0, 5, 13)\n def __is_non_html_available(self) -> bool:\n _unstructured_version = self.__version.split(\"-\")[0]\n unstructured_version = tuple([int(x) for x in _unstructured_version.split(\".\")])\n return unstructured_version >= (0, 5, 12)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n from unstructured.partition.auto import partition\n from unstructured.partition.html import partition_html\n docs: List[Document] = list()\n for url in self.urls:\n try:\n if self.__is_non_html_available():\n if self.__is_headers_available_for_non_html():\n elements = partition(\n url=url, headers=self.headers, **self.unstructured_kwargs\n )\n else:\n elements = partition(url=url, **self.unstructured_kwargs)\n else:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/url.html"}
+{"id": "bc36f5400b5c-2", "text": "elements = partition(url=url, **self.unstructured_kwargs)\n else:\n if self.__is_headers_available_for_html():\n elements = partition_html(\n url=url, headers=self.headers, **self.unstructured_kwargs\n )\n else:\n elements = partition_html(url=url, **self.unstructured_kwargs)\n except Exception as e:\n if self.continue_on_failure:\n logger.error(f\"Error fetching or processing {url}, exeption: {e}\")\n continue\n else:\n raise e\n if self.mode == \"single\":\n text = \"\\n\\n\".join([str(el) for el in elements])\n metadata = {\"source\": url}\n docs.append(Document(page_content=text, metadata=metadata))\n elif self.mode == \"elements\":\n for element in elements:\n metadata = element.metadata.to_dict()\n metadata[\"category\"] = element.category\n docs.append(Document(page_content=str(element), metadata=metadata))\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/url.html"}
+{"id": "44a25d988b52-0", "text": "Source code for langchain.document_loaders.gutenberg\n\"\"\"Loader that loads .txt web files.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class GutenbergLoader(BaseLoader):\n \"\"\"Loader that uses urllib to load .txt web files.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n if not file_path.startswith(\"https://www.gutenberg.org\"):\n raise ValueError(\"file path must start with 'https://www.gutenberg.org'\")\n if not file_path.endswith(\".txt\"):\n raise ValueError(\"file path must end with '.txt'\")\n self.file_path = file_path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n from urllib.request import urlopen\n elements = urlopen(self.file_path)\n text = \"\\n\\n\".join([str(el.decode(\"utf-8-sig\")) for el in elements])\n metadata = {\"source\": self.file_path}\n return [Document(page_content=text, metadata=metadata)]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/gutenberg.html"}
+{"id": "5f451bb58bca-0", "text": "Source code for langchain.document_loaders.json_loader\n\"\"\"Loader that loads data from JSON.\"\"\"\nimport json\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, List, Optional, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class JSONLoader(BaseLoader):\n \"\"\"Loads a JSON file and references a jq schema provided to load the text into\n documents.\n Example:\n [{\"text\": ...}, {\"text\": ...}, {\"text\": ...}] -> schema = .[].text\n {\"key\": [{\"text\": ...}, {\"text\": ...}, {\"text\": ...}]} -> schema = .key[].text\n [\"\", \"\", \"\"] -> schema = .[]\n \"\"\"\n def __init__(\n self,\n file_path: Union[str, Path],\n jq_schema: str,\n content_key: Optional[str] = None,\n metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None,\n text_content: bool = True,\n ):\n \"\"\"Initialize the JSONLoader.\n Args:\n file_path (Union[str, Path]): The path to the JSON file.\n jq_schema (str): The jq schema to use to extract the data or text from\n the JSON.\n content_key (str): The key to use to extract the content from the JSON if\n the jq_schema results to a list of objects (dict).\n metadata_func (Callable[Dict, Dict]): A function that takes in the JSON\n object extracted by the jq_schema and the default metadata and returns\n a dict of the updated metadata.\n text_content (bool): Boolean flag to indicates whether the content is in\n string format, default to True\n \"\"\"\n try:\n import jq # noqa:F401", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/json_loader.html"}
+{"id": "5f451bb58bca-1", "text": "\"\"\"\n try:\n import jq # noqa:F401\n except ImportError:\n raise ImportError(\n \"jq package not found, please install it with `pip install jq`\"\n )\n self.file_path = Path(file_path).resolve()\n self._jq_schema = jq.compile(jq_schema)\n self._content_key = content_key\n self._metadata_func = metadata_func\n self._text_content = text_content\n[docs] def load(self) -> List[Document]:\n \"\"\"Load and return documents from the JSON file.\"\"\"\n data = self._jq_schema.input(json.loads(self.file_path.read_text()))\n # Perform some validation\n # This is not a perfect validation, but it should catch most cases\n # and prevent the user from getting a cryptic error later on.\n if self._content_key is not None:\n self._validate_content_key(data)\n docs = []\n for i, sample in enumerate(data, 1):\n metadata = dict(\n source=str(self.file_path),\n seq_num=i,\n )\n text = self._get_text(sample=sample, metadata=metadata)\n docs.append(Document(page_content=text, metadata=metadata))\n return docs\n def _get_text(self, sample: Any, metadata: dict) -> str:\n \"\"\"Convert sample to string format\"\"\"\n if self._content_key is not None:\n content = sample.get(self._content_key)\n if self._metadata_func is not None:\n # We pass in the metadata dict to the metadata_func\n # so that the user can customize the default metadata\n # based on the content of the JSON object.\n metadata = self._metadata_func(sample, metadata)\n else:\n content = sample", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/json_loader.html"}
+{"id": "5f451bb58bca-2", "text": "else:\n content = sample\n if self._text_content and not isinstance(content, str):\n raise ValueError(\n f\"Expected page_content is string, got {type(content)} instead. \\\n Set `text_content=False` if the desired input for \\\n `page_content` is not a string\"\n )\n # In case the text is None, set it to an empty string\n elif isinstance(content, str):\n return content\n elif isinstance(content, dict):\n return json.dumps(content) if content else \"\"\n else:\n return str(content) if content is not None else \"\"\n def _validate_content_key(self, data: Any) -> None:\n \"\"\"Check if content key is valid\"\"\"\n sample = data.first()\n if not isinstance(sample, dict):\n raise ValueError(\n f\"Expected the jq schema to result in a list of objects (dict), \\\n so sample must be a dict but got `{type(sample)}`\"\n )\n if sample.get(self._content_key) is None:\n raise ValueError(\n f\"Expected the jq schema to result in a list of objects (dict) \\\n with the key `{self._content_key}`\"\n )\n if self._metadata_func is not None:\n sample_metadata = self._metadata_func(sample, {})\n if not isinstance(sample_metadata, dict):\n raise ValueError(\n f\"Expected the metadata_func to return a dict but got \\\n `{type(sample_metadata)}`\"\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/json_loader.html"}
+{"id": "97d16573aaaa-0", "text": "Source code for langchain.document_loaders.pdf\n\"\"\"Loader that loads PDF files.\"\"\"\nimport json\nimport logging\nimport os\nimport tempfile\nimport time\nfrom abc import ABC\nfrom io import StringIO\nfrom pathlib import Path\nfrom typing import Any, Iterator, List, Mapping, Optional\nfrom urllib.parse import urlparse\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.blob_loaders import Blob\nfrom langchain.document_loaders.parsers.pdf import (\n PDFMinerParser,\n PDFPlumberParser,\n PyMuPDFParser,\n PyPDFium2Parser,\n PyPDFParser,\n)\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__file__)\n[docs]class UnstructuredPDFLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load PDF files.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.partition.pdf import partition_pdf\n return partition_pdf(filename=self.file_path, **self.unstructured_kwargs)\nclass BasePDFLoader(BaseLoader, ABC):\n \"\"\"Base loader class for PDF files.\n Defaults to check for local file, but if the file is a web path, it will download it\n to a temporary file, and use that, then clean up the temporary file after completion\n \"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n self.web_path = None\n if \"~\" in self.file_path:\n self.file_path = os.path.expanduser(self.file_path)\n # If the file is a web path, download it to a temporary file, and use that", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"}
+{"id": "97d16573aaaa-1", "text": "if not os.path.isfile(self.file_path) and self._is_valid_url(self.file_path):\n r = requests.get(self.file_path)\n if r.status_code != 200:\n raise ValueError(\n \"Check the url of your file; returned status code %s\"\n % r.status_code\n )\n self.web_path = self.file_path\n self.temp_file = tempfile.NamedTemporaryFile()\n self.temp_file.write(r.content)\n self.file_path = self.temp_file.name\n elif not os.path.isfile(self.file_path):\n raise ValueError(\"File path %s is not a valid file or url\" % self.file_path)\n def __del__(self) -> None:\n if hasattr(self, \"temp_file\"):\n self.temp_file.close()\n @staticmethod\n def _is_valid_url(url: str) -> bool:\n \"\"\"Check if the url is valid.\"\"\"\n parsed = urlparse(url)\n return bool(parsed.netloc) and bool(parsed.scheme)\n @property\n def source(self) -> str:\n return self.web_path if self.web_path is not None else self.file_path\n[docs]class OnlinePDFLoader(BasePDFLoader):\n \"\"\"Loader that loads online PDFs.\"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n loader = UnstructuredPDFLoader(str(self.file_path))\n return loader.load()\n[docs]class PyPDFLoader(BasePDFLoader):\n \"\"\"Loads a PDF with pypdf and chunks at character level.\n Loader also stores page numbers in metadatas.\n \"\"\"\n def __init__(self, file_path: str) -> None:\n \"\"\"Initialize with file path.\"\"\"\n try:\n import pypdf # noqa:F401\n except ImportError:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"}
+{"id": "97d16573aaaa-2", "text": "try:\n import pypdf # noqa:F401\n except ImportError:\n raise ImportError(\n \"pypdf package not found, please install it with \" \"`pip install pypdf`\"\n )\n self.parser = PyPDFParser()\n super().__init__(file_path)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load given path as pages.\"\"\"\n return list(self.lazy_load())\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Lazy load given path as pages.\"\"\"\n blob = Blob.from_path(self.file_path)\n yield from self.parser.parse(blob)\n[docs]class PyPDFium2Loader(BasePDFLoader):\n \"\"\"Loads a PDF with pypdfium2 and chunks at character level.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n super().__init__(file_path)\n self.parser = PyPDFium2Parser()\n[docs] def load(self) -> List[Document]:\n \"\"\"Load given path as pages.\"\"\"\n return list(self.lazy_load())\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Lazy load given path as pages.\"\"\"\n blob = Blob.from_path(self.file_path)\n yield from self.parser.parse(blob)\n[docs]class PyPDFDirectoryLoader(BaseLoader):\n \"\"\"Loads a directory with PDF files with pypdf and chunks at character level.\n Loader also stores page numbers in metadatas.\n \"\"\"\n def __init__(\n self,\n path: str,\n glob: str = \"**/[!.]*.pdf\",\n silent_errors: bool = False,\n load_hidden: bool = False,", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"}
+{"id": "97d16573aaaa-3", "text": "silent_errors: bool = False,\n load_hidden: bool = False,\n recursive: bool = False,\n ):\n self.path = path\n self.glob = glob\n self.load_hidden = load_hidden\n self.recursive = recursive\n self.silent_errors = silent_errors\n @staticmethod\n def _is_visible(path: Path) -> bool:\n return not any(part.startswith(\".\") for part in path.parts)\n[docs] def load(self) -> List[Document]:\n p = Path(self.path)\n docs = []\n items = p.rglob(self.glob) if self.recursive else p.glob(self.glob)\n for i in items:\n if i.is_file():\n if self._is_visible(i.relative_to(p)) or self.load_hidden:\n try:\n loader = PyPDFLoader(str(i))\n sub_docs = loader.load()\n for doc in sub_docs:\n doc.metadata[\"source\"] = str(i)\n docs.extend(sub_docs)\n except Exception as e:\n if self.silent_errors:\n logger.warning(e)\n else:\n raise e\n return docs\n[docs]class PDFMinerLoader(BasePDFLoader):\n \"\"\"Loader that uses PDFMiner to load PDF files.\"\"\"\n def __init__(self, file_path: str) -> None:\n \"\"\"Initialize with file path.\"\"\"\n try:\n from pdfminer.high_level import extract_text # noqa:F401\n except ImportError:\n raise ImportError(\n \"`pdfminer` package not found, please install it with \"\n \"`pip install pdfminer.six`\"\n )\n super().__init__(file_path)\n self.parser = PDFMinerParser()\n[docs] def load(self) -> List[Document]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"}
+{"id": "97d16573aaaa-4", "text": "[docs] def load(self) -> List[Document]:\n \"\"\"Eagerly load the content.\"\"\"\n return list(self.lazy_load())\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Lazily lod documents.\"\"\"\n blob = Blob.from_path(self.file_path)\n yield from self.parser.parse(blob)\n[docs]class PDFMinerPDFasHTMLLoader(BasePDFLoader):\n \"\"\"Loader that uses PDFMiner to load PDF files as HTML content.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n try:\n from pdfminer.high_level import extract_text_to_fp # noqa:F401\n except ImportError:\n raise ImportError(\n \"`pdfminer` package not found, please install it with \"\n \"`pip install pdfminer.six`\"\n )\n super().__init__(file_path)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n from pdfminer.high_level import extract_text_to_fp\n from pdfminer.layout import LAParams\n from pdfminer.utils import open_filename\n output_string = StringIO()\n with open_filename(self.file_path, \"rb\") as fp:\n extract_text_to_fp(\n fp, # type: ignore[arg-type]\n output_string,\n codec=\"\",\n laparams=LAParams(),\n output_type=\"html\",\n )\n metadata = {\"source\": self.file_path}\n return [Document(page_content=output_string.getvalue(), metadata=metadata)]\n[docs]class PyMuPDFLoader(BasePDFLoader):\n \"\"\"Loader that uses PyMuPDF to load PDF files.\"\"\"\n def __init__(self, file_path: str) -> None:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"}
+{"id": "97d16573aaaa-5", "text": "def __init__(self, file_path: str) -> None:\n \"\"\"Initialize with file path.\"\"\"\n try:\n import fitz # noqa:F401\n except ImportError:\n raise ImportError(\n \"`PyMuPDF` package not found, please install it with \"\n \"`pip install pymupdf`\"\n )\n super().__init__(file_path)\n[docs] def load(self, **kwargs: Optional[Any]) -> List[Document]:\n \"\"\"Load file.\"\"\"\n parser = PyMuPDFParser(text_kwargs=kwargs)\n blob = Blob.from_path(self.file_path)\n return parser.parse(blob)\n# MathpixPDFLoader implementation taken largely from Daniel Gross's:\n# https://gist.github.com/danielgross/3ab4104e14faccc12b49200843adab21\n[docs]class MathpixPDFLoader(BasePDFLoader):\n def __init__(\n self,\n file_path: str,\n processed_file_format: str = \"mmd\",\n max_wait_time_seconds: int = 500,\n should_clean_pdf: bool = False,\n **kwargs: Any,\n ) -> None:\n super().__init__(file_path)\n self.mathpix_api_key = get_from_dict_or_env(\n kwargs, \"mathpix_api_key\", \"MATHPIX_API_KEY\"\n )\n self.mathpix_api_id = get_from_dict_or_env(\n kwargs, \"mathpix_api_id\", \"MATHPIX_API_ID\"\n )\n self.processed_file_format = processed_file_format\n self.max_wait_time_seconds = max_wait_time_seconds\n self.should_clean_pdf = should_clean_pdf\n @property\n def headers(self) -> dict:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"}
+{"id": "97d16573aaaa-6", "text": "@property\n def headers(self) -> dict:\n return {\"app_id\": self.mathpix_api_id, \"app_key\": self.mathpix_api_key}\n @property\n def url(self) -> str:\n return \"https://api.mathpix.com/v3/pdf\"\n @property\n def data(self) -> dict:\n options = {\"conversion_formats\": {self.processed_file_format: True}}\n return {\"options_json\": json.dumps(options)}\n[docs] def send_pdf(self) -> str:\n with open(self.file_path, \"rb\") as f:\n files = {\"file\": f}\n response = requests.post(\n self.url, headers=self.headers, files=files, data=self.data\n )\n response_data = response.json()\n if \"pdf_id\" in response_data:\n pdf_id = response_data[\"pdf_id\"]\n return pdf_id\n else:\n raise ValueError(\"Unable to send PDF to Mathpix.\")\n[docs] def wait_for_processing(self, pdf_id: str) -> None:\n url = self.url + \"/\" + pdf_id\n for _ in range(0, self.max_wait_time_seconds, 5):\n response = requests.get(url, headers=self.headers)\n response_data = response.json()\n status = response_data.get(\"status\", None)\n if status == \"completed\":\n return\n elif status == \"error\":\n raise ValueError(\"Unable to retrieve PDF from Mathpix\")\n else:\n print(f\"Status: {status}, waiting for processing to complete\")\n time.sleep(5)\n raise TimeoutError\n[docs] def get_processed_pdf(self, pdf_id: str) -> str:\n self.wait_for_processing(pdf_id)", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"}
+{"id": "97d16573aaaa-7", "text": "self.wait_for_processing(pdf_id)\n url = f\"{self.url}/{pdf_id}.{self.processed_file_format}\"\n response = requests.get(url, headers=self.headers)\n return response.content.decode(\"utf-8\")\n[docs] def clean_pdf(self, contents: str) -> str:\n contents = \"\\n\".join(\n [line for line in contents.split(\"\\n\") if not line.startswith(\"![]\")]\n )\n # replace \\section{Title} with # Title\n contents = contents.replace(\"\\\\section{\", \"# \").replace(\"}\", \"\")\n # replace the \"\\\" slash that Mathpix adds to escape $, %, (, etc.\n contents = (\n contents.replace(r\"\\$\", \"$\")\n .replace(r\"\\%\", \"%\")\n .replace(r\"\\(\", \"(\")\n .replace(r\"\\)\", \")\")\n )\n return contents\n[docs] def load(self) -> List[Document]:\n pdf_id = self.send_pdf()\n contents = self.get_processed_pdf(pdf_id)\n if self.should_clean_pdf:\n contents = self.clean_pdf(contents)\n metadata = {\"source\": self.source, \"file_path\": self.source}\n return [Document(page_content=contents, metadata=metadata)]\n[docs]class PDFPlumberLoader(BasePDFLoader):\n \"\"\"Loader that uses pdfplumber to load PDF files.\"\"\"\n def __init__(\n self, file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None\n ) -> None:\n \"\"\"Initialize with file path.\"\"\"\n try:\n import pdfplumber # noqa:F401\n except ImportError:\n raise ImportError(\n \"pdfplumber package not found, please install it with \"\n \"`pip install pdfplumber`\"\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"}
+{"id": "97d16573aaaa-8", "text": "\"`pip install pdfplumber`\"\n )\n super().__init__(file_path)\n self.text_kwargs = text_kwargs or {}\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n parser = PDFPlumberParser(text_kwargs=self.text_kwargs)\n blob = Blob.from_path(self.file_path)\n return parser.parse(blob)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"}
+{"id": "9ad56175bd25-0", "text": "Source code for langchain.document_loaders.s3_directory\n\"\"\"Loading logic for loading documents from an s3 directory.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.s3_file import S3FileLoader\n[docs]class S3DirectoryLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from s3.\"\"\"\n def __init__(self, bucket: str, prefix: str = \"\"):\n \"\"\"Initialize with bucket and key name.\"\"\"\n self.bucket = bucket\n self.prefix = prefix\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n import boto3\n except ImportError:\n raise ImportError(\n \"Could not import boto3 python package. \"\n \"Please install it with `pip install boto3`.\"\n )\n s3 = boto3.resource(\"s3\")\n bucket = s3.Bucket(self.bucket)\n docs = []\n for obj in bucket.objects.filter(Prefix=self.prefix):\n loader = S3FileLoader(self.bucket, obj.key)\n docs.extend(loader.load())\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/s3_directory.html"}
+{"id": "bef5daff10f9-0", "text": "Source code for langchain.document_loaders.wikipedia\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utilities.wikipedia import WikipediaAPIWrapper\n[docs]class WikipediaLoader(BaseLoader):\n \"\"\"Loads a query result from www.wikipedia.org into a list of Documents.\n The hard limit on the number of downloaded Documents is 300 for now.\n Each wiki page represents one Document.\n \"\"\"\n def __init__(\n self,\n query: str,\n lang: str = \"en\",\n load_max_docs: Optional[int] = 100,\n load_all_available_meta: Optional[bool] = False,\n ):\n self.query = query\n self.lang = lang\n self.load_max_docs = load_max_docs\n self.load_all_available_meta = load_all_available_meta\n[docs] def load(self) -> List[Document]:\n client = WikipediaAPIWrapper(\n lang=self.lang,\n top_k_results=self.load_max_docs,\n load_all_available_meta=self.load_all_available_meta,\n )\n docs = client.load(self.query)\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/wikipedia.html"}
+{"id": "dc118a08c82f-0", "text": "Source code for langchain.document_loaders.hugging_face_dataset\n\"\"\"Loader that loads HuggingFace datasets.\"\"\"\nfrom typing import Iterator, List, Mapping, Optional, Sequence, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class HuggingFaceDatasetLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from the Hugging Face Hub.\"\"\"\n def __init__(\n self,\n path: str,\n page_content_column: str = \"text\",\n name: Optional[str] = None,\n data_dir: Optional[str] = None,\n data_files: Optional[\n Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]\n ] = None,\n cache_dir: Optional[str] = None,\n keep_in_memory: Optional[bool] = None,\n save_infos: bool = False,\n use_auth_token: Optional[Union[bool, str]] = None,\n num_proc: Optional[int] = None,\n ):\n \"\"\"Initialize the HuggingFaceDatasetLoader.\n Args:\n path: Path or name of the dataset.\n page_content_column: Page content column name.\n name: Name of the dataset configuration.\n data_dir: Data directory of the dataset configuration.\n data_files: Path(s) to source data file(s).\n cache_dir: Directory to read/write data.\n keep_in_memory: Whether to copy the dataset in-memory.\n save_infos: Save the dataset information (checksums/size/splits/...).\n use_auth_token: Bearer token for remote files on the Datasets Hub.\n num_proc: Number of processes.\n \"\"\"\n self.path = path\n self.page_content_column = page_content_column\n self.name = name", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/hugging_face_dataset.html"}
+{"id": "dc118a08c82f-1", "text": "self.page_content_column = page_content_column\n self.name = name\n self.data_dir = data_dir\n self.data_files = data_files\n self.cache_dir = cache_dir\n self.keep_in_memory = keep_in_memory\n self.save_infos = save_infos\n self.use_auth_token = use_auth_token\n self.num_proc = num_proc\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Load documents lazily.\"\"\"\n try:\n from datasets import load_dataset\n except ImportError:\n raise ImportError(\n \"Could not import datasets python package. \"\n \"Please install it with `pip install datasets`.\"\n )\n dataset = load_dataset(\n path=self.path,\n name=self.name,\n data_dir=self.data_dir,\n data_files=self.data_files,\n cache_dir=self.cache_dir,\n keep_in_memory=self.keep_in_memory,\n save_infos=self.save_infos,\n use_auth_token=self.use_auth_token,\n num_proc=self.num_proc,\n )\n yield from (\n Document(\n page_content=row.pop(self.page_content_column),\n metadata=row,\n )\n for key in dataset.keys()\n for row in dataset[key]\n )\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n return list(self.lazy_load())\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/hugging_face_dataset.html"}
+{"id": "5c4b21ff844e-0", "text": "Source code for langchain.document_loaders.url_playwright\n\"\"\"Loader that uses Playwright to load a page, then uses unstructured to load the html.\n\"\"\"\nimport logging\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\n[docs]class PlaywrightURLLoader(BaseLoader):\n \"\"\"Loader that uses Playwright and to load a page and unstructured to load the html.\n This is useful for loading pages that require javascript to render.\n Attributes:\n urls (List[str]): List of URLs to load.\n continue_on_failure (bool): If True, continue loading other URLs on failure.\n headless (bool): If True, the browser will run in headless mode.\n \"\"\"\n def __init__(\n self,\n urls: List[str],\n continue_on_failure: bool = True,\n headless: bool = True,\n remove_selectors: Optional[List[str]] = None,\n ):\n \"\"\"Load a list of URLs using Playwright and unstructured.\"\"\"\n try:\n import playwright # noqa:F401\n except ImportError:\n raise ImportError(\n \"playwright package not found, please install it with \"\n \"`pip install playwright`\"\n )\n try:\n import unstructured # noqa:F401\n except ImportError:\n raise ValueError(\n \"unstructured package not found, please install it with \"\n \"`pip install unstructured`\"\n )\n self.urls = urls\n self.continue_on_failure = continue_on_failure\n self.headless = headless\n self.remove_selectors = remove_selectors\n[docs] def load(self) -> List[Document]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/url_playwright.html"}
+{"id": "5c4b21ff844e-1", "text": "[docs] def load(self) -> List[Document]:\n \"\"\"Load the specified URLs using Playwright and create Document instances.\n Returns:\n List[Document]: A list of Document instances with loaded content.\n \"\"\"\n from playwright.sync_api import sync_playwright\n from unstructured.partition.html import partition_html\n docs: List[Document] = list()\n with sync_playwright() as p:\n browser = p.chromium.launch(headless=self.headless)\n for url in self.urls:\n try:\n page = browser.new_page()\n page.goto(url)\n for selector in self.remove_selectors or []:\n elements = page.locator(selector).all()\n for element in elements:\n if element.is_visible():\n element.evaluate(\"element => element.remove()\")\n page_source = page.content()\n elements = partition_html(text=page_source)\n text = \"\\n\\n\".join([str(el) for el in elements])\n metadata = {\"source\": url}\n docs.append(Document(page_content=text, metadata=metadata))\n except Exception as e:\n if self.continue_on_failure:\n logger.error(\n f\"Error fetching or processing {url}, exception: {e}\"\n )\n else:\n raise e\n browser.close()\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/url_playwright.html"}
+{"id": "30135ae4794f-0", "text": "Source code for langchain.document_loaders.onedrive_file\nfrom __future__ import annotations\nimport tempfile\nfrom typing import TYPE_CHECKING, List\nfrom pydantic import BaseModel, Field\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\nif TYPE_CHECKING:\n from O365.drive import File\nCHUNK_SIZE = 1024 * 1024 * 5\n[docs]class OneDriveFileLoader(BaseLoader, BaseModel):\n file: File = Field(...)\n class Config:\n arbitrary_types_allowed = True\n[docs] def load(self) -> List[Document]:\n \"\"\"Load Documents\"\"\"\n with tempfile.TemporaryDirectory() as temp_dir:\n file_path = f\"{temp_dir}/{self.file.name}\"\n self.file.download(to_path=temp_dir, chunk_size=CHUNK_SIZE)\n loader = UnstructuredFileLoader(file_path)\n return loader.load()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/onedrive_file.html"}
+{"id": "11d69320d52a-0", "text": "Source code for langchain.document_loaders.obsidian\n\"\"\"Loader that loads Obsidian directory dump.\"\"\"\nimport re\nfrom pathlib import Path\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class ObsidianLoader(BaseLoader):\n \"\"\"Loader that loads Obsidian files from disk.\"\"\"\n FRONT_MATTER_REGEX = re.compile(r\"^---\\n(.*?)\\n---\\n\", re.MULTILINE | re.DOTALL)\n def __init__(\n self, path: str, encoding: str = \"UTF-8\", collect_metadata: bool = True\n ):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n self.encoding = encoding\n self.collect_metadata = collect_metadata\n def _parse_front_matter(self, content: str) -> dict:\n \"\"\"Parse front matter metadata from the content and return it as a dict.\"\"\"\n if not self.collect_metadata:\n return {}\n match = self.FRONT_MATTER_REGEX.search(content)\n front_matter = {}\n if match:\n lines = match.group(1).split(\"\\n\")\n for line in lines:\n if \":\" in line:\n key, value = line.split(\":\", 1)\n front_matter[key.strip()] = value.strip()\n else:\n # Skip lines without a colon\n continue\n return front_matter\n def _remove_front_matter(self, content: str) -> str:\n \"\"\"Remove front matter metadata from the given content.\"\"\"\n if not self.collect_metadata:\n return content\n return self.FRONT_MATTER_REGEX.sub(\"\", content)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n ps = list(Path(self.file_path).glob(\"**/*.md\"))", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/obsidian.html"}
+{"id": "11d69320d52a-1", "text": "ps = list(Path(self.file_path).glob(\"**/*.md\"))\n docs = []\n for p in ps:\n with open(p, encoding=self.encoding) as f:\n text = f.read()\n front_matter = self._parse_front_matter(text)\n text = self._remove_front_matter(text)\n metadata = {\n \"source\": str(p.name),\n \"path\": str(p),\n \"created\": p.stat().st_ctime,\n \"last_modified\": p.stat().st_mtime,\n \"last_accessed\": p.stat().st_atime,\n **front_matter,\n }\n docs.append(Document(page_content=text, metadata=metadata))\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/obsidian.html"}
+{"id": "a6b6fd42cec9-0", "text": "Source code for langchain.document_loaders.hn\n\"\"\"Loader that loads HN.\"\"\"\nfrom typing import Any, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.web_base import WebBaseLoader\n[docs]class HNLoader(WebBaseLoader):\n \"\"\"Load Hacker News data from either main page results or the comments page.\"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Get important HN webpage information.\n Components are:\n - title\n - content\n - source url,\n - time of post\n - author of the post\n - number of comments\n - rank of the post\n \"\"\"\n soup_info = self.scrape()\n if \"item\" in self.web_path:\n return self.load_comments(soup_info)\n else:\n return self.load_results(soup_info)\n[docs] def load_comments(self, soup_info: Any) -> List[Document]:\n \"\"\"Load comments from a HN post.\"\"\"\n comments = soup_info.select(\"tr[class='athing comtr']\")\n title = soup_info.select_one(\"tr[id='pagespace']\").get(\"title\")\n return [\n Document(\n page_content=comment.text.strip(),\n metadata={\"source\": self.web_path, \"title\": title},\n )\n for comment in comments\n ]\n[docs] def load_results(self, soup: Any) -> List[Document]:\n \"\"\"Load items from an HN page.\"\"\"\n items = soup.select(\"tr[class='athing']\")\n documents = []\n for lineItem in items:\n ranking = lineItem.select_one(\"span[class='rank']\").text\n link = lineItem.find(\"span\", {\"class\": \"titleline\"}).find(\"a\").get(\"href\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/hn.html"}
+{"id": "a6b6fd42cec9-1", "text": "title = lineItem.find(\"span\", {\"class\": \"titleline\"}).text.strip()\n metadata = {\n \"source\": self.web_path,\n \"title\": title,\n \"link\": link,\n \"ranking\": ranking,\n }\n documents.append(\n Document(\n page_content=title, link=link, ranking=ranking, metadata=metadata\n )\n )\n return documents\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/hn.html"}
+{"id": "a4aa4b3426c5-0", "text": "Source code for langchain.document_loaders.powerpoint\n\"\"\"Loader that loads powerpoint files.\"\"\"\nimport os\nfrom typing import List\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class UnstructuredPowerPointLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load powerpoint files.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.__version__ import __version__ as __unstructured_version__\n from unstructured.file_utils.filetype import FileType, detect_filetype\n unstructured_version = tuple(\n [int(x) for x in __unstructured_version__.split(\".\")]\n )\n # NOTE(MthwRobinson) - magic will raise an import error if the libmagic\n # system dependency isn't installed. If it's not installed, we'll just\n # check the file extension\n try:\n import magic # noqa: F401\n is_ppt = detect_filetype(self.file_path) == FileType.PPT\n except ImportError:\n _, extension = os.path.splitext(str(self.file_path))\n is_ppt = extension == \".ppt\"\n if is_ppt and unstructured_version < (0, 4, 11):\n raise ValueError(\n f\"You are on unstructured version {__unstructured_version__}. \"\n \"Partitioning .ppt files is only supported in unstructured>=0.4.11. \"\n \"Please upgrade the unstructured package and try again.\"\n )\n if is_ppt:\n from unstructured.partition.ppt import partition_ppt\n return partition_ppt(filename=self.file_path, **self.unstructured_kwargs)\n else:\n from unstructured.partition.pptx import partition_pptx\n return partition_pptx(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/powerpoint.html"}
+{"id": "a4aa4b3426c5-1", "text": "return partition_pptx(filename=self.file_path, **self.unstructured_kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/powerpoint.html"}
+{"id": "cfa0069873a2-0", "text": "Source code for langchain.document_loaders.ifixit\n\"\"\"Loader that loads iFixit data.\"\"\"\nfrom typing import List, Optional\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.web_base import WebBaseLoader\nIFIXIT_BASE_URL = \"https://www.ifixit.com/api/2.0\"\n[docs]class IFixitLoader(BaseLoader):\n \"\"\"Load iFixit repair guides, device wikis and answers.\n iFixit is the largest, open repair community on the web. The site contains nearly\n 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is\n licensed under CC-BY.\n This loader will allow you to download the text of a repair guide, text of Q&A's\n and wikis from devices on iFixit using their open APIs and web scraping.\n \"\"\"\n def __init__(self, web_path: str):\n \"\"\"Initialize with web path.\"\"\"\n if not web_path.startswith(\"https://www.ifixit.com\"):\n raise ValueError(\"web path must start with 'https://www.ifixit.com'\")\n path = web_path.replace(\"https://www.ifixit.com\", \"\")\n allowed_paths = [\"/Device\", \"/Guide\", \"/Answers\", \"/Teardown\"]\n \"\"\" TODO: Add /Wiki \"\"\"\n if not any(path.startswith(allowed_path) for allowed_path in allowed_paths):\n raise ValueError(\n \"web path must start with /Device, /Guide, /Teardown or /Answers\"\n )\n pieces = [x for x in path.split(\"/\") if x]\n \"\"\"Teardowns are just guides by a different name\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/ifixit.html"}
+{"id": "cfa0069873a2-1", "text": "\"\"\"Teardowns are just guides by a different name\"\"\"\n self.page_type = pieces[0] if pieces[0] != \"Teardown\" else \"Guide\"\n if self.page_type == \"Guide\" or self.page_type == \"Answers\":\n self.id = pieces[2]\n else:\n self.id = pieces[1]\n self.web_path = web_path\n[docs] def load(self) -> List[Document]:\n if self.page_type == \"Device\":\n return self.load_device()\n elif self.page_type == \"Guide\" or self.page_type == \"Teardown\":\n return self.load_guide()\n elif self.page_type == \"Answers\":\n return self.load_questions_and_answers()\n else:\n raise ValueError(\"Unknown page type: \" + self.page_type)\n[docs] @staticmethod\n def load_suggestions(query: str = \"\", doc_type: str = \"all\") -> List[Document]:\n res = requests.get(\n IFIXIT_BASE_URL + \"/suggest/\" + query + \"?doctypes=\" + doc_type\n )\n if res.status_code != 200:\n raise ValueError(\n 'Could not load suggestions for \"' + query + '\"\\n' + res.json()\n )\n data = res.json()\n results = data[\"results\"]\n output = []\n for result in results:\n try:\n loader = IFixitLoader(result[\"url\"])\n if loader.page_type == \"Device\":\n output += loader.load_device(include_guides=False)\n else:\n output += loader.load()\n except ValueError:\n continue\n return output\n[docs] def load_questions_and_answers(\n self, url_override: Optional[str] = None", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/ifixit.html"}
+{"id": "cfa0069873a2-2", "text": "self, url_override: Optional[str] = None\n ) -> List[Document]:\n loader = WebBaseLoader(self.web_path if url_override is None else url_override)\n soup = loader.scrape()\n output = []\n title = soup.find(\"h1\", \"post-title\").text\n output.append(\"# \" + title)\n output.append(soup.select_one(\".post-content .post-text\").text.strip())\n answersHeader = soup.find(\"div\", \"post-answers-header\")\n if answersHeader:\n output.append(\"\\n## \" + answersHeader.text.strip())\n for answer in soup.select(\".js-answers-list .post.post-answer\"):\n if answer.has_attr(\"itemprop\") and \"acceptedAnswer\" in answer[\"itemprop\"]:\n output.append(\"\\n### Accepted Answer\")\n elif \"post-helpful\" in answer[\"class\"]:\n output.append(\"\\n### Most Helpful Answer\")\n else:\n output.append(\"\\n### Other Answer\")\n output += [\n a.text.strip() for a in answer.select(\".post-content .post-text\")\n ]\n output.append(\"\\n\")\n text = \"\\n\".join(output).strip()\n metadata = {\"source\": self.web_path, \"title\": title}\n return [Document(page_content=text, metadata=metadata)]\n[docs] def load_device(\n self, url_override: Optional[str] = None, include_guides: bool = True\n ) -> List[Document]:\n documents = []\n if url_override is None:\n url = IFIXIT_BASE_URL + \"/wikis/CATEGORY/\" + self.id\n else:\n url = url_override\n res = requests.get(url)\n data = res.json()\n text = \"\\n\".join(\n [", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/ifixit.html"}
+{"id": "cfa0069873a2-3", "text": "data = res.json()\n text = \"\\n\".join(\n [\n data[key]\n for key in [\"title\", \"description\", \"contents_raw\"]\n if key in data\n ]\n ).strip()\n metadata = {\"source\": self.web_path, \"title\": data[\"title\"]}\n documents.append(Document(page_content=text, metadata=metadata))\n if include_guides:\n \"\"\"Load and return documents for each guide linked to from the device\"\"\"\n guide_urls = [guide[\"url\"] for guide in data[\"guides\"]]\n for guide_url in guide_urls:\n documents.append(IFixitLoader(guide_url).load()[0])\n return documents\n[docs] def load_guide(self, url_override: Optional[str] = None) -> List[Document]:\n if url_override is None:\n url = IFIXIT_BASE_URL + \"/guides/\" + self.id\n else:\n url = url_override\n res = requests.get(url)\n if res.status_code != 200:\n raise ValueError(\n \"Could not load guide: \" + self.web_path + \"\\n\" + res.json()\n )\n data = res.json()\n doc_parts = [\"# \" + data[\"title\"], data[\"introduction_raw\"]]\n doc_parts.append(\"\\n\\n###Tools Required:\")\n if len(data[\"tools\"]) == 0:\n doc_parts.append(\"\\n - None\")\n else:\n for tool in data[\"tools\"]:\n doc_parts.append(\"\\n - \" + tool[\"text\"])\n doc_parts.append(\"\\n\\n###Parts Required:\")\n if len(data[\"parts\"]) == 0:\n doc_parts.append(\"\\n - None\")\n else:\n for part in data[\"parts\"]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/ifixit.html"}
+{"id": "cfa0069873a2-4", "text": "else:\n for part in data[\"parts\"]:\n doc_parts.append(\"\\n - \" + part[\"text\"])\n for row in data[\"steps\"]:\n doc_parts.append(\n \"\\n\\n## \"\n + (\n row[\"title\"]\n if row[\"title\"] != \"\"\n else \"Step {}\".format(row[\"orderby\"])\n )\n )\n for line in row[\"lines\"]:\n doc_parts.append(line[\"text_raw\"])\n doc_parts.append(data[\"conclusion_raw\"])\n text = \"\\n\".join(doc_parts)\n metadata = {\"source\": self.web_path, \"title\": data[\"title\"]}\n return [Document(page_content=text, metadata=metadata)]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/ifixit.html"}
+{"id": "beb54a9dda33-0", "text": "Source code for langchain.document_loaders.unstructured\n\"\"\"Loader that uses unstructured to load files.\"\"\"\nimport collections\nfrom abc import ABC, abstractmethod\nfrom typing import IO, Any, List, Sequence, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\ndef satisfies_min_unstructured_version(min_version: str) -> bool:\n \"\"\"Checks to see if the installed unstructured version exceeds the minimum version\n for the feature in question.\"\"\"\n from unstructured.__version__ import __version__ as __unstructured_version__\n min_version_tuple = tuple([int(x) for x in min_version.split(\".\")])\n # NOTE(MthwRobinson) - enables the loader to work when you're using pre-release\n # versions of unstructured like 0.4.17-dev1\n _unstructured_version = __unstructured_version__.split(\"-\")[0]\n unstructured_version_tuple = tuple(\n [int(x) for x in _unstructured_version.split(\".\")]\n )\n return unstructured_version_tuple >= min_version_tuple\ndef validate_unstructured_version(min_unstructured_version: str) -> None:\n \"\"\"Raises an error if the unstructured version does not exceed the\n specified minimum.\"\"\"\n if not satisfies_min_unstructured_version(min_unstructured_version):\n raise ValueError(\n f\"unstructured>={min_unstructured_version} is required in this loader.\"\n )\nclass UnstructuredBaseLoader(BaseLoader, ABC):\n \"\"\"Loader that uses unstructured to load files.\"\"\"\n def __init__(self, mode: str = \"single\", **unstructured_kwargs: Any):\n \"\"\"Initialize with file path.\"\"\"\n try:\n import unstructured # noqa:F401\n except ImportError:\n raise ValueError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"}
+{"id": "beb54a9dda33-1", "text": "import unstructured # noqa:F401\n except ImportError:\n raise ValueError(\n \"unstructured package not found, please install it with \"\n \"`pip install unstructured`\"\n )\n _valid_modes = {\"single\", \"elements\"}\n if mode not in _valid_modes:\n raise ValueError(\n f\"Got {mode} for `mode`, but should be one of `{_valid_modes}`\"\n )\n self.mode = mode\n if not satisfies_min_unstructured_version(\"0.5.4\"):\n if \"strategy\" in unstructured_kwargs:\n unstructured_kwargs.pop(\"strategy\")\n self.unstructured_kwargs = unstructured_kwargs\n @abstractmethod\n def _get_elements(self) -> List:\n \"\"\"Get elements.\"\"\"\n @abstractmethod\n def _get_metadata(self) -> dict:\n \"\"\"Get metadata.\"\"\"\n def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n elements = self._get_elements()\n if self.mode == \"elements\":\n docs: List[Document] = list()\n for element in elements:\n metadata = self._get_metadata()\n # NOTE(MthwRobinson) - the attribute check is for backward compatibility\n # with unstructured<0.4.9. The metadata attributed was added in 0.4.9.\n if hasattr(element, \"metadata\"):\n metadata.update(element.metadata.to_dict())\n if hasattr(element, \"category\"):\n metadata[\"category\"] = element.category\n docs.append(Document(page_content=str(element), metadata=metadata))\n elif self.mode == \"single\":\n metadata = self._get_metadata()\n text = \"\\n\\n\".join([str(el) for el in elements])\n docs = [Document(page_content=text, metadata=metadata)]", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"}
+{"id": "beb54a9dda33-2", "text": "docs = [Document(page_content=text, metadata=metadata)]\n else:\n raise ValueError(f\"mode of {self.mode} not supported.\")\n return docs\n[docs]class UnstructuredFileLoader(UnstructuredBaseLoader):\n \"\"\"Loader that uses unstructured to load files.\"\"\"\n def __init__(\n self,\n file_path: Union[str, List[str]],\n mode: str = \"single\",\n **unstructured_kwargs: Any,\n ):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n super().__init__(mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.auto import partition\n return partition(filename=self.file_path, **self.unstructured_kwargs)\n def _get_metadata(self) -> dict:\n return {\"source\": self.file_path}\ndef get_elements_from_api(\n file_path: Union[str, List[str], None] = None,\n file: Union[IO, Sequence[IO], None] = None,\n api_url: str = \"https://api.unstructured.io/general/v0/general\",\n api_key: str = \"\",\n **unstructured_kwargs: Any,\n) -> List:\n \"\"\"Retrieves a list of elements from the Unstructured API.\"\"\"\n if isinstance(file, collections.abc.Sequence) or isinstance(file_path, list):\n from unstructured.partition.api import partition_multiple_via_api\n _doc_elements = partition_multiple_via_api(\n filenames=file_path,\n files=file,\n api_key=api_key,\n api_url=api_url,\n **unstructured_kwargs,\n )\n elements = []\n for _elements in _doc_elements:\n elements.extend(_elements)\n return elements\n else:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"}
+{"id": "beb54a9dda33-3", "text": "elements.extend(_elements)\n return elements\n else:\n from unstructured.partition.api import partition_via_api\n return partition_via_api(\n filename=file_path,\n file=file,\n api_key=api_key,\n api_url=api_url,\n **unstructured_kwargs,\n )\n[docs]class UnstructuredAPIFileLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses the unstructured web API to load files.\"\"\"\n def __init__(\n self,\n file_path: Union[str, List[str]] = \"\",\n mode: str = \"single\",\n url: str = \"https://api.unstructured.io/general/v0/general\",\n api_key: str = \"\",\n **unstructured_kwargs: Any,\n ):\n \"\"\"Initialize with file path.\"\"\"\n if isinstance(file_path, str):\n validate_unstructured_version(min_unstructured_version=\"0.6.2\")\n else:\n validate_unstructured_version(min_unstructured_version=\"0.6.3\")\n self.url = url\n self.api_key = api_key\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_metadata(self) -> dict:\n return {\"source\": self.file_path}\n def _get_elements(self) -> List:\n return get_elements_from_api(\n file_path=self.file_path,\n api_key=self.api_key,\n api_url=self.url,\n **self.unstructured_kwargs,\n )\n[docs]class UnstructuredFileIOLoader(UnstructuredBaseLoader):\n \"\"\"Loader that uses unstructured to load file IO objects.\"\"\"\n def __init__(\n self,\n file: Union[IO, Sequence[IO]],\n mode: str = \"single\",", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"}
+{"id": "beb54a9dda33-4", "text": "mode: str = \"single\",\n **unstructured_kwargs: Any,\n ):\n \"\"\"Initialize with file path.\"\"\"\n self.file = file\n super().__init__(mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.auto import partition\n return partition(file=self.file, **self.unstructured_kwargs)\n def _get_metadata(self) -> dict:\n return {}\n[docs]class UnstructuredAPIFileIOLoader(UnstructuredFileIOLoader):\n \"\"\"Loader that uses the unstructured web API to load file IO objects.\"\"\"\n def __init__(\n self,\n file: Union[IO, Sequence[IO]],\n mode: str = \"single\",\n url: str = \"https://api.unstructured.io/general/v0/general\",\n api_key: str = \"\",\n **unstructured_kwargs: Any,\n ):\n \"\"\"Initialize with file path.\"\"\"\n if isinstance(file, collections.abc.Sequence):\n validate_unstructured_version(min_unstructured_version=\"0.6.3\")\n if file:\n validate_unstructured_version(min_unstructured_version=\"0.6.2\")\n self.url = url\n self.api_key = api_key\n super().__init__(file=file, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n return get_elements_from_api(\n file=self.file,\n api_key=self.api_key,\n api_url=self.url,\n **self.unstructured_kwargs,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"}
+{"id": "d8814b16415b-0", "text": "Source code for langchain.document_loaders.imsdb\n\"\"\"Loader that loads IMSDb.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.web_base import WebBaseLoader\n[docs]class IMSDbLoader(WebBaseLoader):\n \"\"\"Loader that loads IMSDb webpages.\"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Load webpage.\"\"\"\n soup = self.scrape()\n text = soup.select_one(\"td[class='scrtext']\").text\n metadata = {\"source\": self.web_path}\n return [Document(page_content=text, metadata=metadata)]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/imsdb.html"}
+{"id": "3a0b01827c54-0", "text": "Source code for langchain.document_loaders.gcs_directory\n\"\"\"Loading logic for loading documents from an GCS directory.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.gcs_file import GCSFileLoader\n[docs]class GCSDirectoryLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from GCS.\"\"\"\n def __init__(self, project_name: str, bucket: str, prefix: str = \"\"):\n \"\"\"Initialize with bucket and key name.\"\"\"\n self.project_name = project_name\n self.bucket = bucket\n self.prefix = prefix\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n from google.cloud import storage\n except ImportError:\n raise ValueError(\n \"Could not import google-cloud-storage python package. \"\n \"Please install it with `pip install google-cloud-storage`.\"\n )\n client = storage.Client(project=self.project_name)\n docs = []\n for blob in client.list_blobs(self.bucket, prefix=self.prefix):\n # we shall just skip directories since GCSFileLoader creates\n # intermediate directories on the fly\n if blob.name.endswith(\"/\"):\n continue\n loader = GCSFileLoader(self.project_name, self.bucket, blob.name)\n docs.extend(loader.load())\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/gcs_directory.html"}
+{"id": "5bb4bff50760-0", "text": "Source code for langchain.document_loaders.conllu\n\"\"\"Load CoNLL-U files.\"\"\"\nimport csv\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class CoNLLULoader(BaseLoader):\n \"\"\"Load CoNLL-U files.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load from file path.\"\"\"\n with open(self.file_path, encoding=\"utf8\") as f:\n tsv = list(csv.reader(f, delimiter=\"\\t\"))\n # If len(line) > 1, the line is not a comment\n lines = [line for line in tsv if len(line) > 1]\n text = \"\"\n for i, line in enumerate(lines):\n # Do not add a space after a punctuation mark or at the end of the sentence\n if line[9] == \"SpaceAfter=No\" or i == len(lines) - 1:\n text += line[1]\n else:\n text += line[1] + \" \"\n metadata = {\"source\": self.file_path}\n return [Document(page_content=text, metadata=metadata)]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/conllu.html"}
+{"id": "a0856dfb4c47-0", "text": "Source code for langchain.document_loaders.googledrive\n\"\"\"Loader that loads data from Google Drive.\"\"\"\n# Prerequisites:\n# 1. Create a Google Cloud project\n# 2. Enable the Google Drive API:\n# https://console.cloud.google.com/flows/enableapi?apiid=drive.googleapis.com\n# 3. Authorize credentials for desktop app:\n# https://developers.google.com/drive/api/quickstart/python#authorize_credentials_for_a_desktop_application # noqa: E501\n# 4. For service accounts visit\n# https://cloud.google.com/iam/docs/service-accounts-create\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Sequence, Union\nfrom pydantic import BaseModel, root_validator, validator\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nSCOPES = [\"https://www.googleapis.com/auth/drive.readonly\"]\n[docs]class GoogleDriveLoader(BaseLoader, BaseModel):\n \"\"\"Loader that loads Google Docs from Google Drive.\"\"\"\n service_account_key: Path = Path.home() / \".credentials\" / \"keys.json\"\n credentials_path: Path = Path.home() / \".credentials\" / \"credentials.json\"\n token_path: Path = Path.home() / \".credentials\" / \"token.json\"\n folder_id: Optional[str] = None\n document_ids: Optional[List[str]] = None\n file_ids: Optional[List[str]] = None\n recursive: bool = False\n file_types: Optional[Sequence[str]] = None\n load_trashed_files: bool = False\n @root_validator\n def validate_inputs(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Validate that either folder_id or document_ids is set, but not both.\"\"\"\n if values.get(\"folder_id\") and (", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/googledrive.html"}
+{"id": "a0856dfb4c47-1", "text": "if values.get(\"folder_id\") and (\n values.get(\"document_ids\") or values.get(\"file_ids\")\n ):\n raise ValueError(\n \"Cannot specify both folder_id and document_ids nor \"\n \"folder_id and file_ids\"\n )\n if (\n not values.get(\"folder_id\")\n and not values.get(\"document_ids\")\n and not values.get(\"file_ids\")\n ):\n raise ValueError(\"Must specify either folder_id, document_ids, or file_ids\")\n file_types = values.get(\"file_types\")\n if file_types:\n if values.get(\"document_ids\") or values.get(\"file_ids\"):\n raise ValueError(\n \"file_types can only be given when folder_id is given,\"\n \" (not when document_ids or file_ids are given).\"\n )\n type_mapping = {\n \"document\": \"application/vnd.google-apps.document\",\n \"sheet\": \"application/vnd.google-apps.spreadsheet\",\n \"pdf\": \"application/pdf\",\n }\n allowed_types = list(type_mapping.keys()) + list(type_mapping.values())\n short_names = \", \".join([f\"'{x}'\" for x in type_mapping.keys()])\n full_names = \", \".join([f\"'{x}'\" for x in type_mapping.values()])\n for file_type in file_types:\n if file_type not in allowed_types:\n raise ValueError(\n f\"Given file type {file_type} is not supported. \"\n f\"Supported values are: {short_names}; and \"\n f\"their full-form names: {full_names}\"\n )\n # replace short-form file types by full-form file types\n def full_form(x: str) -> str:\n return type_mapping[x] if x in type_mapping else x", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/googledrive.html"}
+{"id": "a0856dfb4c47-2", "text": "return type_mapping[x] if x in type_mapping else x\n values[\"file_types\"] = [full_form(file_type) for file_type in file_types]\n return values\n @validator(\"credentials_path\")\n def validate_credentials_path(cls, v: Any, **kwargs: Any) -> Any:\n \"\"\"Validate that credentials_path exists.\"\"\"\n if not v.exists():\n raise ValueError(f\"credentials_path {v} does not exist\")\n return v\n def _load_credentials(self) -> Any:\n \"\"\"Load credentials.\"\"\"\n # Adapted from https://developers.google.com/drive/api/v3/quickstart/python\n try:\n from google.auth.transport.requests import Request\n from google.oauth2 import service_account\n from google.oauth2.credentials import Credentials\n from google_auth_oauthlib.flow import InstalledAppFlow\n except ImportError:\n raise ImportError(\n \"You must run \"\n \"`pip install --upgrade \"\n \"google-api-python-client google-auth-httplib2 \"\n \"google-auth-oauthlib` \"\n \"to use the Google Drive loader.\"\n )\n creds = None\n if self.service_account_key.exists():\n return service_account.Credentials.from_service_account_file(\n str(self.service_account_key), scopes=SCOPES\n )\n if self.token_path.exists():\n creds = Credentials.from_authorized_user_file(str(self.token_path), SCOPES)\n if not creds or not creds.valid:\n if creds and creds.expired and creds.refresh_token:\n creds.refresh(Request())\n else:\n flow = InstalledAppFlow.from_client_secrets_file(\n str(self.credentials_path), SCOPES\n )\n creds = flow.run_local_server(port=0)\n with open(self.token_path, \"w\") as token:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/googledrive.html"}
+{"id": "a0856dfb4c47-3", "text": "with open(self.token_path, \"w\") as token:\n token.write(creds.to_json())\n return creds\n def _load_sheet_from_id(self, id: str) -> List[Document]:\n \"\"\"Load a sheet and all tabs from an ID.\"\"\"\n from googleapiclient.discovery import build\n creds = self._load_credentials()\n sheets_service = build(\"sheets\", \"v4\", credentials=creds)\n spreadsheet = sheets_service.spreadsheets().get(spreadsheetId=id).execute()\n sheets = spreadsheet.get(\"sheets\", [])\n documents = []\n for sheet in sheets:\n sheet_name = sheet[\"properties\"][\"title\"]\n result = (\n sheets_service.spreadsheets()\n .values()\n .get(spreadsheetId=id, range=sheet_name)\n .execute()\n )\n values = result.get(\"values\", [])\n header = values[0]\n for i, row in enumerate(values[1:], start=1):\n metadata = {\n \"source\": (\n f\"https://docs.google.com/spreadsheets/d/{id}/\"\n f\"edit?gid={sheet['properties']['sheetId']}\"\n ),\n \"title\": f\"{spreadsheet['properties']['title']} - {sheet_name}\",\n \"row\": i,\n }\n content = []\n for j, v in enumerate(row):\n title = header[j].strip() if len(header) > j else \"\"\n content.append(f\"{title}: {v.strip()}\")\n page_content = \"\\n\".join(content)\n documents.append(Document(page_content=page_content, metadata=metadata))\n return documents\n def _load_document_from_id(self, id: str) -> Document:\n \"\"\"Load a document from an ID.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/googledrive.html"}
+{"id": "a0856dfb4c47-4", "text": "\"\"\"Load a document from an ID.\"\"\"\n from io import BytesIO\n from googleapiclient.discovery import build\n from googleapiclient.errors import HttpError\n from googleapiclient.http import MediaIoBaseDownload\n creds = self._load_credentials()\n service = build(\"drive\", \"v3\", credentials=creds)\n file = service.files().get(fileId=id, supportsAllDrives=True).execute()\n request = service.files().export_media(fileId=id, mimeType=\"text/plain\")\n fh = BytesIO()\n downloader = MediaIoBaseDownload(fh, request)\n done = False\n try:\n while done is False:\n status, done = downloader.next_chunk()\n except HttpError as e:\n if e.resp.status == 404:\n print(\"File not found: {}\".format(id))\n else:\n print(\"An error occurred: {}\".format(e))\n text = fh.getvalue().decode(\"utf-8\")\n metadata = {\n \"source\": f\"https://docs.google.com/document/d/{id}/edit\",\n \"title\": f\"{file.get('name')}\",\n }\n return Document(page_content=text, metadata=metadata)\n def _load_documents_from_folder(\n self, folder_id: str, *, file_types: Optional[Sequence[str]] = None\n ) -> List[Document]:\n \"\"\"Load documents from a folder.\"\"\"\n from googleapiclient.discovery import build\n creds = self._load_credentials()\n service = build(\"drive\", \"v3\", credentials=creds)\n files = self._fetch_files_recursive(service, folder_id)\n # If file types filter is provided, we'll filter by the file type.\n if file_types:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/googledrive.html"}
+{"id": "a0856dfb4c47-5", "text": "if file_types:\n _files = [f for f in files if f[\"mimeType\"] in file_types] # type: ignore\n else:\n _files = files\n returns = []\n for file in files:\n if file[\"trashed\"] and not self.load_trashed_files:\n continue\n elif file[\"mimeType\"] == \"application/vnd.google-apps.document\":\n returns.append(self._load_document_from_id(file[\"id\"])) # type: ignore\n elif file[\"mimeType\"] == \"application/vnd.google-apps.spreadsheet\":\n returns.extend(self._load_sheet_from_id(file[\"id\"])) # type: ignore\n elif file[\"mimeType\"] == \"application/pdf\":\n returns.extend(self._load_file_from_id(file[\"id\"])) # type: ignore\n else:\n pass\n return returns\n def _fetch_files_recursive(\n self, service: Any, folder_id: str\n ) -> List[Dict[str, Union[str, List[str]]]]:\n \"\"\"Fetch all files and subfolders recursively.\"\"\"\n results = (\n service.files()\n .list(\n q=f\"'{folder_id}' in parents\",\n pageSize=1000,\n includeItemsFromAllDrives=True,\n supportsAllDrives=True,\n fields=\"nextPageToken, files(id, name, mimeType, parents, trashed)\",\n )\n .execute()\n )\n files = results.get(\"files\", [])\n returns = []\n for file in files:\n if file[\"mimeType\"] == \"application/vnd.google-apps.folder\":\n if self.recursive:\n returns.extend(self._fetch_files_recursive(service, file[\"id\"]))\n else:\n returns.append(file)\n return returns", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/googledrive.html"}
+{"id": "a0856dfb4c47-6", "text": "else:\n returns.append(file)\n return returns\n def _load_documents_from_ids(self) -> List[Document]:\n \"\"\"Load documents from a list of IDs.\"\"\"\n if not self.document_ids:\n raise ValueError(\"document_ids must be set\")\n return [self._load_document_from_id(doc_id) for doc_id in self.document_ids]\n def _load_file_from_id(self, id: str) -> List[Document]:\n \"\"\"Load a file from an ID.\"\"\"\n from io import BytesIO\n from googleapiclient.discovery import build\n from googleapiclient.http import MediaIoBaseDownload\n creds = self._load_credentials()\n service = build(\"drive\", \"v3\", credentials=creds)\n file = service.files().get(fileId=id, supportsAllDrives=True).execute()\n request = service.files().get_media(fileId=id)\n fh = BytesIO()\n downloader = MediaIoBaseDownload(fh, request)\n done = False\n while done is False:\n status, done = downloader.next_chunk()\n content = fh.getvalue()\n from PyPDF2 import PdfReader\n pdf_reader = PdfReader(BytesIO(content))\n return [\n Document(\n page_content=page.extract_text(),\n metadata={\n \"source\": f\"https://drive.google.com/file/d/{id}/view\",\n \"title\": f\"{file.get('name')}\",\n \"page\": i,\n },\n )\n for i, page in enumerate(pdf_reader.pages)\n ]\n def _load_file_from_ids(self) -> List[Document]:\n \"\"\"Load files from a list of IDs.\"\"\"\n if not self.file_ids:\n raise ValueError(\"file_ids must be set\")\n docs = []", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/googledrive.html"}
+{"id": "a0856dfb4c47-7", "text": "raise ValueError(\"file_ids must be set\")\n docs = []\n for file_id in self.file_ids:\n docs.extend(self._load_file_from_id(file_id))\n return docs\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n if self.folder_id:\n return self._load_documents_from_folder(\n self.folder_id, file_types=self.file_types\n )\n elif self.document_ids:\n return self._load_documents_from_ids()\n else:\n return self._load_file_from_ids()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/googledrive.html"}
+{"id": "fcd39ef8d7da-0", "text": "Source code for langchain.document_loaders.airtable\nfrom typing import Iterator, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class AirtableLoader(BaseLoader):\n \"\"\"Loader that loads local airbyte json files.\"\"\"\n def __init__(self, api_token: str, table_id: str, base_id: str):\n \"\"\"Initialize with API token and the IDs for table and base\"\"\"\n self.api_token = api_token\n self.table_id = table_id\n self.base_id = base_id\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Load Table.\"\"\"\n from pyairtable import Table\n table = Table(self.api_token, self.base_id, self.table_id)\n records = table.all()\n for record in records:\n # Need to convert record from dict to str\n yield Document(\n page_content=str(record),\n metadata={\n \"source\": self.base_id + \"_\" + self.table_id,\n \"base_id\": self.base_id,\n \"table_id\": self.table_id,\n },\n )\n[docs] def load(self) -> List[Document]:\n \"\"\"Load Table.\"\"\"\n return list(self.lazy_load())\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/airtable.html"}
+{"id": "cfbc5444e971-0", "text": "Source code for langchain.document_loaders.email\n\"\"\"Loader that loads email files.\"\"\"\nimport os\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n satisfies_min_unstructured_version,\n)\n[docs]class UnstructuredEmailLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load email files.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.file_utils.filetype import FileType, detect_filetype\n filetype = detect_filetype(self.file_path)\n if filetype == FileType.EML:\n from unstructured.partition.email import partition_email\n return partition_email(filename=self.file_path, **self.unstructured_kwargs)\n elif satisfies_min_unstructured_version(\"0.5.8\") and filetype == FileType.MSG:\n from unstructured.partition.msg import partition_msg\n return partition_msg(filename=self.file_path, **self.unstructured_kwargs)\n else:\n raise ValueError(\n f\"Filetype {filetype} is not supported in UnstructuredEmailLoader.\"\n )\n[docs]class OutlookMessageLoader(BaseLoader):\n \"\"\"\n Loader that loads Outlook Message files using extract_msg.\n https://github.com/TeamMsgExtractor/msg-extractor\n \"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n if not os.path.isfile(self.file_path):\n raise ValueError(\"File path %s is not a valid file\" % self.file_path)\n try:\n import extract_msg # noqa:F401\n except ImportError:\n raise ImportError(\n \"extract_msg is not installed. Please install it with \"\n \"`pip install extract_msg`\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/email.html"}
+{"id": "cfbc5444e971-1", "text": "\"`pip install extract_msg`\"\n )\n[docs] def load(self) -> List[Document]:\n \"\"\"Load data into document objects.\"\"\"\n import extract_msg\n msg = extract_msg.Message(self.file_path)\n return [\n Document(\n page_content=msg.body,\n metadata={\n \"subject\": msg.subject,\n \"sender\": msg.sender,\n \"date\": msg.date,\n },\n )\n ]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/email.html"}
+{"id": "6bb22e22e79d-0", "text": "Source code for langchain.document_loaders.azure_blob_storage_file\n\"\"\"Loading logic for loading documents from an Azure Blob Storage file.\"\"\"\nimport os\nimport tempfile\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class AzureBlobStorageFileLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from Azure Blob Storage.\"\"\"\n def __init__(self, conn_str: str, container: str, blob_name: str):\n \"\"\"Initialize with connection string, container and blob name.\"\"\"\n self.conn_str = conn_str\n self.container = container\n self.blob = blob_name\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n from azure.storage.blob import BlobClient\n except ImportError as exc:\n raise ValueError(\n \"Could not import azure storage blob python package. \"\n \"Please install it with `pip install azure-storage-blob`.\"\n ) from exc\n client = BlobClient.from_connection_string(\n conn_str=self.conn_str, container_name=self.container, blob_name=self.blob\n )\n with tempfile.TemporaryDirectory() as temp_dir:\n file_path = f\"{temp_dir}/{self.container}/{self.blob}\"\n os.makedirs(os.path.dirname(file_path), exist_ok=True)\n with open(f\"{file_path}\", \"wb\") as file:\n blob_data = client.download_blob()\n blob_data.readinto(file)\n loader = UnstructuredFileLoader(file_path)\n return loader.load()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/azure_blob_storage_file.html"}
+{"id": "e97d715c5e64-0", "text": "Source code for langchain.document_loaders.college_confidential\n\"\"\"Loader that loads College Confidential.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.web_base import WebBaseLoader\n[docs]class CollegeConfidentialLoader(WebBaseLoader):\n \"\"\"Loader that loads College Confidential webpages.\"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Load webpage.\"\"\"\n soup = self.scrape()\n text = soup.select_one(\"main[class='skin-handler']\").text\n metadata = {\"source\": self.web_path}\n return [Document(page_content=text, metadata=metadata)]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/college_confidential.html"}
+{"id": "422e8a4a127a-0", "text": "Source code for langchain.document_loaders.notebook\n\"\"\"Loader that loads .ipynb notebook files.\"\"\"\nimport json\nfrom pathlib import Path\nfrom typing import Any, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\ndef concatenate_cells(\n cell: dict, include_outputs: bool, max_output_length: int, traceback: bool\n) -> str:\n \"\"\"Combine cells information in a readable format ready to be used.\"\"\"\n cell_type = cell[\"cell_type\"]\n source = cell[\"source\"]\n output = cell[\"outputs\"]\n if include_outputs and cell_type == \"code\" and output:\n if \"ename\" in output[0].keys():\n error_name = output[0][\"ename\"]\n error_value = output[0][\"evalue\"]\n if traceback:\n traceback = output[0][\"traceback\"]\n return (\n f\"'{cell_type}' cell: '{source}'\\n, gives error '{error_name}',\"\n f\" with description '{error_value}'\\n\"\n f\"and traceback '{traceback}'\\n\\n\"\n )\n else:\n return (\n f\"'{cell_type}' cell: '{source}'\\n, gives error '{error_name}',\"\n f\"with description '{error_value}'\\n\\n\"\n )\n elif output[0][\"output_type\"] == \"stream\":\n output = output[0][\"text\"]\n min_output = min(max_output_length, len(output))\n return (\n f\"'{cell_type}' cell: '{source}'\\n with \"\n f\"output: '{output[:min_output]}'\\n\\n\"\n )\n else:\n return f\"'{cell_type}' cell: '{source}'\\n\\n\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/notebook.html"}
+{"id": "422e8a4a127a-1", "text": "return f\"'{cell_type}' cell: '{source}'\\n\\n\"\n return \"\"\ndef remove_newlines(x: Any) -> Any:\n \"\"\"Remove recursively newlines, no matter the data structure they are stored in.\"\"\"\n import pandas as pd\n if isinstance(x, str):\n return x.replace(\"\\n\", \"\")\n elif isinstance(x, list):\n return [remove_newlines(elem) for elem in x]\n elif isinstance(x, pd.DataFrame):\n return x.applymap(remove_newlines)\n else:\n return x\n[docs]class NotebookLoader(BaseLoader):\n \"\"\"Loader that loads .ipynb notebook files.\"\"\"\n def __init__(\n self,\n path: str,\n include_outputs: bool = False,\n max_output_length: int = 10,\n remove_newline: bool = False,\n traceback: bool = False,\n ):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n self.include_outputs = include_outputs\n self.max_output_length = max_output_length\n self.remove_newline = remove_newline\n self.traceback = traceback\n[docs] def load(\n self,\n ) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n import pandas as pd\n except ImportError:\n raise ImportError(\n \"pandas is needed for Notebook Loader, \"\n \"please install with `pip install pandas`\"\n )\n p = Path(self.file_path)\n with open(p, encoding=\"utf8\") as f:\n d = json.load(f)\n data = pd.json_normalize(d[\"cells\"])\n filtered_data = data[[\"cell_type\", \"source\", \"outputs\"]]\n if self.remove_newline:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/notebook.html"}
+{"id": "422e8a4a127a-2", "text": "if self.remove_newline:\n filtered_data = filtered_data.applymap(remove_newlines)\n text = filtered_data.apply(\n lambda x: concatenate_cells(\n x, self.include_outputs, self.max_output_length, self.traceback\n ),\n axis=1,\n ).str.cat(sep=\" \")\n metadata = {\"source\": str(p)}\n return [Document(page_content=text, metadata=metadata)]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/notebook.html"}
+{"id": "b2a790cad902-0", "text": "Source code for langchain.document_loaders.gcs_file\n\"\"\"Loading logic for loading documents from a GCS file.\"\"\"\nimport os\nimport tempfile\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class GCSFileLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from GCS.\"\"\"\n def __init__(self, project_name: str, bucket: str, blob: str):\n \"\"\"Initialize with bucket and key name.\"\"\"\n self.bucket = bucket\n self.blob = blob\n self.project_name = project_name\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n from google.cloud import storage\n except ImportError:\n raise ValueError(\n \"Could not import google-cloud-storage python package. \"\n \"Please install it with `pip install google-cloud-storage`.\"\n )\n # Initialise a client\n storage_client = storage.Client(self.project_name)\n # Create a bucket object for our bucket\n bucket = storage_client.get_bucket(self.bucket)\n # Create a blob object from the filepath\n blob = bucket.blob(self.blob)\n with tempfile.TemporaryDirectory() as temp_dir:\n file_path = f\"{temp_dir}/{self.blob}\"\n os.makedirs(os.path.dirname(file_path), exist_ok=True)\n # Download the file to a destination\n blob.download_to_filename(file_path)\n loader = UnstructuredFileLoader(file_path)\n return loader.load()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/gcs_file.html"}
+{"id": "7ac23cbec1c8-0", "text": "Source code for langchain.document_loaders.pyspark_dataframe\n\"\"\"Load from a Spark Dataframe object\"\"\"\nimport itertools\nimport logging\nimport sys\nfrom typing import TYPE_CHECKING, Any, Iterator, List, Optional, Tuple\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__file__)\nif TYPE_CHECKING:\n from pyspark.sql import SparkSession\n[docs]class PySparkDataFrameLoader(BaseLoader):\n \"\"\"Load PySpark DataFrames\"\"\"\n def __init__(\n self,\n spark_session: Optional[\"SparkSession\"] = None,\n df: Optional[Any] = None,\n page_content_column: str = \"text\",\n fraction_of_memory: float = 0.1,\n ):\n \"\"\"Initialize with a Spark DataFrame object.\"\"\"\n try:\n from pyspark.sql import DataFrame, SparkSession\n except ImportError:\n raise ImportError(\n \"pyspark is not installed. \"\n \"Please install it with `pip install pyspark`\"\n )\n self.spark = (\n spark_session if spark_session else SparkSession.builder.getOrCreate()\n )\n if not isinstance(df, DataFrame):\n raise ValueError(\n f\"Expected data_frame to be a PySpark DataFrame, got {type(df)}\"\n )\n self.df = df\n self.page_content_column = page_content_column\n self.fraction_of_memory = fraction_of_memory\n self.num_rows, self.max_num_rows = self.get_num_rows()\n self.rdd_df = self.df.rdd.map(list)\n self.column_names = self.df.columns\n[docs] def get_num_rows(self) -> Tuple[int, int]:\n \"\"\"Gets the amount of \"feasible\" rows for the DataFrame\"\"\"\n try:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/pyspark_dataframe.html"}
+{"id": "7ac23cbec1c8-1", "text": "\"\"\"Gets the amount of \"feasible\" rows for the DataFrame\"\"\"\n try:\n import psutil\n except ImportError as e:\n raise ImportError(\n \"psutil not installed. Please install it with `pip install psutil`.\"\n ) from e\n row = self.df.limit(1).collect()[0]\n estimated_row_size = sys.getsizeof(row)\n mem_info = psutil.virtual_memory()\n available_memory = mem_info.available\n max_num_rows = int(\n (available_memory / estimated_row_size) * self.fraction_of_memory\n )\n return min(max_num_rows, self.df.count()), max_num_rows\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"A lazy loader for document content.\"\"\"\n for row in self.rdd_df.toLocalIterator():\n metadata = {self.column_names[i]: row[i] for i in range(len(row))}\n text = metadata[self.page_content_column]\n metadata.pop(self.page_content_column)\n yield Document(page_content=text, metadata=metadata)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load from the dataframe.\"\"\"\n if self.df.count() > self.max_num_rows:\n logger.warning(\n f\"The number of DataFrame rows is {self.df.count()}, \"\n f\"but we will only include the amount \"\n f\"of rows that can reasonably fit in memory: {self.num_rows}.\"\n )\n lazy_load_iterator = self.lazy_load()\n return list(itertools.islice(lazy_load_iterator, self.num_rows))\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/pyspark_dataframe.html"}
+{"id": "7f46c4fac07c-0", "text": "Source code for langchain.document_loaders.mediawikidump\n\"\"\"Load Data from a MediaWiki dump xml.\"\"\"\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class MWDumpLoader(BaseLoader):\n \"\"\"\n Load MediaWiki dump from XML file\n Example:\n .. code-block:: python\n from langchain.document_loaders import MWDumpLoader\n loader = MWDumpLoader(\n file_path=\"myWiki.xml\",\n encoding=\"utf8\"\n )\n docs = loader.load()\n from langchain.text_splitter import RecursiveCharacterTextSplitter\n text_splitter = RecursiveCharacterTextSplitter(\n chunk_size=1000, chunk_overlap=0\n )\n texts = text_splitter.split_documents(docs)\n :param file_path: XML local file path\n :type file_path: str\n :param encoding: Charset encoding, defaults to \"utf8\"\n :type encoding: str, optional\n \"\"\"\n def __init__(self, file_path: str, encoding: Optional[str] = \"utf8\"):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n self.encoding = encoding\n[docs] def load(self) -> List[Document]:\n \"\"\"Load from file path.\"\"\"\n import mwparserfromhell\n import mwxml\n dump = mwxml.Dump.from_file(open(self.file_path, encoding=self.encoding))\n docs = []\n for page in dump.pages:\n for revision in page:\n code = mwparserfromhell.parse(revision.text)\n text = code.strip_code(\n normalize=True, collapse=True, keep_template_params=False\n )\n metadata = {\"source\": page.title}", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/mediawikidump.html"}
+{"id": "7f46c4fac07c-1", "text": ")\n metadata = {\"source\": page.title}\n docs.append(Document(page_content=text, metadata=metadata))\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/mediawikidump.html"}
+{"id": "322d4766d188-0", "text": "Source code for langchain.document_loaders.joplin\nimport json\nimport urllib\nfrom datetime import datetime\nfrom typing import Iterator, List, Optional\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.schema import Document\nfrom langchain.utils import get_from_env\nLINK_NOTE_TEMPLATE = \"joplin://x-callback-url/openNote?id={id}\"\n[docs]class JoplinLoader(BaseLoader):\n \"\"\"\n Loader that fetches notes from Joplin.\n In order to use this loader, you need to have Joplin running with the\n Web Clipper enabled (look for \"Web Clipper\" in the app settings).\n To get the access token, you need to go to the Web Clipper options and\n under \"Advanced Options\" you will find the access token.\n You can find more information about the Web Clipper service here:\n https://joplinapp.org/clipper/\n \"\"\"\n def __init__(\n self,\n access_token: Optional[str] = None,\n port: int = 41184,\n host: str = \"localhost\",\n ) -> None:\n access_token = access_token or get_from_env(\n \"access_token\", \"JOPLIN_ACCESS_TOKEN\"\n )\n base_url = f\"http://{host}:{port}\"\n self._get_note_url = (\n f\"{base_url}/notes?token={access_token}\"\n f\"&fields=id,parent_id,title,body,created_time,updated_time&page={{page}}\"\n )\n self._get_folder_url = (\n f\"{base_url}/folders/{{id}}?token={access_token}&fields=title\"\n )\n self._get_tag_url = (", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/joplin.html"}
+{"id": "322d4766d188-1", "text": ")\n self._get_tag_url = (\n f\"{base_url}/notes/{{id}}/tags?token={access_token}&fields=title\"\n )\n def _get_notes(self) -> Iterator[Document]:\n has_more = True\n page = 1\n while has_more:\n req_note = urllib.request.Request(self._get_note_url.format(page=page))\n with urllib.request.urlopen(req_note) as response:\n json_data = json.loads(response.read().decode())\n for note in json_data[\"items\"]:\n metadata = {\n \"source\": LINK_NOTE_TEMPLATE.format(id=note[\"id\"]),\n \"folder\": self._get_folder(note[\"parent_id\"]),\n \"tags\": self._get_tags(note[\"id\"]),\n \"title\": note[\"title\"],\n \"created_time\": self._convert_date(note[\"created_time\"]),\n \"updated_time\": self._convert_date(note[\"updated_time\"]),\n }\n yield Document(page_content=note[\"body\"], metadata=metadata)\n has_more = json_data[\"has_more\"]\n page += 1\n def _get_folder(self, folder_id: str) -> str:\n req_folder = urllib.request.Request(self._get_folder_url.format(id=folder_id))\n with urllib.request.urlopen(req_folder) as response:\n json_data = json.loads(response.read().decode())\n return json_data[\"title\"]\n def _get_tags(self, note_id: str) -> List[str]:\n req_tag = urllib.request.Request(self._get_tag_url.format(id=note_id))\n with urllib.request.urlopen(req_tag) as response:\n json_data = json.loads(response.read().decode())\n return [tag[\"title\"] for tag in json_data[\"items\"]]\n def _convert_date(self, date: int) -> str:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/joplin.html"}
+{"id": "322d4766d188-2", "text": "def _convert_date(self, date: int) -> str:\n return datetime.fromtimestamp(date / 1000).strftime(\"%Y-%m-%d %H:%M:%S\")\n[docs] def lazy_load(self) -> Iterator[Document]:\n yield from self._get_notes()\n[docs] def load(self) -> List[Document]:\n return list(self.lazy_load())\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/joplin.html"}
+{"id": "ca2d904d5a97-0", "text": "Source code for langchain.document_loaders.notion\n\"\"\"Loader that loads Notion directory dump.\"\"\"\nfrom pathlib import Path\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class NotionDirectoryLoader(BaseLoader):\n \"\"\"Loader that loads Notion directory dump.\"\"\"\n def __init__(self, path: str):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n ps = list(Path(self.file_path).glob(\"**/*.md\"))\n docs = []\n for p in ps:\n with open(p) as f:\n text = f.read()\n metadata = {\"source\": str(p)}\n docs.append(Document(page_content=text, metadata=metadata))\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/notion.html"}
+{"id": "34ebe26b80f4-0", "text": "Source code for langchain.document_loaders.confluence\n\"\"\"Load Data from a Confluence Space\"\"\"\nimport logging\nfrom io import BytesIO\nfrom typing import Any, Callable, List, Optional, Union\nfrom tenacity import (\n before_sleep_log,\n retry,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\n[docs]class ConfluenceLoader(BaseLoader):\n \"\"\"\n Load Confluence pages. Port of https://llamahub.ai/l/confluence\n This currently supports username/api_key, Oauth2 login or personal access token\n authentication.\n Specify a list page_ids and/or space_key to load in the corresponding pages into\n Document objects, if both are specified the union of both sets will be returned.\n You can also specify a boolean `include_attachments` to include attachments, this\n is set to False by default, if set to True all attachments will be downloaded and\n ConfluenceReader will extract the text from the attachments and add it to the\n Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG,\n SVG, Word and Excel.\n Hint: space_key and page_id can both be found in the URL of a page in Confluence\n - https://yoursite.atlassian.com/wiki/spaces//pages/\n Example:\n .. code-block:: python\n from langchain.document_loaders import ConfluenceLoader\n loader = ConfluenceLoader(\n url=\"https://yoursite.atlassian.com/wiki\",\n username=\"me\",\n api_key=\"12345\"\n )\n documents = loader.load(space_key=\"SPACE\",limit=50)", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"}
+{"id": "34ebe26b80f4-1", "text": ")\n documents = loader.load(space_key=\"SPACE\",limit=50)\n :param url: _description_\n :type url: str\n :param api_key: _description_, defaults to None\n :type api_key: str, optional\n :param username: _description_, defaults to None\n :type username: str, optional\n :param oauth2: _description_, defaults to {}\n :type oauth2: dict, optional\n :param token: _description_, defaults to None\n :type token: str, optional\n :param cloud: _description_, defaults to True\n :type cloud: bool, optional\n :param number_of_retries: How many times to retry, defaults to 3\n :type number_of_retries: Optional[int], optional\n :param min_retry_seconds: defaults to 2\n :type min_retry_seconds: Optional[int], optional\n :param max_retry_seconds: defaults to 10\n :type max_retry_seconds: Optional[int], optional\n :param confluence_kwargs: additional kwargs to initialize confluence with\n :type confluence_kwargs: dict, optional\n :raises ValueError: Errors while validating input\n :raises ImportError: Required dependencies not installed.\n \"\"\"\n def __init__(\n self,\n url: str,\n api_key: Optional[str] = None,\n username: Optional[str] = None,\n oauth2: Optional[dict] = None,\n token: Optional[str] = None,\n cloud: Optional[bool] = True,\n number_of_retries: Optional[int] = 3,\n min_retry_seconds: Optional[int] = 2,\n max_retry_seconds: Optional[int] = 10,\n confluence_kwargs: Optional[dict] = None,\n ):", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"}
+{"id": "34ebe26b80f4-2", "text": "confluence_kwargs: Optional[dict] = None,\n ):\n confluence_kwargs = confluence_kwargs or {}\n errors = ConfluenceLoader.validate_init_args(\n url, api_key, username, oauth2, token\n )\n if errors:\n raise ValueError(f\"Error(s) while validating input: {errors}\")\n self.base_url = url\n self.number_of_retries = number_of_retries\n self.min_retry_seconds = min_retry_seconds\n self.max_retry_seconds = max_retry_seconds\n try:\n from atlassian import Confluence # noqa: F401\n except ImportError:\n raise ImportError(\n \"`atlassian` package not found, please run \"\n \"`pip install atlassian-python-api`\"\n )\n if oauth2:\n self.confluence = Confluence(\n url=url, oauth2=oauth2, cloud=cloud, **confluence_kwargs\n )\n elif token:\n self.confluence = Confluence(\n url=url, token=token, cloud=cloud, **confluence_kwargs\n )\n else:\n self.confluence = Confluence(\n url=url,\n username=username,\n password=api_key,\n cloud=cloud,\n **confluence_kwargs,\n )\n[docs] @staticmethod\n def validate_init_args(\n url: Optional[str] = None,\n api_key: Optional[str] = None,\n username: Optional[str] = None,\n oauth2: Optional[dict] = None,\n token: Optional[str] = None,\n ) -> Union[List, None]:\n \"\"\"Validates proper combinations of init arguments\"\"\"\n errors = []\n if url is None:\n errors.append(\"Must provide `base_url`\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"}
+{"id": "34ebe26b80f4-3", "text": "if url is None:\n errors.append(\"Must provide `base_url`\")\n if (api_key and not username) or (username and not api_key):\n errors.append(\n \"If one of `api_key` or `username` is provided, \"\n \"the other must be as well.\"\n )\n if (api_key or username) and oauth2:\n errors.append(\n \"Cannot provide a value for `api_key` and/or \"\n \"`username` and provide a value for `oauth2`\"\n )\n if oauth2 and oauth2.keys() != [\n \"access_token\",\n \"access_token_secret\",\n \"consumer_key\",\n \"key_cert\",\n ]:\n errors.append(\n \"You have either ommited require keys or added extra \"\n \"keys to the oauth2 dictionary. key values should be \"\n \"`['access_token', 'access_token_secret', 'consumer_key', 'key_cert']`\"\n )\n if token and (api_key or username or oauth2):\n errors.append(\n \"Cannot provide a value for `token` and a value for `api_key`, \"\n \"`username` or `oauth2`\"\n )\n if errors:\n return errors\n return None\n[docs] def load(\n self,\n space_key: Optional[str] = None,\n page_ids: Optional[List[str]] = None,\n label: Optional[str] = None,\n cql: Optional[str] = None,\n include_restricted_content: bool = False,\n include_archived_content: bool = False,\n include_attachments: bool = False,\n include_comments: bool = False,\n limit: Optional[int] = 50,", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"}
+{"id": "34ebe26b80f4-4", "text": "include_comments: bool = False,\n limit: Optional[int] = 50,\n max_pages: Optional[int] = 1000,\n ocr_languages: Optional[str] = None,\n ) -> List[Document]:\n \"\"\"\n :param space_key: Space key retrieved from a confluence URL, defaults to None\n :type space_key: Optional[str], optional\n :param page_ids: List of specific page IDs to load, defaults to None\n :type page_ids: Optional[List[str]], optional\n :param label: Get all pages with this label, defaults to None\n :type label: Optional[str], optional\n :param cql: CQL Expression, defaults to None\n :type cql: Optional[str], optional\n :param include_restricted_content: defaults to False\n :type include_restricted_content: bool, optional\n :param include_archived_content: Whether to include archived content,\n defaults to False\n :type include_archived_content: bool, optional\n :param include_attachments: defaults to False\n :type include_attachments: bool, optional\n :param include_comments: defaults to False\n :type include_comments: bool, optional\n :param limit: Maximum number of pages to retrieve per request, defaults to 50\n :type limit: int, optional\n :param max_pages: Maximum number of pages to retrieve in total, defaults 1000\n :type max_pages: int, optional\n :param ocr_languages: The languages to use for the Tesseract agent. To use a\n language, you'll first need to install the appropriate\n Tesseract language pack.\n :type ocr_languages: str, optional\n :raises ValueError: _description_\n :raises ImportError: _description_\n :return: _description_", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"}
+{"id": "34ebe26b80f4-5", "text": ":raises ImportError: _description_\n :return: _description_\n :rtype: List[Document]\n \"\"\"\n if not space_key and not page_ids and not label and not cql:\n raise ValueError(\n \"Must specify at least one among `space_key`, `page_ids`, \"\n \"`label`, `cql` parameters.\"\n )\n docs = []\n if space_key:\n pages = self.paginate_request(\n self.confluence.get_all_pages_from_space,\n space=space_key,\n limit=limit,\n max_pages=max_pages,\n status=\"any\" if include_archived_content else \"current\",\n expand=\"body.storage.value\",\n )\n docs += self.process_pages(\n pages,\n include_restricted_content,\n include_attachments,\n include_comments,\n ocr_languages,\n )\n if label:\n pages = self.paginate_request(\n self.confluence.get_all_pages_by_label,\n label=label,\n limit=limit,\n max_pages=max_pages,\n )\n ids_by_label = [page[\"id\"] for page in pages]\n if page_ids:\n page_ids = list(set(page_ids + ids_by_label))\n else:\n page_ids = list(set(ids_by_label))\n if cql:\n pages = self.paginate_request(\n self.confluence.cql,\n cql=cql,\n limit=limit,\n max_pages=max_pages,\n include_archived_spaces=include_archived_content,\n expand=\"body.storage.value\",\n )\n docs += self.process_pages(\n pages,\n include_restricted_content,\n include_attachments,\n include_comments,\n ocr_languages,\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"}
+{"id": "34ebe26b80f4-6", "text": "include_attachments,\n include_comments,\n ocr_languages,\n )\n if page_ids:\n for page_id in page_ids:\n get_page = retry(\n reraise=True,\n stop=stop_after_attempt(\n self.number_of_retries # type: ignore[arg-type]\n ),\n wait=wait_exponential(\n multiplier=1, # type: ignore[arg-type]\n min=self.min_retry_seconds, # type: ignore[arg-type]\n max=self.max_retry_seconds, # type: ignore[arg-type]\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )(self.confluence.get_page_by_id)\n page = get_page(page_id=page_id, expand=\"body.storage.value\")\n if not include_restricted_content and not self.is_public_page(page):\n continue\n doc = self.process_page(\n page, include_attachments, include_comments, ocr_languages\n )\n docs.append(doc)\n return docs\n[docs] def paginate_request(self, retrieval_method: Callable, **kwargs: Any) -> List:\n \"\"\"Paginate the various methods to retrieve groups of pages.\n Unfortunately, due to page size, sometimes the Confluence API\n doesn't match the limit value. If `limit` is >100 confluence\n seems to cap the response to 100. Also, due to the Atlassian Python\n package, we don't get the \"next\" values from the \"_links\" key because\n they only return the value from the results key. So here, the pagination\n starts from 0 and goes until the max_pages, getting the `limit` number\n of pages with each request. We have to manually check if there", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"}
+{"id": "34ebe26b80f4-7", "text": "of pages with each request. We have to manually check if there\n are more docs based on the length of the returned list of pages, rather than\n just checking for the presence of a `next` key in the response like this page\n would have you do:\n https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/\n :param retrieval_method: Function used to retrieve docs\n :type retrieval_method: callable\n :return: List of documents\n :rtype: List\n \"\"\"\n max_pages = kwargs.pop(\"max_pages\")\n docs: List[dict] = []\n while len(docs) < max_pages:\n get_pages = retry(\n reraise=True,\n stop=stop_after_attempt(\n self.number_of_retries # type: ignore[arg-type]\n ),\n wait=wait_exponential(\n multiplier=1,\n min=self.min_retry_seconds, # type: ignore[arg-type]\n max=self.max_retry_seconds, # type: ignore[arg-type]\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )(retrieval_method)\n batch = get_pages(**kwargs, start=len(docs))\n if not batch:\n break\n docs.extend(batch)\n return docs[:max_pages]\n[docs] def is_public_page(self, page: dict) -> bool:\n \"\"\"Check if a page is publicly accessible.\"\"\"\n restrictions = self.confluence.get_all_restrictions_for_content(page[\"id\"])\n return (\n page[\"status\"] == \"current\"\n and not restrictions[\"read\"][\"restrictions\"][\"user\"][\"results\"]\n and not restrictions[\"read\"][\"restrictions\"][\"group\"][\"results\"]\n )\n[docs] def process_pages(\n self,", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"}
+{"id": "34ebe26b80f4-8", "text": ")\n[docs] def process_pages(\n self,\n pages: List[dict],\n include_restricted_content: bool,\n include_attachments: bool,\n include_comments: bool,\n ocr_languages: Optional[str] = None,\n ) -> List[Document]:\n \"\"\"Process a list of pages into a list of documents.\"\"\"\n docs = []\n for page in pages:\n if not include_restricted_content and not self.is_public_page(page):\n continue\n doc = self.process_page(\n page, include_attachments, include_comments, ocr_languages\n )\n docs.append(doc)\n return docs\n[docs] def process_page(\n self,\n page: dict,\n include_attachments: bool,\n include_comments: bool,\n ocr_languages: Optional[str] = None,\n ) -> Document:\n try:\n from bs4 import BeautifulSoup # type: ignore\n except ImportError:\n raise ImportError(\n \"`beautifulsoup4` package not found, please run \"\n \"`pip install beautifulsoup4`\"\n )\n if include_attachments:\n attachment_texts = self.process_attachment(page[\"id\"], ocr_languages)\n else:\n attachment_texts = []\n text = BeautifulSoup(page[\"body\"][\"storage\"][\"value\"], \"lxml\").get_text(\n \" \", strip=True\n ) + \"\".join(attachment_texts)\n if include_comments:\n comments = self.confluence.get_page_comments(\n page[\"id\"], expand=\"body.view.value\", depth=\"all\"\n )[\"results\"]\n comment_texts = [\n BeautifulSoup(comment[\"body\"][\"view\"][\"value\"], \"lxml\").get_text(\n \" \", strip=True\n )\n for comment in comments\n ]", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"}
+{"id": "34ebe26b80f4-9", "text": "\" \", strip=True\n )\n for comment in comments\n ]\n text = text + \"\".join(comment_texts)\n return Document(\n page_content=text,\n metadata={\n \"title\": page[\"title\"],\n \"id\": page[\"id\"],\n \"source\": self.base_url.strip(\"/\") + page[\"_links\"][\"webui\"],\n },\n )\n[docs] def process_attachment(\n self,\n page_id: str,\n ocr_languages: Optional[str] = None,\n ) -> List[str]:\n try:\n from PIL import Image # noqa: F401\n except ImportError:\n raise ImportError(\n \"`Pillow` package not found, \" \"please run `pip install Pillow`\"\n )\n # depending on setup you may also need to set the correct path for\n # poppler and tesseract\n attachments = self.confluence.get_attachments_from_content(page_id)[\"results\"]\n texts = []\n for attachment in attachments:\n media_type = attachment[\"metadata\"][\"mediaType\"]\n absolute_url = self.base_url + attachment[\"_links\"][\"download\"]\n title = attachment[\"title\"]\n if media_type == \"application/pdf\":\n text = title + self.process_pdf(absolute_url, ocr_languages)\n elif (\n media_type == \"image/png\"\n or media_type == \"image/jpg\"\n or media_type == \"image/jpeg\"\n ):\n text = title + self.process_image(absolute_url, ocr_languages)\n elif (\n media_type == \"application/vnd.openxmlformats-officedocument\"\n \".wordprocessingml.document\"\n ):\n text = title + self.process_doc(absolute_url)\n elif media_type == \"application/vnd.ms-excel\":", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"}
+{"id": "34ebe26b80f4-10", "text": "elif media_type == \"application/vnd.ms-excel\":\n text = title + self.process_xls(absolute_url)\n elif media_type == \"image/svg+xml\":\n text = title + self.process_svg(absolute_url, ocr_languages)\n else:\n continue\n texts.append(text)\n return texts\n[docs] def process_pdf(\n self,\n link: str,\n ocr_languages: Optional[str] = None,\n ) -> str:\n try:\n import pytesseract # noqa: F401\n from pdf2image import convert_from_bytes # noqa: F401\n except ImportError:\n raise ImportError(\n \"`pytesseract` or `pdf2image` package not found, \"\n \"please run `pip install pytesseract pdf2image`\"\n )\n response = self.confluence.request(path=link, absolute=True)\n text = \"\"\n if (\n response.status_code != 200\n or response.content == b\"\"\n or response.content is None\n ):\n return text\n try:\n images = convert_from_bytes(response.content)\n except ValueError:\n return text\n for i, image in enumerate(images):\n image_text = pytesseract.image_to_string(image, lang=ocr_languages)\n text += f\"Page {i + 1}:\\n{image_text}\\n\\n\"\n return text\n[docs] def process_image(\n self,\n link: str,\n ocr_languages: Optional[str] = None,\n ) -> str:\n try:\n import pytesseract # noqa: F401\n from PIL import Image # noqa: F401\n except ImportError:\n raise ImportError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"}
+{"id": "34ebe26b80f4-11", "text": "except ImportError:\n raise ImportError(\n \"`pytesseract` or `Pillow` package not found, \"\n \"please run `pip install pytesseract Pillow`\"\n )\n response = self.confluence.request(path=link, absolute=True)\n text = \"\"\n if (\n response.status_code != 200\n or response.content == b\"\"\n or response.content is None\n ):\n return text\n try:\n image = Image.open(BytesIO(response.content))\n except OSError:\n return text\n return pytesseract.image_to_string(image, lang=ocr_languages)\n[docs] def process_doc(self, link: str) -> str:\n try:\n import docx2txt # noqa: F401\n except ImportError:\n raise ImportError(\n \"`docx2txt` package not found, please run `pip install docx2txt`\"\n )\n response = self.confluence.request(path=link, absolute=True)\n text = \"\"\n if (\n response.status_code != 200\n or response.content == b\"\"\n or response.content is None\n ):\n return text\n file_data = BytesIO(response.content)\n return docx2txt.process(file_data)\n[docs] def process_xls(self, link: str) -> str:\n try:\n import xlrd # noqa: F401\n except ImportError:\n raise ImportError(\"`xlrd` package not found, please run `pip install xlrd`\")\n response = self.confluence.request(path=link, absolute=True)\n text = \"\"\n if (\n response.status_code != 200\n or response.content == b\"\"\n or response.content is None\n ):\n return text", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"}
+{"id": "34ebe26b80f4-12", "text": "or response.content is None\n ):\n return text\n workbook = xlrd.open_workbook(file_contents=response.content)\n for sheet in workbook.sheets():\n text += f\"{sheet.name}:\\n\"\n for row in range(sheet.nrows):\n for col in range(sheet.ncols):\n text += f\"{sheet.cell_value(row, col)}\\t\"\n text += \"\\n\"\n text += \"\\n\"\n return text\n[docs] def process_svg(\n self,\n link: str,\n ocr_languages: Optional[str] = None,\n ) -> str:\n try:\n import pytesseract # noqa: F401\n from PIL import Image # noqa: F401\n from reportlab.graphics import renderPM # noqa: F401\n from svglib.svglib import svg2rlg # noqa: F401\n except ImportError:\n raise ImportError(\n \"`pytesseract`, `Pillow`, `reportlab` or `svglib` package not found, \"\n \"please run `pip install pytesseract Pillow reportlab svglib`\"\n )\n response = self.confluence.request(path=link, absolute=True)\n text = \"\"\n if (\n response.status_code != 200\n or response.content == b\"\"\n or response.content is None\n ):\n return text\n drawing = svg2rlg(BytesIO(response.content))\n img_data = BytesIO()\n renderPM.drawToFile(drawing, img_data, fmt=\"PNG\")\n img_data.seek(0)\n image = Image.open(img_data)\n return pytesseract.image_to_string(image, lang=ocr_languages)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"}
+{"id": "34ebe26b80f4-13", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"}
+{"id": "c3175a3c5f33-0", "text": "Source code for langchain.document_loaders.chatgpt\n\"\"\"Load conversations from ChatGPT data export\"\"\"\nimport datetime\nimport json\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\ndef concatenate_rows(message: dict, title: str) -> str:\n if not message:\n return \"\"\n sender = message[\"author\"][\"role\"] if message[\"author\"] else \"unknown\"\n text = message[\"content\"][\"parts\"][0]\n date = datetime.datetime.fromtimestamp(message[\"create_time\"]).strftime(\n \"%Y-%m-%d %H:%M:%S\"\n )\n return f\"{title} - {sender} on {date}: {text}\\n\\n\"\n[docs]class ChatGPTLoader(BaseLoader):\n \"\"\"Loader that loads conversations from exported ChatGPT data.\"\"\"\n def __init__(self, log_file: str, num_logs: int = -1):\n self.log_file = log_file\n self.num_logs = num_logs\n[docs] def load(self) -> List[Document]:\n with open(self.log_file, encoding=\"utf8\") as f:\n data = json.load(f)[: self.num_logs] if self.num_logs else json.load(f)\n documents = []\n for d in data:\n title = d[\"title\"]\n messages = d[\"mapping\"]\n text = \"\".join(\n [\n concatenate_rows(messages[key][\"message\"], title)\n for idx, key in enumerate(messages)\n if not (\n idx == 0\n and messages[key][\"message\"][\"author\"][\"role\"] == \"system\"\n )\n ]\n )\n metadata = {\"source\": str(self.log_file)}\n documents.append(Document(page_content=text, metadata=metadata))", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/chatgpt.html"}
+{"id": "c3175a3c5f33-1", "text": "documents.append(Document(page_content=text, metadata=metadata))\n return documents\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/chatgpt.html"}
+{"id": "baae0d890213-0", "text": "Source code for langchain.document_loaders.url_selenium\n\"\"\"Loader that uses Selenium to load a page, then uses unstructured to load the html.\n\"\"\"\nimport logging\nfrom typing import TYPE_CHECKING, List, Literal, Optional, Union\nif TYPE_CHECKING:\n from selenium.webdriver import Chrome, Firefox\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\n[docs]class SeleniumURLLoader(BaseLoader):\n \"\"\"Loader that uses Selenium and to load a page and unstructured to load the html.\n This is useful for loading pages that require javascript to render.\n Attributes:\n urls (List[str]): List of URLs to load.\n continue_on_failure (bool): If True, continue loading other URLs on failure.\n browser (str): The browser to use, either 'chrome' or 'firefox'.\n binary_location (Optional[str]): The location of the browser binary.\n executable_path (Optional[str]): The path to the browser executable.\n headless (bool): If True, the browser will run in headless mode.\n arguments [List[str]]: List of arguments to pass to the browser.\n \"\"\"\n def __init__(\n self,\n urls: List[str],\n continue_on_failure: bool = True,\n browser: Literal[\"chrome\", \"firefox\"] = \"chrome\",\n binary_location: Optional[str] = None,\n executable_path: Optional[str] = None,\n headless: bool = True,\n arguments: List[str] = [],\n ):\n \"\"\"Load a list of URLs using Selenium and unstructured.\"\"\"\n try:\n import selenium # noqa:F401\n except ImportError:\n raise ImportError(\n \"selenium package not found, please install it with \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/url_selenium.html"}
+{"id": "baae0d890213-1", "text": "raise ImportError(\n \"selenium package not found, please install it with \"\n \"`pip install selenium`\"\n )\n try:\n import unstructured # noqa:F401\n except ImportError:\n raise ImportError(\n \"unstructured package not found, please install it with \"\n \"`pip install unstructured`\"\n )\n self.urls = urls\n self.continue_on_failure = continue_on_failure\n self.browser = browser\n self.binary_location = binary_location\n self.executable_path = executable_path\n self.headless = headless\n self.arguments = arguments\n def _get_driver(self) -> Union[\"Chrome\", \"Firefox\"]:\n \"\"\"Create and return a WebDriver instance based on the specified browser.\n Raises:\n ValueError: If an invalid browser is specified.\n Returns:\n Union[Chrome, Firefox]: A WebDriver instance for the specified browser.\n \"\"\"\n if self.browser.lower() == \"chrome\":\n from selenium.webdriver import Chrome\n from selenium.webdriver.chrome.options import Options as ChromeOptions\n chrome_options = ChromeOptions()\n for arg in self.arguments:\n chrome_options.add_argument(arg)\n if self.headless:\n chrome_options.add_argument(\"--headless\")\n chrome_options.add_argument(\"--no-sandbox\")\n if self.binary_location is not None:\n chrome_options.binary_location = self.binary_location\n if self.executable_path is None:\n return Chrome(options=chrome_options)\n return Chrome(executable_path=self.executable_path, options=chrome_options)\n elif self.browser.lower() == \"firefox\":\n from selenium.webdriver import Firefox\n from selenium.webdriver.firefox.options import Options as FirefoxOptions\n firefox_options = FirefoxOptions()\n for arg in self.arguments:\n firefox_options.add_argument(arg)", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/url_selenium.html"}
+{"id": "baae0d890213-2", "text": "for arg in self.arguments:\n firefox_options.add_argument(arg)\n if self.headless:\n firefox_options.add_argument(\"--headless\")\n if self.binary_location is not None:\n firefox_options.binary_location = self.binary_location\n if self.executable_path is None:\n return Firefox(options=firefox_options)\n return Firefox(\n executable_path=self.executable_path, options=firefox_options\n )\n else:\n raise ValueError(\"Invalid browser specified. Use 'chrome' or 'firefox'.\")\n[docs] def load(self) -> List[Document]:\n \"\"\"Load the specified URLs using Selenium and create Document instances.\n Returns:\n List[Document]: A list of Document instances with loaded content.\n \"\"\"\n from unstructured.partition.html import partition_html\n docs: List[Document] = list()\n driver = self._get_driver()\n for url in self.urls:\n try:\n driver.get(url)\n page_content = driver.page_source\n elements = partition_html(text=page_content)\n text = \"\\n\\n\".join([str(el) for el in elements])\n metadata = {\"source\": url}\n docs.append(Document(page_content=text, metadata=metadata))\n except Exception as e:\n if self.continue_on_failure:\n logger.error(f\"Error fetching or processing {url}, exception: {e}\")\n else:\n raise e\n driver.quit()\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/url_selenium.html"}
+{"id": "fb61da65be09-0", "text": "Source code for langchain.document_loaders.evernote\n\"\"\"Load documents from Evernote.\nhttps://gist.github.com/foxmask/7b29c43a161e001ff04afdb2f181e31c\n\"\"\"\nimport hashlib\nimport logging\nfrom base64 import b64decode\nfrom time import strptime\nfrom typing import Any, Dict, Iterator, List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class EverNoteLoader(BaseLoader):\n \"\"\"EverNote Loader.\n Loads an EverNote notebook export file e.g. my_notebook.enex into Documents.\n Instructions on producing this file can be found at\n https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML\n Currently only the plain text in the note is extracted and stored as the contents\n of the Document, any non content metadata (e.g. 'author', 'created', 'updated' etc.\n but not 'content-raw' or 'resource') tags on the note will be extracted and stored\n as metadata on the Document.\n Args:\n file_path (str): The path to the notebook export with a .enex extension\n load_single_document (bool): Whether or not to concatenate the content of all\n notes into a single long Document.\n If this is set to True (default) then the only metadata on the document will be\n the 'source' which contains the file name of the export.\n \"\"\" # noqa: E501\n def __init__(self, file_path: str, load_single_document: bool = True):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n self.load_single_document = load_single_document", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/evernote.html"}
+{"id": "fb61da65be09-1", "text": "self.file_path = file_path\n self.load_single_document = load_single_document\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents from EverNote export file.\"\"\"\n documents = [\n Document(\n page_content=note[\"content\"],\n metadata={\n **{\n key: value\n for key, value in note.items()\n if key not in [\"content\", \"content-raw\", \"resource\"]\n },\n **{\"source\": self.file_path},\n },\n )\n for note in self._parse_note_xml(self.file_path)\n if note.get(\"content\") is not None\n ]\n if not self.load_single_document:\n return documents\n return [\n Document(\n page_content=\"\".join([document.page_content for document in documents]),\n metadata={\"source\": self.file_path},\n )\n ]\n @staticmethod\n def _parse_content(content: str) -> str:\n try:\n import html2text\n return html2text.html2text(content).strip()\n except ImportError as e:\n logging.error(\n \"Could not import `html2text`. Although it is not a required package \"\n \"to use Langchain, using the EverNote loader requires `html2text`. \"\n \"Please install `html2text` via `pip install html2text` and try again.\"\n )\n raise e\n @staticmethod\n def _parse_resource(resource: list) -> dict:\n rsc_dict: Dict[str, Any] = {}\n for elem in resource:\n if elem.tag == \"data\":\n # Sometimes elem.text is None\n rsc_dict[elem.tag] = b64decode(elem.text) if elem.text else b\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/evernote.html"}
+{"id": "fb61da65be09-2", "text": "rsc_dict[\"hash\"] = hashlib.md5(rsc_dict[elem.tag]).hexdigest()\n else:\n rsc_dict[elem.tag] = elem.text\n return rsc_dict\n @staticmethod\n def _parse_note(note: List, prefix: Optional[str] = None) -> dict:\n note_dict: Dict[str, Any] = {}\n resources = []\n def add_prefix(element_tag: str) -> str:\n if prefix is None:\n return element_tag\n return f\"{prefix}.{element_tag}\"\n for elem in note:\n if elem.tag == \"content\":\n note_dict[elem.tag] = EverNoteLoader._parse_content(elem.text)\n # A copy of original content\n note_dict[\"content-raw\"] = elem.text\n elif elem.tag == \"resource\":\n resources.append(EverNoteLoader._parse_resource(elem))\n elif elem.tag == \"created\" or elem.tag == \"updated\":\n note_dict[elem.tag] = strptime(elem.text, \"%Y%m%dT%H%M%SZ\")\n elif elem.tag == \"note-attributes\":\n additional_attributes = EverNoteLoader._parse_note(\n elem, elem.tag\n ) # Recursively enter the note-attributes tag\n note_dict.update(additional_attributes)\n else:\n note_dict[elem.tag] = elem.text\n if len(resources) > 0:\n note_dict[\"resource\"] = resources\n return {add_prefix(key): value for key, value in note_dict.items()}\n @staticmethod\n def _parse_note_xml(xml_file: str) -> Iterator[Dict[str, Any]]:\n \"\"\"Parse Evernote xml.\"\"\"\n # Without huge_tree set to True, parser may complain about huge text node", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/evernote.html"}
+{"id": "fb61da65be09-3", "text": "# Without huge_tree set to True, parser may complain about huge text node\n # Try to recover, because there may be \" \", which will cause\n # \"XMLSyntaxError: Entity 'nbsp' not defined\"\n try:\n from lxml import etree\n except ImportError as e:\n logging.error(\n \"Could not import `lxml`. Although it is not a required package to use \"\n \"Langchain, using the EverNote loader requires `lxml`. Please install \"\n \"`lxml` via `pip install lxml` and try again.\"\n )\n raise e\n context = etree.iterparse(\n xml_file, encoding=\"utf-8\", strip_cdata=False, huge_tree=True, recover=True\n )\n for action, elem in context:\n if elem.tag == \"note\":\n yield EverNoteLoader._parse_note(elem)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/evernote.html"}
+{"id": "d294aef3ff24-0", "text": "Source code for langchain.document_loaders.twitter\n\"\"\"Twitter document loader.\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Sequence, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nif TYPE_CHECKING:\n import tweepy\n from tweepy import OAuth2BearerHandler, OAuthHandler\ndef _dependable_tweepy_import() -> tweepy:\n try:\n import tweepy\n except ImportError:\n raise ImportError(\n \"tweepy package not found, please install it with `pip install tweepy`\"\n )\n return tweepy\n[docs]class TwitterTweetLoader(BaseLoader):\n \"\"\"Twitter tweets loader.\n Read tweets of user twitter handle.\n First you need to go to\n `https://developer.twitter.com/en/docs/twitter-api\n /getting-started/getting-access-to-the-twitter-api`\n to get your token. And create a v2 version of the app.\n \"\"\"\n def __init__(\n self,\n auth_handler: Union[OAuthHandler, OAuth2BearerHandler],\n twitter_users: Sequence[str],\n number_tweets: Optional[int] = 100,\n ):\n self.auth = auth_handler\n self.twitter_users = twitter_users\n self.number_tweets = number_tweets\n[docs] def load(self) -> List[Document]:\n \"\"\"Load tweets.\"\"\"\n tweepy = _dependable_tweepy_import()\n api = tweepy.API(self.auth, parser=tweepy.parsers.JSONParser())\n results: List[Document] = []\n for username in self.twitter_users:\n tweets = api.user_timeline(screen_name=username, count=self.number_tweets)\n user = api.get_user(screen_name=username)", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/twitter.html"}
+{"id": "d294aef3ff24-1", "text": "user = api.get_user(screen_name=username)\n docs = self._format_tweets(tweets, user)\n results.extend(docs)\n return results\n def _format_tweets(\n self, tweets: List[Dict[str, Any]], user_info: dict\n ) -> Iterable[Document]:\n \"\"\"Format tweets into a string.\"\"\"\n for tweet in tweets:\n metadata = {\n \"created_at\": tweet[\"created_at\"],\n \"user_info\": user_info,\n }\n yield Document(\n page_content=tweet[\"text\"],\n metadata=metadata,\n )\n[docs] @classmethod\n def from_bearer_token(\n cls,\n oauth2_bearer_token: str,\n twitter_users: Sequence[str],\n number_tweets: Optional[int] = 100,\n ) -> TwitterTweetLoader:\n \"\"\"Create a TwitterTweetLoader from OAuth2 bearer token.\"\"\"\n tweepy = _dependable_tweepy_import()\n auth = tweepy.OAuth2BearerHandler(oauth2_bearer_token)\n return cls(\n auth_handler=auth,\n twitter_users=twitter_users,\n number_tweets=number_tweets,\n )\n[docs] @classmethod\n def from_secrets(\n cls,\n access_token: str,\n access_token_secret: str,\n consumer_key: str,\n consumer_secret: str,\n twitter_users: Sequence[str],\n number_tweets: Optional[int] = 100,\n ) -> TwitterTweetLoader:\n \"\"\"Create a TwitterTweetLoader from access tokens and secrets.\"\"\"\n tweepy = _dependable_tweepy_import()\n auth = tweepy.OAuthHandler(\n access_token=access_token,\n access_token_secret=access_token_secret,", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/twitter.html"}
+{"id": "d294aef3ff24-2", "text": "access_token=access_token,\n access_token_secret=access_token_secret,\n consumer_key=consumer_key,\n consumer_secret=consumer_secret,\n )\n return cls(\n auth_handler=auth,\n twitter_users=twitter_users,\n number_tweets=number_tweets,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/twitter.html"}
+{"id": "947d77b7e87a-0", "text": "Source code for langchain.document_loaders.azlyrics\n\"\"\"Loader that loads AZLyrics.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.web_base import WebBaseLoader\n[docs]class AZLyricsLoader(WebBaseLoader):\n \"\"\"Loader that loads AZLyrics webpages.\"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Load webpage.\"\"\"\n soup = self.scrape()\n title = soup.title.text\n lyrics = soup.find_all(\"div\", {\"class\": \"\"})[2].text\n text = title + lyrics\n metadata = {\"source\": self.web_path}\n return [Document(page_content=text, metadata=metadata)]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/azlyrics.html"}
+{"id": "68c5d2580ba5-0", "text": "Source code for langchain.document_loaders.spreedly\n\"\"\"Loader that fetches data from Spreedly API.\"\"\"\nimport json\nimport urllib.request\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import stringify_dict\nSPREEDLY_ENDPOINTS = {\n \"gateways_options\": \"https://core.spreedly.com/v1/gateways_options.json\",\n \"gateways\": \"https://core.spreedly.com/v1/gateways.json\",\n \"receivers_options\": \"https://core.spreedly.com/v1/receivers_options.json\",\n \"receivers\": \"https://core.spreedly.com/v1/receivers.json\",\n \"payment_methods\": \"https://core.spreedly.com/v1/payment_methods.json\",\n \"certificates\": \"https://core.spreedly.com/v1/certificates.json\",\n \"transactions\": \"https://core.spreedly.com/v1/transactions.json\",\n \"environments\": \"https://core.spreedly.com/v1/environments.json\",\n}\n[docs]class SpreedlyLoader(BaseLoader):\n def __init__(self, access_token: str, resource: str) -> None:\n self.access_token = access_token\n self.resource = resource\n self.headers = {\n \"Authorization\": f\"Bearer {self.access_token}\",\n \"Accept\": \"application/json\",\n }\n def _make_request(self, url: str) -> List[Document]:\n request = urllib.request.Request(url, headers=self.headers)\n with urllib.request.urlopen(request) as response:\n json_data = json.loads(response.read().decode())\n text = stringify_dict(json_data)\n metadata = {\"source\": url}", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/spreedly.html"}
+{"id": "68c5d2580ba5-1", "text": "text = stringify_dict(json_data)\n metadata = {\"source\": url}\n return [Document(page_content=text, metadata=metadata)]\n def _get_resource(self) -> List[Document]:\n endpoint = SPREEDLY_ENDPOINTS.get(self.resource)\n if endpoint is None:\n return []\n return self._make_request(endpoint)\n[docs] def load(self) -> List[Document]:\n return self._get_resource()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/spreedly.html"}
+{"id": "7c5100ee3692-0", "text": "Source code for langchain.document_loaders.markdown\n\"\"\"Loader that loads Markdown files.\"\"\"\nfrom typing import List\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class UnstructuredMarkdownLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load markdown files.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.__version__ import __version__ as __unstructured_version__\n from unstructured.partition.md import partition_md\n # NOTE(MthwRobinson) - enables the loader to work when you're using pre-release\n # versions of unstructured like 0.4.17-dev1\n _unstructured_version = __unstructured_version__.split(\"-\")[0]\n unstructured_version = tuple([int(x) for x in _unstructured_version.split(\".\")])\n if unstructured_version < (0, 4, 16):\n raise ValueError(\n f\"You are on unstructured version {__unstructured_version__}. \"\n \"Partitioning markdown files is only supported in unstructured>=0.4.16.\"\n )\n return partition_md(filename=self.file_path, **self.unstructured_kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/markdown.html"}
+{"id": "8303d78970b9-0", "text": "Source code for langchain.document_loaders.iugu\n\"\"\"Loader that fetches data from IUGU\"\"\"\nimport json\nimport urllib.request\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import get_from_env, stringify_dict\nIUGU_ENDPOINTS = {\n \"invoices\": \"https://api.iugu.com/v1/invoices\",\n \"customers\": \"https://api.iugu.com/v1/customers\",\n \"charges\": \"https://api.iugu.com/v1/charges\",\n \"subscriptions\": \"https://api.iugu.com/v1/subscriptions\",\n \"plans\": \"https://api.iugu.com/v1/plans\",\n}\n[docs]class IuguLoader(BaseLoader):\n def __init__(self, resource: str, api_token: Optional[str] = None) -> None:\n self.resource = resource\n api_token = api_token or get_from_env(\"api_token\", \"IUGU_API_TOKEN\")\n self.headers = {\"Authorization\": f\"Bearer {api_token}\"}\n def _make_request(self, url: str) -> List[Document]:\n request = urllib.request.Request(url, headers=self.headers)\n with urllib.request.urlopen(request) as response:\n json_data = json.loads(response.read().decode())\n text = stringify_dict(json_data)\n metadata = {\"source\": url}\n return [Document(page_content=text, metadata=metadata)]\n def _get_resource(self) -> List[Document]:\n endpoint = IUGU_ENDPOINTS.get(self.resource)\n if endpoint is None:\n return []\n return self._make_request(endpoint)\n[docs] def load(self) -> List[Document]:\n return self._get_resource()\nBy Harrison Chase", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/iugu.html"}
+{"id": "8303d78970b9-1", "text": "return self._get_resource()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/iugu.html"}
+{"id": "f5a2602fee97-0", "text": "Source code for langchain.document_loaders.web_base\n\"\"\"Web base loader class.\"\"\"\nimport asyncio\nimport logging\nimport warnings\nfrom typing import Any, Dict, List, Optional, Union\nimport aiohttp\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\ndefault_header_template = {\n \"User-Agent\": \"\",\n \"Accept\": \"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*\"\n \";q=0.8\",\n \"Accept-Language\": \"en-US,en;q=0.5\",\n \"Referer\": \"https://www.google.com/\",\n \"DNT\": \"1\",\n \"Connection\": \"keep-alive\",\n \"Upgrade-Insecure-Requests\": \"1\",\n}\ndef _build_metadata(soup: Any, url: str) -> dict:\n \"\"\"Build metadata from BeautifulSoup output.\"\"\"\n metadata = {\"source\": url}\n if title := soup.find(\"title\"):\n metadata[\"title\"] = title.get_text()\n if description := soup.find(\"meta\", attrs={\"name\": \"description\"}):\n metadata[\"description\"] = description.get(\"content\", None)\n if html := soup.find(\"html\"):\n metadata[\"language\"] = html.get(\"lang\", None)\n return metadata\n[docs]class WebBaseLoader(BaseLoader):\n \"\"\"Loader that uses urllib and beautiful soup to load webpages.\"\"\"\n web_paths: List[str]\n requests_per_second: int = 2\n \"\"\"Max number of concurrent requests to make.\"\"\"\n default_parser: str = \"html.parser\"\n \"\"\"Default parser to use for BeautifulSoup.\"\"\"\n requests_kwargs: Dict[str, Any] = {}\n \"\"\"kwargs for requests\"\"\"\n def __init__(", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/web_base.html"}
+{"id": "f5a2602fee97-1", "text": "\"\"\"kwargs for requests\"\"\"\n def __init__(\n self, web_path: Union[str, List[str]], header_template: Optional[dict] = None\n ):\n \"\"\"Initialize with webpage path.\"\"\"\n # TODO: Deprecate web_path in favor of web_paths, and remove this\n # left like this because there are a number of loaders that expect single\n # urls\n if isinstance(web_path, str):\n self.web_paths = [web_path]\n elif isinstance(web_path, List):\n self.web_paths = web_path\n self.session = requests.Session()\n try:\n import bs4 # noqa:F401\n except ImportError:\n raise ValueError(\n \"bs4 package not found, please install it with \" \"`pip install bs4`\"\n )\n headers = header_template or default_header_template\n if not headers.get(\"User-Agent\"):\n try:\n from fake_useragent import UserAgent\n headers[\"User-Agent\"] = UserAgent().random\n except ImportError:\n logger.info(\n \"fake_useragent not found, using default user agent.\"\n \"To get a realistic header for requests, \"\n \"`pip install fake_useragent`.\"\n )\n self.session.headers = dict(headers)\n @property\n def web_path(self) -> str:\n if len(self.web_paths) > 1:\n raise ValueError(\"Multiple webpaths found.\")\n return self.web_paths[0]\n async def _fetch(\n self, url: str, retries: int = 3, cooldown: int = 2, backoff: float = 1.5\n ) -> str:\n async with aiohttp.ClientSession() as session:\n for i in range(retries):\n try:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/web_base.html"}
+{"id": "f5a2602fee97-2", "text": "for i in range(retries):\n try:\n async with session.get(\n url, headers=self.session.headers\n ) as response:\n return await response.text()\n except aiohttp.ClientConnectionError as e:\n if i == retries - 1:\n raise\n else:\n logger.warning(\n f\"Error fetching {url} with attempt \"\n f\"{i + 1}/{retries}: {e}. Retrying...\"\n )\n await asyncio.sleep(cooldown * backoff**i)\n raise ValueError(\"retry count exceeded\")\n async def _fetch_with_rate_limit(\n self, url: str, semaphore: asyncio.Semaphore\n ) -> str:\n async with semaphore:\n return await self._fetch(url)\n[docs] async def fetch_all(self, urls: List[str]) -> Any:\n \"\"\"Fetch all urls concurrently with rate limiting.\"\"\"\n semaphore = asyncio.Semaphore(self.requests_per_second)\n tasks = []\n for url in urls:\n task = asyncio.ensure_future(self._fetch_with_rate_limit(url, semaphore))\n tasks.append(task)\n try:\n from tqdm.asyncio import tqdm_asyncio\n return await tqdm_asyncio.gather(\n *tasks, desc=\"Fetching pages\", ascii=True, mininterval=1\n )\n except ImportError:\n warnings.warn(\"For better logging of progress, `pip install tqdm`\")\n return await asyncio.gather(*tasks)\n @staticmethod\n def _check_parser(parser: str) -> None:\n \"\"\"Check that parser is valid for bs4.\"\"\"\n valid_parsers = [\"html.parser\", \"lxml\", \"xml\", \"lxml-xml\", \"html5lib\"]\n if parser not in valid_parsers:\n raise ValueError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/web_base.html"}
+{"id": "f5a2602fee97-3", "text": "if parser not in valid_parsers:\n raise ValueError(\n \"`parser` must be one of \" + \", \".join(valid_parsers) + \".\"\n )\n[docs] def scrape_all(self, urls: List[str], parser: Union[str, None] = None) -> List[Any]:\n \"\"\"Fetch all urls, then return soups for all results.\"\"\"\n from bs4 import BeautifulSoup\n results = asyncio.run(self.fetch_all(urls))\n final_results = []\n for i, result in enumerate(results):\n url = urls[i]\n if parser is None:\n if url.endswith(\".xml\"):\n parser = \"xml\"\n else:\n parser = self.default_parser\n self._check_parser(parser)\n final_results.append(BeautifulSoup(result, parser))\n return final_results\n def _scrape(self, url: str, parser: Union[str, None] = None) -> Any:\n from bs4 import BeautifulSoup\n if parser is None:\n if url.endswith(\".xml\"):\n parser = \"xml\"\n else:\n parser = self.default_parser\n self._check_parser(parser)\n html_doc = self.session.get(url, **self.requests_kwargs)\n html_doc.encoding = html_doc.apparent_encoding\n return BeautifulSoup(html_doc.text, parser)\n[docs] def scrape(self, parser: Union[str, None] = None) -> Any:\n \"\"\"Scrape data from webpage and return it in BeautifulSoup format.\"\"\"\n if parser is None:\n parser = self.default_parser\n return self._scrape(self.web_path, parser)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load text from the url(s) in web_path.\"\"\"\n docs = []\n for path in self.web_paths:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/web_base.html"}
+{"id": "f5a2602fee97-4", "text": "docs = []\n for path in self.web_paths:\n soup = self._scrape(path)\n text = soup.get_text()\n metadata = _build_metadata(soup, path)\n docs.append(Document(page_content=text, metadata=metadata))\n return docs\n[docs] def aload(self) -> List[Document]:\n \"\"\"Load text from the urls in web_path async into Documents.\"\"\"\n results = self.scrape_all(self.web_paths)\n docs = []\n for i in range(len(results)):\n soup = results[i]\n text = soup.get_text()\n metadata = _build_metadata(soup, self.web_paths[i])\n docs.append(Document(page_content=text, metadata=metadata))\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/web_base.html"}
+{"id": "812ee41a02cb-0", "text": "Source code for langchain.document_loaders.youtube\n\"\"\"Loader that loads YouTube transcript.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Sequence, Union\nfrom urllib.parse import parse_qs, urlparse\nfrom pydantic import root_validator\nfrom pydantic.dataclasses import dataclass\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\nSCOPES = [\"https://www.googleapis.com/auth/youtube.readonly\"]\n[docs]@dataclass\nclass GoogleApiClient:\n \"\"\"A Generic Google Api Client.\n To use, you should have the ``google_auth_oauthlib,youtube_transcript_api,google``\n python package installed.\n As the google api expects credentials you need to set up a google account and\n register your Service. \"https://developers.google.com/docs/api/quickstart/python\"\n Example:\n .. code-block:: python\n from langchain.document_loaders import GoogleApiClient\n google_api_client = GoogleApiClient(\n service_account_path=Path(\"path_to_your_sec_file.json\")\n )\n \"\"\"\n credentials_path: Path = Path.home() / \".credentials\" / \"credentials.json\"\n service_account_path: Path = Path.home() / \".credentials\" / \"credentials.json\"\n token_path: Path = Path.home() / \".credentials\" / \"token.json\"\n def __post_init__(self) -> None:\n self.creds = self._load_credentials()\n[docs] @root_validator\n def validate_channel_or_videoIds_is_set(\n cls, values: Dict[str, Any]\n ) -> Dict[str, Any]:\n \"\"\"Validate that either folder_id or document_ids is set, but not both.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"}
+{"id": "812ee41a02cb-1", "text": "\"\"\"Validate that either folder_id or document_ids is set, but not both.\"\"\"\n if not values.get(\"credentials_path\") and not values.get(\n \"service_account_path\"\n ):\n raise ValueError(\"Must specify either channel_name or video_ids\")\n return values\n def _load_credentials(self) -> Any:\n \"\"\"Load credentials.\"\"\"\n # Adapted from https://developers.google.com/drive/api/v3/quickstart/python\n try:\n from google.auth.transport.requests import Request\n from google.oauth2 import service_account\n from google.oauth2.credentials import Credentials\n from google_auth_oauthlib.flow import InstalledAppFlow\n from youtube_transcript_api import YouTubeTranscriptApi # noqa: F401\n except ImportError:\n raise ImportError(\n \"You must run\"\n \"`pip install --upgrade \"\n \"google-api-python-client google-auth-httplib2 \"\n \"google-auth-oauthlib \"\n \"youtube-transcript-api` \"\n \"to use the Google Drive loader\"\n )\n creds = None\n if self.service_account_path.exists():\n return service_account.Credentials.from_service_account_file(\n str(self.service_account_path)\n )\n if self.token_path.exists():\n creds = Credentials.from_authorized_user_file(str(self.token_path), SCOPES)\n if not creds or not creds.valid:\n if creds and creds.expired and creds.refresh_token:\n creds.refresh(Request())\n else:\n flow = InstalledAppFlow.from_client_secrets_file(\n str(self.credentials_path), SCOPES\n )\n creds = flow.run_local_server(port=0)\n with open(self.token_path, \"w\") as token:\n token.write(creds.to_json())\n return creds", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"}
+{"id": "812ee41a02cb-2", "text": "token.write(creds.to_json())\n return creds\nALLOWED_SCHEMAS = {\"http\", \"https\"}\nALLOWED_NETLOCK = {\n \"youtu.be\",\n \"m.youtube.com\",\n \"youtube.com\",\n \"www.youtube.com\",\n \"www.youtube-nocookie.com\",\n \"vid.plus\",\n}\ndef _parse_video_id(url: str) -> Optional[str]:\n \"\"\"Parse a youtube url and return the video id if valid, otherwise None.\"\"\"\n parsed_url = urlparse(url)\n if parsed_url.scheme not in ALLOWED_SCHEMAS:\n return None\n if parsed_url.netloc not in ALLOWED_NETLOCK:\n return None\n path = parsed_url.path\n if path.endswith(\"/watch\"):\n query = parsed_url.query\n parsed_query = parse_qs(query)\n if \"v\" in parsed_query:\n ids = parsed_query[\"v\"]\n video_id = ids if isinstance(ids, str) else ids[0]\n else:\n return None\n else:\n path = parsed_url.path.lstrip(\"/\")\n video_id = path.split(\"/\")[-1]\n if len(video_id) != 11: # Video IDs are 11 characters long\n return None\n return video_id\n[docs]class YoutubeLoader(BaseLoader):\n \"\"\"Loader that loads Youtube transcripts.\"\"\"\n def __init__(\n self,\n video_id: str,\n add_video_info: bool = False,\n language: Union[str, Sequence[str]] = \"en\",\n translation: str = \"en\",\n continue_on_failure: bool = False,\n ):\n \"\"\"Initialize with YouTube video ID.\"\"\"\n self.video_id = video_id\n self.add_video_info = add_video_info\n self.language = language", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"}
+{"id": "812ee41a02cb-3", "text": "self.add_video_info = add_video_info\n self.language = language\n if isinstance(language, str):\n self.language = [language]\n else:\n self.language = language\n self.translation = translation\n self.continue_on_failure = continue_on_failure\n[docs] @staticmethod\n def extract_video_id(youtube_url: str) -> str:\n \"\"\"Extract video id from common YT urls.\"\"\"\n video_id = _parse_video_id(youtube_url)\n if not video_id:\n raise ValueError(\n f\"Could not determine the video ID for the URL {youtube_url}\"\n )\n return video_id\n[docs] @classmethod\n def from_youtube_url(cls, youtube_url: str, **kwargs: Any) -> YoutubeLoader:\n \"\"\"Given youtube URL, load video.\"\"\"\n video_id = cls.extract_video_id(youtube_url)\n return cls(video_id, **kwargs)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n from youtube_transcript_api import (\n NoTranscriptFound,\n TranscriptsDisabled,\n YouTubeTranscriptApi,\n )\n except ImportError:\n raise ImportError(\n \"Could not import youtube_transcript_api python package. \"\n \"Please install it with `pip install youtube-transcript-api`.\"\n )\n metadata = {\"source\": self.video_id}\n if self.add_video_info:\n # Get more video meta info\n # Such as title, description, thumbnail url, publish_date\n video_info = self._get_video_info()\n metadata.update(video_info)\n try:\n transcript_list = YouTubeTranscriptApi.list_transcripts(self.video_id)\n except TranscriptsDisabled:\n return []\n try:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"}
+{"id": "812ee41a02cb-4", "text": "except TranscriptsDisabled:\n return []\n try:\n transcript = transcript_list.find_transcript(self.language)\n except NoTranscriptFound:\n en_transcript = transcript_list.find_transcript([\"en\"])\n transcript = en_transcript.translate(self.translation)\n transcript_pieces = transcript.fetch()\n transcript = \" \".join([t[\"text\"].strip(\" \") for t in transcript_pieces])\n return [Document(page_content=transcript, metadata=metadata)]\n def _get_video_info(self) -> dict:\n \"\"\"Get important video information.\n Components are:\n - title\n - description\n - thumbnail url,\n - publish_date\n - channel_author\n - and more.\n \"\"\"\n try:\n from pytube import YouTube\n except ImportError:\n raise ImportError(\n \"Could not import pytube python package. \"\n \"Please install it with `pip install pytube`.\"\n )\n yt = YouTube(f\"https://www.youtube.com/watch?v={self.video_id}\")\n video_info = {\n \"title\": yt.title or \"Unknown\",\n \"description\": yt.description or \"Unknown\",\n \"view_count\": yt.views or 0,\n \"thumbnail_url\": yt.thumbnail_url or \"Unknown\",\n \"publish_date\": yt.publish_date.strftime(\"%Y-%m-%d %H:%M:%S\")\n if yt.publish_date\n else \"Unknown\",\n \"length\": yt.length or 0,\n \"author\": yt.author or \"Unknown\",\n }\n return video_info\n[docs]@dataclass\nclass GoogleApiYoutubeLoader(BaseLoader):\n \"\"\"Loader that loads all Videos from a Channel\n To use, you should have the ``googleapiclient,youtube_transcript_api``", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"}
+{"id": "812ee41a02cb-5", "text": "To use, you should have the ``googleapiclient,youtube_transcript_api``\n python package installed.\n As the service needs a google_api_client, you first have to initialize\n the GoogleApiClient.\n Additionally you have to either provide a channel name or a list of videoids\n \"https://developers.google.com/docs/api/quickstart/python\"\n Example:\n .. code-block:: python\n from langchain.document_loaders import GoogleApiClient\n from langchain.document_loaders import GoogleApiYoutubeLoader\n google_api_client = GoogleApiClient(\n service_account_path=Path(\"path_to_your_sec_file.json\")\n )\n loader = GoogleApiYoutubeLoader(\n google_api_client=google_api_client,\n channel_name = \"CodeAesthetic\"\n )\n load.load()\n \"\"\"\n google_api_client: GoogleApiClient\n channel_name: Optional[str] = None\n video_ids: Optional[List[str]] = None\n add_video_info: bool = True\n captions_language: str = \"en\"\n continue_on_failure: bool = False\n def __post_init__(self) -> None:\n self.youtube_client = self._build_youtube_client(self.google_api_client.creds)\n def _build_youtube_client(self, creds: Any) -> Any:\n try:\n from googleapiclient.discovery import build\n from youtube_transcript_api import YouTubeTranscriptApi # noqa: F401\n except ImportError:\n raise ImportError(\n \"You must run\"\n \"`pip install --upgrade \"\n \"google-api-python-client google-auth-httplib2 \"\n \"google-auth-oauthlib \"\n \"youtube-transcript-api` \"\n \"to use the Google Drive loader\"\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"}
+{"id": "812ee41a02cb-6", "text": "\"to use the Google Drive loader\"\n )\n return build(\"youtube\", \"v3\", credentials=creds)\n[docs] @root_validator\n def validate_channel_or_videoIds_is_set(\n cls, values: Dict[str, Any]\n ) -> Dict[str, Any]:\n \"\"\"Validate that either folder_id or document_ids is set, but not both.\"\"\"\n if not values.get(\"channel_name\") and not values.get(\"video_ids\"):\n raise ValueError(\"Must specify either channel_name or video_ids\")\n return values\n def _get_transcripe_for_video_id(self, video_id: str) -> str:\n from youtube_transcript_api import NoTranscriptFound, YouTubeTranscriptApi\n transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)\n try:\n transcript = transcript_list.find_transcript([self.captions_language])\n except NoTranscriptFound:\n for available_transcript in transcript_list:\n transcript = available_transcript.translate(self.captions_language)\n continue\n transcript_pieces = transcript.fetch()\n return \" \".join([t[\"text\"].strip(\" \") for t in transcript_pieces])\n def _get_document_for_video_id(self, video_id: str, **kwargs: Any) -> Document:\n captions = self._get_transcripe_for_video_id(video_id)\n video_response = (\n self.youtube_client.videos()\n .list(\n part=\"id,snippet\",\n id=video_id,\n )\n .execute()\n )\n return Document(\n page_content=captions,\n metadata=video_response.get(\"items\")[0],\n )\n def _get_channel_id(self, channel_name: str) -> str:\n request = self.youtube_client.search().list(", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"}
+{"id": "812ee41a02cb-7", "text": "request = self.youtube_client.search().list(\n part=\"id\",\n q=channel_name,\n type=\"channel\",\n maxResults=1, # we only need one result since channel names are unique\n )\n response = request.execute()\n channel_id = response[\"items\"][0][\"id\"][\"channelId\"]\n return channel_id\n def _get_document_for_channel(self, channel: str, **kwargs: Any) -> List[Document]:\n try:\n from youtube_transcript_api import (\n NoTranscriptFound,\n TranscriptsDisabled,\n )\n except ImportError:\n raise ImportError(\n \"You must run\"\n \"`pip install --upgrade \"\n \"youtube-transcript-api` \"\n \"to use the youtube loader\"\n )\n channel_id = self._get_channel_id(channel)\n request = self.youtube_client.search().list(\n part=\"id,snippet\",\n channelId=channel_id,\n maxResults=50, # adjust this value to retrieve more or fewer videos\n )\n video_ids = []\n while request is not None:\n response = request.execute()\n # Add each video ID to the list\n for item in response[\"items\"]:\n if not item[\"id\"].get(\"videoId\"):\n continue\n meta_data = {\"videoId\": item[\"id\"][\"videoId\"]}\n if self.add_video_info:\n item[\"snippet\"].pop(\"thumbnails\")\n meta_data.update(item[\"snippet\"])\n try:\n page_content = self._get_transcripe_for_video_id(\n item[\"id\"][\"videoId\"]\n )\n video_ids.append(\n Document(\n page_content=page_content,\n metadata=meta_data,\n )\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"}
+{"id": "812ee41a02cb-8", "text": "metadata=meta_data,\n )\n )\n except (TranscriptsDisabled, NoTranscriptFound) as e:\n if self.continue_on_failure:\n logger.error(\n \"Error fetching transscript \"\n + f\" {item['id']['videoId']}, exception: {e}\"\n )\n else:\n raise e\n pass\n request = self.youtube_client.search().list_next(request, response)\n return video_ids\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n document_list = []\n if self.channel_name:\n document_list.extend(self._get_document_for_channel(self.channel_name))\n elif self.video_ids:\n document_list.extend(\n [\n self._get_document_for_video_id(video_id)\n for video_id in self.video_ids\n ]\n )\n else:\n raise ValueError(\"Must specify either channel_name or video_ids\")\n return document_list\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"}
+{"id": "d13488dbf3a6-0", "text": "Source code for langchain.document_loaders.onedrive\n\"\"\"Loader that loads data from OneDrive\"\"\"\nfrom __future__ import annotations\nimport logging\nimport os\nimport tempfile\nfrom enum import Enum\nfrom pathlib import Path\nfrom typing import TYPE_CHECKING, Dict, List, Optional, Type, Union\nfrom pydantic import BaseModel, BaseSettings, Field, FilePath, SecretStr\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.onedrive_file import OneDriveFileLoader\nif TYPE_CHECKING:\n from O365 import Account\n from O365.drive import Drive, Folder\nSCOPES = [\"offline_access\", \"Files.Read.All\"]\nlogger = logging.getLogger(__name__)\nclass _OneDriveSettings(BaseSettings):\n client_id: str = Field(..., env=\"O365_CLIENT_ID\")\n client_secret: SecretStr = Field(..., env=\"O365_CLIENT_SECRET\")\n class Config:\n env_prefix = \"\"\n case_sentive = False\n env_file = \".env\"\nclass _OneDriveTokenStorage(BaseSettings):\n token_path: FilePath = Field(Path.home() / \".credentials\" / \"o365_token.txt\")\nclass _FileType(str, Enum):\n DOC = \"doc\"\n DOCX = \"docx\"\n PDF = \"pdf\"\nclass _SupportedFileTypes(BaseModel):\n file_types: List[_FileType]\n def fetch_mime_types(self) -> Dict[str, str]:\n mime_types_mapping = {}\n for file_type in self.file_types:\n if file_type.value == \"doc\":\n mime_types_mapping[file_type.value] = \"application/msword\"\n elif file_type.value == \"docx\":\n mime_types_mapping[\n file_type.value", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/onedrive.html"}
+{"id": "d13488dbf3a6-1", "text": "mime_types_mapping[\n file_type.value\n ] = \"application/vnd.openxmlformats-officedocument.wordprocessingml.document\" # noqa: E501\n elif file_type.value == \"pdf\":\n mime_types_mapping[file_type.value] = \"application/pdf\"\n return mime_types_mapping\n[docs]class OneDriveLoader(BaseLoader, BaseModel):\n settings: _OneDriveSettings = Field(default_factory=_OneDriveSettings)\n drive_id: str = Field(...)\n folder_path: Optional[str] = None\n object_ids: Optional[List[str]] = None\n auth_with_token: bool = False\n def _auth(self) -> Type[Account]:\n \"\"\"\n Authenticates the OneDrive API client using the specified\n authentication method and returns the Account object.\n Returns:\n Type[Account]: The authenticated Account object.\n \"\"\"\n try:\n from O365 import FileSystemTokenBackend\n except ImportError:\n raise ImportError(\n \"O365 package not found, please install it with `pip install o365`\"\n )\n if self.auth_with_token:\n token_storage = _OneDriveTokenStorage()\n token_path = token_storage.token_path\n token_backend = FileSystemTokenBackend(\n token_path=token_path.parent, token_filename=token_path.name\n )\n account = Account(\n credentials=(\n self.settings.client_id,\n self.settings.client_secret.get_secret_value(),\n ),\n scopes=SCOPES,\n token_backend=token_backend,\n **{\"raise_http_errors\": False},\n )\n else:\n token_backend = FileSystemTokenBackend(\n token_path=Path.home() / \".credentials\"\n )\n account = Account(\n credentials=(\n self.settings.client_id,", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/onedrive.html"}
+{"id": "d13488dbf3a6-2", "text": ")\n account = Account(\n credentials=(\n self.settings.client_id,\n self.settings.client_secret.get_secret_value(),\n ),\n scopes=SCOPES,\n token_backend=token_backend,\n **{\"raise_http_errors\": False},\n )\n # make the auth\n account.authenticate()\n return account\n def _get_folder_from_path(self, drive: Type[Drive]) -> Union[Folder, Drive]:\n \"\"\"\n Returns the folder or drive object located at the\n specified path relative to the given drive.\n Args:\n drive (Type[Drive]): The root drive from which the folder path is relative.\n Returns:\n Union[Folder, Drive]: The folder or drive object\n located at the specified path.\n Raises:\n FileNotFoundError: If the path does not exist.\n \"\"\"\n subfolder_drive = drive\n if self.folder_path is None:\n return subfolder_drive\n subfolders = [f for f in self.folder_path.split(\"/\") if f != \"\"]\n if len(subfolders) == 0:\n return subfolder_drive\n items = subfolder_drive.get_items()\n for subfolder in subfolders:\n try:\n subfolder_drive = list(filter(lambda x: subfolder in x.name, items))[0]\n items = subfolder_drive.get_items()\n except (IndexError, AttributeError):\n raise FileNotFoundError(\"Path {} not exist.\".format(self.folder_path))\n return subfolder_drive\n def _load_from_folder(self, folder: Type[Folder]) -> List[Document]:\n \"\"\"\n Loads all supported document files from the specified folder\n and returns a list of Document objects.\n Args:\n folder (Type[Folder]): The folder object to load the documents from.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/onedrive.html"}
+{"id": "d13488dbf3a6-3", "text": "folder (Type[Folder]): The folder object to load the documents from.\n Returns:\n List[Document]: A list of Document objects representing\n the loaded documents.\n \"\"\"\n docs = []\n file_types = _SupportedFileTypes(file_types=[\"doc\", \"docx\", \"pdf\"])\n file_mime_types = file_types.fetch_mime_types()\n items = folder.get_items()\n with tempfile.TemporaryDirectory() as temp_dir:\n file_path = f\"{temp_dir}\"\n os.makedirs(os.path.dirname(file_path), exist_ok=True)\n for file in items:\n if file.is_file:\n if file.mime_type in list(file_mime_types.values()):\n loader = OneDriveFileLoader(file=file)\n docs.extend(loader.load())\n return docs\n def _load_from_object_ids(self, drive: Type[Drive]) -> List[Document]:\n \"\"\"\n Loads all supported document files from the specified OneDrive\n drive based on their object IDs and returns a list\n of Document objects.\n Args:\n drive (Type[Drive]): The OneDrive drive object\n to load the documents from.\n Returns:\n List[Document]: A list of Document objects representing\n the loaded documents.\n \"\"\"\n docs = []\n file_types = _SupportedFileTypes(file_types=[\"doc\", \"docx\", \"pdf\"])\n file_mime_types = file_types.fetch_mime_types()\n with tempfile.TemporaryDirectory() as temp_dir:\n file_path = f\"{temp_dir}\"\n os.makedirs(os.path.dirname(file_path), exist_ok=True)\n for object_id in self.object_ids if self.object_ids else [\"\"]:\n file = drive.get_item(object_id)\n if not file:\n logging.warning(\n \"There isn't a file with \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/onedrive.html"}
+{"id": "d13488dbf3a6-4", "text": "logging.warning(\n \"There isn't a file with \"\n f\"object_id {object_id} in drive {drive}.\"\n )\n continue\n if file.is_file:\n if file.mime_type in list(file_mime_types.values()):\n loader = OneDriveFileLoader(file=file)\n docs.extend(loader.load())\n return docs\n[docs] def load(self) -> List[Document]:\n \"\"\"\n Loads all supported document files from the specified OneDrive drive a\n nd returns a list of Document objects.\n Returns:\n List[Document]: A list of Document objects\n representing the loaded documents.\n Raises:\n ValueError: If the specified drive ID\n does not correspond to a drive in the OneDrive storage.\n \"\"\"\n account = self._auth()\n storage = account.storage()\n drive = storage.get_drive(self.drive_id)\n docs: List[Document] = []\n if not drive:\n raise ValueError(f\"There isn't a drive with id {self.drive_id}.\")\n if self.folder_path:\n folder = self._get_folder_from_path(drive=drive)\n docs.extend(self._load_from_folder(folder=folder))\n elif self.object_ids:\n docs.extend(self._load_from_object_ids(drive=drive))\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/onedrive.html"}
+{"id": "1c482ce05450-0", "text": "Source code for langchain.document_loaders.github\nfrom abc import ABC\nfrom datetime import datetime\nfrom typing import Dict, Iterator, List, Literal, Optional, Union\nimport requests\nfrom pydantic import BaseModel, root_validator, validator\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import get_from_dict_or_env\nclass BaseGitHubLoader(BaseLoader, BaseModel, ABC):\n \"\"\"Load issues of a GitHub repository.\"\"\"\n repo: str\n \"\"\"Name of repository\"\"\"\n access_token: str\n \"\"\"Personal access token - see https://github.com/settings/tokens?type=beta\"\"\"\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that access token exists in environment.\"\"\"\n values[\"access_token\"] = get_from_dict_or_env(\n values, \"access_token\", \"GITHUB_PERSONAL_ACCESS_TOKEN\"\n )\n return values\n @property\n def headers(self) -> Dict[str, str]:\n return {\n \"Accept\": \"application/vnd.github+json\",\n \"Authorization\": f\"Bearer {self.access_token}\",\n }\n[docs]class GitHubIssuesLoader(BaseGitHubLoader):\n include_prs: bool = True\n \"\"\"If True include Pull Requests in results, otherwise ignore them.\"\"\"\n milestone: Union[int, Literal[\"*\", \"none\"], None] = None\n \"\"\"If integer is passed, it should be a milestone's number field.\n If the string '*' is passed, issues with any milestone are accepted.\n If the string 'none' is passed, issues without milestones are returned.\n \"\"\"\n state: Optional[Literal[\"open\", \"closed\", \"all\"]] = None", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/github.html"}
+{"id": "1c482ce05450-1", "text": "state: Optional[Literal[\"open\", \"closed\", \"all\"]] = None\n \"\"\"Filter on issue state. Can be one of: 'open', 'closed', 'all'.\"\"\"\n assignee: Optional[str] = None\n \"\"\"Filter on assigned user. Pass 'none' for no user and '*' for any user.\"\"\"\n creator: Optional[str] = None\n \"\"\"Filter on the user that created the issue.\"\"\"\n mentioned: Optional[str] = None\n \"\"\"Filter on a user that's mentioned in the issue.\"\"\"\n labels: Optional[List[str]] = None\n \"\"\"Label names to filter one. Example: bug,ui,@high.\"\"\"\n sort: Optional[Literal[\"created\", \"updated\", \"comments\"]] = None\n \"\"\"What to sort results by. Can be one of: 'created', 'updated', 'comments'.\n Default is 'created'.\"\"\"\n direction: Optional[Literal[\"asc\", \"desc\"]] = None\n \"\"\"The direction to sort the results by. Can be one of: 'asc', 'desc'.\"\"\"\n since: Optional[str] = None\n \"\"\"Only show notifications updated after the given time.\n This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.\"\"\"\n @validator(\"since\")\n def validate_since(cls, v: Optional[str]) -> Optional[str]:\n if v:\n try:\n datetime.strptime(v, \"%Y-%m-%dT%H:%M:%SZ\")\n except ValueError:\n raise ValueError(\n \"Invalid value for 'since'. Expected a date string in \"\n f\"YYYY-MM-DDTHH:MM:SSZ format. Received: {v}\"\n )\n return v\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/github.html"}
+{"id": "1c482ce05450-2", "text": "[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"\n Get issues of a GitHub repository.\n Returns:\n A list of Documents with attributes:\n - page_content\n - metadata\n - url\n - title\n - creator\n - created_at\n - last_update_time\n - closed_time\n - number of comments\n - state\n - labels\n - assignee\n - assignees\n - milestone\n - locked\n - number\n - is_pull_request\n \"\"\"\n url: Optional[str] = self.url\n while url:\n response = requests.get(url, headers=self.headers)\n response.raise_for_status()\n issues = response.json()\n for issue in issues:\n doc = self.parse_issue(issue)\n if not self.include_prs and doc.metadata[\"is_pull_request\"]:\n continue\n yield doc\n if response.links and response.links.get(\"next\"):\n url = response.links[\"next\"][\"url\"]\n else:\n url = None\n[docs] def load(self) -> List[Document]:\n \"\"\"\n Get issues of a GitHub repository.\n Returns:\n A list of Documents with attributes:\n - page_content\n - metadata\n - url\n - title\n - creator\n - created_at\n - last_update_time\n - closed_time\n - number of comments\n - state\n - labels\n - assignee\n - assignees\n - milestone\n - locked\n - number\n - is_pull_request\n \"\"\"\n return list(self.lazy_load())\n[docs] def parse_issue(self, issue: dict) -> Document:\n \"\"\"Create Document objects from a list of GitHub issues.\"\"\"\n metadata = {", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/github.html"}
+{"id": "1c482ce05450-3", "text": "\"\"\"Create Document objects from a list of GitHub issues.\"\"\"\n metadata = {\n \"url\": issue[\"html_url\"],\n \"title\": issue[\"title\"],\n \"creator\": issue[\"user\"][\"login\"],\n \"created_at\": issue[\"created_at\"],\n \"comments\": issue[\"comments\"],\n \"state\": issue[\"state\"],\n \"labels\": [label[\"name\"] for label in issue[\"labels\"]],\n \"assignee\": issue[\"assignee\"][\"login\"] if issue[\"assignee\"] else None,\n \"milestone\": issue[\"milestone\"][\"title\"] if issue[\"milestone\"] else None,\n \"locked\": issue[\"locked\"],\n \"number\": issue[\"number\"],\n \"is_pull_request\": \"pull_request\" in issue,\n }\n content = issue[\"body\"] if issue[\"body\"] is not None else \"\"\n return Document(page_content=content, metadata=metadata)\n @property\n def query_params(self) -> str:\n labels = \",\".join(self.labels) if self.labels else self.labels\n query_params_dict = {\n \"milestone\": self.milestone,\n \"state\": self.state,\n \"assignee\": self.assignee,\n \"creator\": self.creator,\n \"mentioned\": self.mentioned,\n \"labels\": labels,\n \"sort\": self.sort,\n \"direction\": self.direction,\n \"since\": self.since,\n }\n query_params_list = [\n f\"{k}={v}\" for k, v in query_params_dict.items() if v is not None\n ]\n query_params = \"&\".join(query_params_list)\n return query_params\n @property\n def url(self) -> str:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/github.html"}
+{"id": "1c482ce05450-4", "text": "return query_params\n @property\n def url(self) -> str:\n return f\"https://api.github.com/repos/{self.repo}/issues?{self.query_params}\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/github.html"}
+{"id": "4d50e4acbb0c-0", "text": "Source code for langchain.document_loaders.mastodon\n\"\"\"Mastodon document loader.\"\"\"\nfrom __future__ import annotations\nimport os\nfrom typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Sequence\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nif TYPE_CHECKING:\n import mastodon\ndef _dependable_mastodon_import() -> mastodon:\n try:\n import mastodon\n except ImportError:\n raise ValueError(\n \"Mastodon.py package not found, \"\n \"please install it with `pip install Mastodon.py`\"\n )\n return mastodon\n[docs]class MastodonTootsLoader(BaseLoader):\n \"\"\"Mastodon toots loader.\"\"\"\n def __init__(\n self,\n mastodon_accounts: Sequence[str],\n number_toots: Optional[int] = 100,\n exclude_replies: bool = False,\n access_token: Optional[str] = None,\n api_base_url: str = \"https://mastodon.social\",\n ):\n \"\"\"Instantiate Mastodon toots loader.\n Args:\n mastodon_accounts: The list of Mastodon accounts to query.\n number_toots: How many toots to pull for each account.\n exclude_replies: Whether to exclude reply toots from the load.\n access_token: An access token if toots are loaded as a Mastodon app. Can\n also be specified via the environment variables \"MASTODON_ACCESS_TOKEN\".\n api_base_url: A Mastodon API base URL to talk to, if not using the default.\n \"\"\"\n mastodon = _dependable_mastodon_import()\n access_token = access_token or os.environ.get(\"MASTODON_ACCESS_TOKEN\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/mastodon.html"}
+{"id": "4d50e4acbb0c-1", "text": "access_token = access_token or os.environ.get(\"MASTODON_ACCESS_TOKEN\")\n self.api = mastodon.Mastodon(\n access_token=access_token, api_base_url=api_base_url\n )\n self.mastodon_accounts = mastodon_accounts\n self.number_toots = number_toots\n self.exclude_replies = exclude_replies\n[docs] def load(self) -> List[Document]:\n \"\"\"Load toots into documents.\"\"\"\n results: List[Document] = []\n for account in self.mastodon_accounts:\n user = self.api.account_lookup(account)\n toots = self.api.account_statuses(\n user.id,\n only_media=False,\n pinned=False,\n exclude_replies=self.exclude_replies,\n exclude_reblogs=True,\n limit=self.number_toots,\n )\n docs = self._format_toots(toots, user)\n results.extend(docs)\n return results\n def _format_toots(\n self, toots: List[Dict[str, Any]], user_info: dict\n ) -> Iterable[Document]:\n \"\"\"Format toots into documents.\n Adding user info, and selected toot fields into the metadata.\n \"\"\"\n for toot in toots:\n metadata = {\n \"created_at\": toot[\"created_at\"],\n \"user_info\": user_info,\n \"is_reply\": toot[\"in_reply_to_id\"] is not None,\n }\n yield Document(\n page_content=toot[\"content\"],\n metadata=metadata,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/mastodon.html"}
+{"id": "a0377a67acd3-0", "text": "Source code for langchain.document_loaders.roam\n\"\"\"Loader that loads Roam directory dump.\"\"\"\nfrom pathlib import Path\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class RoamLoader(BaseLoader):\n \"\"\"Loader that loads Roam files from disk.\"\"\"\n def __init__(self, path: str):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n ps = list(Path(self.file_path).glob(\"**/*.md\"))\n docs = []\n for p in ps:\n with open(p) as f:\n text = f.read()\n metadata = {\"source\": str(p)}\n docs.append(Document(page_content=text, metadata=metadata))\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/roam.html"}
+{"id": "345a2e6799cc-0", "text": "Source code for langchain.document_loaders.rtf\n\"\"\"Loader that loads rich text files.\"\"\"\nfrom typing import Any, List\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n satisfies_min_unstructured_version,\n)\n[docs]class UnstructuredRTFLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load rtf files.\"\"\"\n def __init__(\n self, file_path: str, mode: str = \"single\", **unstructured_kwargs: Any\n ):\n min_unstructured_version = \"0.5.12\"\n if not satisfies_min_unstructured_version(min_unstructured_version):\n raise ValueError(\n \"Partitioning rtf files is only supported in \"\n f\"unstructured>={min_unstructured_version}.\"\n )\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.rtf import partition_rtf\n return partition_rtf(filename=self.file_path, **self.unstructured_kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/rtf.html"}
+{"id": "0915ac30fae9-0", "text": "Source code for langchain.document_loaders.word_document\n\"\"\"Loader that loads word documents.\"\"\"\nimport os\nimport tempfile\nfrom abc import ABC\nfrom typing import List\nfrom urllib.parse import urlparse\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class Docx2txtLoader(BaseLoader, ABC):\n \"\"\"Loads a DOCX with docx2txt and chunks at character level.\n Defaults to check for local file, but if the file is a web path, it will download it\n to a temporary file, and use that, then clean up the temporary file after completion\n \"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n if \"~\" in self.file_path:\n self.file_path = os.path.expanduser(self.file_path)\n # If the file is a web path, download it to a temporary file, and use that\n if not os.path.isfile(self.file_path) and self._is_valid_url(self.file_path):\n r = requests.get(self.file_path)\n if r.status_code != 200:\n raise ValueError(\n \"Check the url of your file; returned status code %s\"\n % r.status_code\n )\n self.web_path = self.file_path\n self.temp_file = tempfile.NamedTemporaryFile()\n self.temp_file.write(r.content)\n self.file_path = self.temp_file.name\n elif not os.path.isfile(self.file_path):\n raise ValueError(\"File path %s is not a valid file or url\" % self.file_path)\n def __del__(self) -> None:\n if hasattr(self, \"temp_file\"):\n self.temp_file.close()", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/word_document.html"}
+{"id": "0915ac30fae9-1", "text": "if hasattr(self, \"temp_file\"):\n self.temp_file.close()\n[docs] def load(self) -> List[Document]:\n \"\"\"Load given path as single page.\"\"\"\n import docx2txt\n return [\n Document(\n page_content=docx2txt.process(self.file_path),\n metadata={\"source\": self.file_path},\n )\n ]\n @staticmethod\n def _is_valid_url(url: str) -> bool:\n \"\"\"Check if the url is valid.\"\"\"\n parsed = urlparse(url)\n return bool(parsed.netloc) and bool(parsed.scheme)\n[docs]class UnstructuredWordDocumentLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load word documents.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.__version__ import __version__ as __unstructured_version__\n from unstructured.file_utils.filetype import FileType, detect_filetype\n unstructured_version = tuple(\n [int(x) for x in __unstructured_version__.split(\".\")]\n )\n # NOTE(MthwRobinson) - magic will raise an import error if the libmagic\n # system dependency isn't installed. If it's not installed, we'll just\n # check the file extension\n try:\n import magic # noqa: F401\n is_doc = detect_filetype(self.file_path) == FileType.DOC\n except ImportError:\n _, extension = os.path.splitext(str(self.file_path))\n is_doc = extension == \".doc\"\n if is_doc and unstructured_version < (0, 4, 11):\n raise ValueError(\n f\"You are on unstructured version {__unstructured_version__}. \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/word_document.html"}
+{"id": "0915ac30fae9-2", "text": "f\"You are on unstructured version {__unstructured_version__}. \"\n \"Partitioning .doc files is only supported in unstructured>=0.4.11. \"\n \"Please upgrade the unstructured package and try again.\"\n )\n if is_doc:\n from unstructured.partition.doc import partition_doc\n return partition_doc(filename=self.file_path, **self.unstructured_kwargs)\n else:\n from unstructured.partition.docx import partition_docx\n return partition_docx(filename=self.file_path, **self.unstructured_kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/word_document.html"}
+{"id": "203b6eb4af45-0", "text": "Source code for langchain.document_loaders.facebook_chat\n\"\"\"Loader that loads Facebook chat json dump.\"\"\"\nimport datetime\nimport json\nfrom pathlib import Path\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\ndef concatenate_rows(row: dict) -> str:\n \"\"\"Combine message information in a readable format ready to be used.\"\"\"\n sender = row[\"sender_name\"]\n text = row[\"content\"]\n date = datetime.datetime.fromtimestamp(row[\"timestamp_ms\"] / 1000).strftime(\n \"%Y-%m-%d %H:%M:%S\"\n )\n return f\"{sender} on {date}: {text}\\n\\n\"\n[docs]class FacebookChatLoader(BaseLoader):\n \"\"\"Loader that loads Facebook messages json directory dump.\"\"\"\n def __init__(self, path: str):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n p = Path(self.file_path)\n with open(p, encoding=\"utf8\") as f:\n d = json.load(f)\n text = \"\".join(\n concatenate_rows(message)\n for message in d[\"messages\"]\n if message.get(\"content\") and isinstance(message[\"content\"], str)\n )\n metadata = {\"source\": str(p)}\n return [Document(page_content=text, metadata=metadata)]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/facebook_chat.html"}
+{"id": "78bfa57d15bc-0", "text": "Source code for langchain.document_loaders.figma\n\"\"\"Loader that loads Figma files json dump.\"\"\"\nimport json\nimport urllib.request\nfrom typing import Any, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import stringify_dict\n[docs]class FigmaFileLoader(BaseLoader):\n \"\"\"Loader that loads Figma file json.\"\"\"\n def __init__(self, access_token: str, ids: str, key: str):\n \"\"\"Initialize with access token, ids, and key.\"\"\"\n self.access_token = access_token\n self.ids = ids\n self.key = key\n def _construct_figma_api_url(self) -> str:\n api_url = \"https://api.figma.com/v1/files/%s/nodes?ids=%s\" % (\n self.key,\n self.ids,\n )\n return api_url\n def _get_figma_file(self) -> Any:\n \"\"\"Get Figma file from Figma REST API.\"\"\"\n headers = {\"X-Figma-Token\": self.access_token}\n request = urllib.request.Request(\n self._construct_figma_api_url(), headers=headers\n )\n with urllib.request.urlopen(request) as response:\n json_data = json.loads(response.read().decode())\n return json_data\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file\"\"\"\n data = self._get_figma_file()\n text = stringify_dict(data)\n metadata = {\"source\": self._construct_figma_api_url()}\n return [Document(page_content=text, metadata=metadata)]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/figma.html"}
+{"id": "c9a66137cdca-0", "text": "Source code for langchain.document_loaders.srt\n\"\"\"Loader for .srt (subtitle) files.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class SRTLoader(BaseLoader):\n \"\"\"Loader for .srt (subtitle) files.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n try:\n import pysrt # noqa:F401\n except ImportError:\n raise ImportError(\n \"package `pysrt` not found, please install it with `pip install pysrt`\"\n )\n self.file_path = file_path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load using pysrt file.\"\"\"\n import pysrt\n parsed_info = pysrt.open(self.file_path)\n text = \" \".join([t.text for t in parsed_info])\n metadata = {\"source\": self.file_path}\n return [Document(page_content=text, metadata=metadata)]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/srt.html"}
+{"id": "9649f6d70c86-0", "text": "Source code for langchain.document_loaders.discord\n\"\"\"Load from Discord chat dump\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nif TYPE_CHECKING:\n import pandas as pd\n[docs]class DiscordChatLoader(BaseLoader):\n \"\"\"Load Discord chat logs.\"\"\"\n def __init__(self, chat_log: pd.DataFrame, user_id_col: str = \"ID\"):\n \"\"\"Initialize with a Pandas DataFrame containing chat logs.\"\"\"\n if not isinstance(chat_log, pd.DataFrame):\n raise ValueError(\n f\"Expected chat_log to be a pd.DataFrame, got {type(chat_log)}\"\n )\n self.chat_log = chat_log\n self.user_id_col = user_id_col\n[docs] def load(self) -> List[Document]:\n \"\"\"Load all chat messages.\"\"\"\n result = []\n for _, row in self.chat_log.iterrows():\n user_id = row[self.user_id_col]\n metadata = row.to_dict()\n metadata.pop(self.user_id_col)\n result.append(Document(page_content=user_id, metadata=metadata))\n return result\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/discord.html"}
+{"id": "c7d903d9e8b9-0", "text": "Source code for langchain.document_loaders.dataframe\n\"\"\"Load from Dataframe object\"\"\"\nfrom typing import Any, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class DataFrameLoader(BaseLoader):\n \"\"\"Load Pandas DataFrames.\"\"\"\n def __init__(self, data_frame: Any, page_content_column: str = \"text\"):\n \"\"\"Initialize with dataframe object.\"\"\"\n import pandas as pd\n if not isinstance(data_frame, pd.DataFrame):\n raise ValueError(\n f\"Expected data_frame to be a pd.DataFrame, got {type(data_frame)}\"\n )\n self.data_frame = data_frame\n self.page_content_column = page_content_column\n[docs] def load(self) -> List[Document]:\n \"\"\"Load from the dataframe.\"\"\"\n result = []\n # For very large dataframes, this needs to yield instead of building a list\n # but that would require chaging return type to a generator for BaseLoader\n # and all its subclasses, which is a bigger refactor. Marking as future TODO.\n # This change will allow us to extend this to Spark and Dask dataframes.\n for _, row in self.data_frame.iterrows():\n text = row[self.page_content_column]\n metadata = row.to_dict()\n metadata.pop(self.page_content_column)\n result.append(Document(page_content=text, metadata=metadata))\n return result\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/dataframe.html"}
+{"id": "e6661aa831fc-0", "text": "Source code for langchain.document_loaders.html\n\"\"\"Loader that uses unstructured to load HTML files.\"\"\"\nfrom typing import List\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class UnstructuredHTMLLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load HTML files.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.partition.html import partition_html\n return partition_html(filename=self.file_path, **self.unstructured_kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/html.html"}
+{"id": "888be621d9e9-0", "text": "Source code for langchain.document_loaders.reddit\n\"\"\"Reddit document loader.\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Iterable, List, Optional, Sequence\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nif TYPE_CHECKING:\n import praw\ndef _dependable_praw_import() -> praw:\n try:\n import praw\n except ImportError:\n raise ValueError(\n \"praw package not found, please install it with `pip install praw`\"\n )\n return praw\n[docs]class RedditPostsLoader(BaseLoader):\n \"\"\"Reddit posts loader.\n Read posts on a subreddit.\n First you need to go to\n https://www.reddit.com/prefs/apps/\n and create your application\n \"\"\"\n def __init__(\n self,\n client_id: str,\n client_secret: str,\n user_agent: str,\n search_queries: Sequence[str],\n mode: str,\n categories: Sequence[str] = [\"new\"],\n number_posts: Optional[int] = 10,\n ):\n self.client_id = client_id\n self.client_secret = client_secret\n self.user_agent = user_agent\n self.search_queries = search_queries\n self.mode = mode\n self.categories = categories\n self.number_posts = number_posts\n[docs] def load(self) -> List[Document]:\n \"\"\"Load reddits.\"\"\"\n praw = _dependable_praw_import()\n reddit = praw.Reddit(\n client_id=self.client_id,\n client_secret=self.client_secret,\n user_agent=self.user_agent,\n )\n results: List[Document] = []\n if self.mode == \"subreddit\":\n for search_query in self.search_queries:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/reddit.html"}
+{"id": "888be621d9e9-1", "text": "if self.mode == \"subreddit\":\n for search_query in self.search_queries:\n for category in self.categories:\n docs = self._subreddit_posts_loader(\n search_query=search_query, category=category, reddit=reddit\n )\n results.extend(docs)\n elif self.mode == \"username\":\n for search_query in self.search_queries:\n for category in self.categories:\n docs = self._user_posts_loader(\n search_query=search_query, category=category, reddit=reddit\n )\n results.extend(docs)\n else:\n raise ValueError(\n \"mode not correct, please enter 'username' or 'subreddit' as mode\"\n )\n return results\n def _subreddit_posts_loader(\n self, search_query: str, category: str, reddit: praw.reddit.Reddit\n ) -> Iterable[Document]:\n subreddit = reddit.subreddit(search_query)\n method = getattr(subreddit, category)\n cat_posts = method(limit=self.number_posts)\n \"\"\"Format reddit posts into a string.\"\"\"\n for post in cat_posts:\n metadata = {\n \"post_subreddit\": post.subreddit_name_prefixed,\n \"post_category\": category,\n \"post_title\": post.title,\n \"post_score\": post.score,\n \"post_id\": post.id,\n \"post_url\": post.url,\n \"post_author\": post.author,\n }\n yield Document(\n page_content=post.selftext,\n metadata=metadata,\n )\n def _user_posts_loader(\n self, search_query: str, category: str, reddit: praw.reddit.Reddit\n ) -> Iterable[Document]:\n user = reddit.redditor(search_query)\n method = getattr(user.submissions, category)", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/reddit.html"}
+{"id": "888be621d9e9-2", "text": "method = getattr(user.submissions, category)\n cat_posts = method(limit=self.number_posts)\n \"\"\"Format reddit posts into a string.\"\"\"\n for post in cat_posts:\n metadata = {\n \"post_subreddit\": post.subreddit_name_prefixed,\n \"post_category\": category,\n \"post_title\": post.title,\n \"post_score\": post.score,\n \"post_id\": post.id,\n \"post_url\": post.url,\n \"post_author\": post.author,\n }\n yield Document(\n page_content=post.selftext,\n metadata=metadata,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/reddit.html"}
+{"id": "80a19e9a566b-0", "text": "Source code for langchain.document_loaders.duckdb_loader\nfrom typing import Dict, List, Optional, cast\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class DuckDBLoader(BaseLoader):\n \"\"\"Loads a query result from DuckDB into a list of documents.\n Each document represents one row of the result. The `page_content_columns`\n are written into the `page_content` of the document. The `metadata_columns`\n are written into the `metadata` of the document. By default, all columns\n are written into the `page_content` and none into the `metadata`.\n \"\"\"\n def __init__(\n self,\n query: str,\n database: str = \":memory:\",\n read_only: bool = False,\n config: Optional[Dict[str, str]] = None,\n page_content_columns: Optional[List[str]] = None,\n metadata_columns: Optional[List[str]] = None,\n ):\n self.query = query\n self.database = database\n self.read_only = read_only\n self.config = config or {}\n self.page_content_columns = page_content_columns\n self.metadata_columns = metadata_columns\n[docs] def load(self) -> List[Document]:\n try:\n import duckdb\n except ImportError:\n raise ImportError(\n \"Could not import duckdb python package. \"\n \"Please install it with `pip install duckdb`.\"\n )\n docs = []\n with duckdb.connect(\n database=self.database, read_only=self.read_only, config=self.config\n ) as con:\n query_result = con.execute(self.query)\n results = query_result.fetchall()\n description = cast(list, query_result.description)", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/duckdb_loader.html"}
+{"id": "80a19e9a566b-1", "text": "results = query_result.fetchall()\n description = cast(list, query_result.description)\n field_names = [c[0] for c in description]\n if self.page_content_columns is None:\n page_content_columns = field_names\n else:\n page_content_columns = self.page_content_columns\n if self.metadata_columns is None:\n metadata_columns = []\n else:\n metadata_columns = self.metadata_columns\n for result in results:\n page_content = \"\\n\".join(\n f\"{column}: {result[field_names.index(column)]}\"\n for column in page_content_columns\n )\n metadata = {\n column: result[field_names.index(column)]\n for column in metadata_columns\n }\n doc = Document(page_content=page_content, metadata=metadata)\n docs.append(doc)\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/duckdb_loader.html"}
+{"id": "02af49231f7e-0", "text": "Source code for langchain.document_loaders.python\nimport tokenize\nfrom langchain.document_loaders.text import TextLoader\n[docs]class PythonLoader(TextLoader):\n \"\"\"\n Load Python files, respecting any non-default encoding if specified.\n \"\"\"\n def __init__(self, file_path: str):\n with open(file_path, \"rb\") as f:\n encoding, _ = tokenize.detect_encoding(f.readline)\n super().__init__(file_path=file_path, encoding=encoding)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/python.html"}
+{"id": "0af245e9aea1-0", "text": "Source code for langchain.document_loaders.blockchain\nimport os\nimport re\nimport time\nfrom enum import Enum\nfrom typing import List, Optional\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nclass BlockchainType(Enum):\n ETH_MAINNET = \"eth-mainnet\"\n ETH_GOERLI = \"eth-goerli\"\n POLYGON_MAINNET = \"polygon-mainnet\"\n POLYGON_MUMBAI = \"polygon-mumbai\"\n[docs]class BlockchainDocumentLoader(BaseLoader):\n \"\"\"Loads elements from a blockchain smart contract into Langchain documents.\n The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,\n Polygon mainnet, and Polygon Mumbai testnet.\n If no BlockchainType is specified, the default is Ethereum mainnet.\n The Loader uses the Alchemy API to interact with the blockchain.\n ALCHEMY_API_KEY environment variable must be set to use this loader.\n The API returns 100 NFTs per request and can be paginated using the\n startToken parameter.\n If get_all_tokens is set to True, the loader will get all tokens\n on the contract. Note that for contracts with a large number of tokens,\n this may take a long time (e.g. 10k tokens is 100 requests).\n Default value is false for this reason.\n The max_execution_time (sec) can be set to limit the execution time\n of the loader.\n Future versions of this loader can:\n - Support additional Alchemy APIs (e.g. getTransactions, etc.)\n - Support additional blockain APIs (e.g. Infura, Opensea, etc.)\n \"\"\"\n def __init__(\n self,\n contract_address: str,", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/blockchain.html"}
+{"id": "0af245e9aea1-1", "text": "\"\"\"\n def __init__(\n self,\n contract_address: str,\n blockchainType: BlockchainType = BlockchainType.ETH_MAINNET,\n api_key: str = \"docs-demo\",\n startToken: str = \"\",\n get_all_tokens: bool = False,\n max_execution_time: Optional[int] = None,\n ):\n self.contract_address = contract_address\n self.blockchainType = blockchainType.value\n self.api_key = os.environ.get(\"ALCHEMY_API_KEY\") or api_key\n self.startToken = startToken\n self.get_all_tokens = get_all_tokens\n self.max_execution_time = max_execution_time\n if not self.api_key:\n raise ValueError(\"Alchemy API key not provided.\")\n if not re.match(r\"^0x[a-fA-F0-9]{40}$\", self.contract_address):\n raise ValueError(f\"Invalid contract address {self.contract_address}\")\n[docs] def load(self) -> List[Document]:\n result = []\n current_start_token = self.startToken\n start_time = time.time()\n while True:\n url = (\n f\"https://{self.blockchainType}.g.alchemy.com/nft/v2/\"\n f\"{self.api_key}/getNFTsForCollection?withMetadata=\"\n f\"True&contractAddress={self.contract_address}\"\n f\"&startToken={current_start_token}\"\n )\n response = requests.get(url)\n if response.status_code != 200:\n raise ValueError(\n f\"Request failed with status code {response.status_code}\"\n )\n items = response.json()[\"nfts\"]\n if not items:\n break\n for item in items:\n content = str(item)\n tokenId = item[\"id\"][\"tokenId\"]\n metadata = {", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/blockchain.html"}
+{"id": "0af245e9aea1-2", "text": "tokenId = item[\"id\"][\"tokenId\"]\n metadata = {\n \"source\": self.contract_address,\n \"blockchain\": self.blockchainType,\n \"tokenId\": tokenId,\n }\n result.append(Document(page_content=content, metadata=metadata))\n # exit after the first API call if get_all_tokens is False\n if not self.get_all_tokens:\n break\n # get the start token for the next API call from the last item in array\n current_start_token = self._get_next_tokenId(result[-1].metadata[\"tokenId\"])\n if (\n self.max_execution_time is not None\n and (time.time() - start_time) > self.max_execution_time\n ):\n raise RuntimeError(\"Execution time exceeded the allowed time limit.\")\n if not result:\n raise ValueError(\n f\"No NFTs found for contract address {self.contract_address}\"\n )\n return result\n # add one to the tokenId, ensuring the correct tokenId format is used\n def _get_next_tokenId(self, tokenId: str) -> str:\n value_type = self._detect_value_type(tokenId)\n if value_type == \"hex_0x\":\n value_int = int(tokenId, 16)\n elif value_type == \"hex_0xbf\":\n value_int = int(tokenId[2:], 16)\n else:\n value_int = int(tokenId)\n result = value_int + 1\n if value_type == \"hex_0x\":\n return \"0x\" + format(result, \"0\" + str(len(tokenId) - 2) + \"x\")\n elif value_type == \"hex_0xbf\":", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/blockchain.html"}
+{"id": "0af245e9aea1-3", "text": "elif value_type == \"hex_0xbf\":\n return \"0xbf\" + format(result, \"0\" + str(len(tokenId) - 4) + \"x\")\n else:\n return str(result)\n # A smart contract can use different formats for the tokenId\n @staticmethod\n def _detect_value_type(tokenId: str) -> str:\n if isinstance(tokenId, int):\n return \"int\"\n elif tokenId.startswith(\"0x\"):\n return \"hex_0x\"\n elif tokenId.startswith(\"0xbf\"):\n return \"hex_0xbf\"\n else:\n return \"hex_0xbf\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/blockchain.html"}
+{"id": "129623f3f7bd-0", "text": "Source code for langchain.document_loaders.azure_blob_storage_container\n\"\"\"Loading logic for loading documents from an Azure Blob Storage container.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.azure_blob_storage_file import (\n AzureBlobStorageFileLoader,\n)\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class AzureBlobStorageContainerLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from Azure Blob Storage.\"\"\"\n def __init__(self, conn_str: str, container: str, prefix: str = \"\"):\n \"\"\"Initialize with connection string, container and blob prefix.\"\"\"\n self.conn_str = conn_str\n self.container = container\n self.prefix = prefix\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n from azure.storage.blob import ContainerClient\n except ImportError as exc:\n raise ValueError(\n \"Could not import azure storage blob python package. \"\n \"Please install it with `pip install azure-storage-blob`.\"\n ) from exc\n container = ContainerClient.from_connection_string(\n conn_str=self.conn_str, container_name=self.container\n )\n docs = []\n blob_list = container.list_blobs(name_starts_with=self.prefix)\n for blob in blob_list:\n loader = AzureBlobStorageFileLoader(\n self.conn_str, self.container, blob.name # type: ignore\n )\n docs.extend(loader.load())\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/azure_blob_storage_container.html"}
+{"id": "1395d09fda06-0", "text": "Source code for langchain.document_loaders.bilibili\nimport json\nimport re\nimport warnings\nfrom typing import List, Tuple\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class BiliBiliLoader(BaseLoader):\n \"\"\"Loader that loads bilibili transcripts.\"\"\"\n def __init__(self, video_urls: List[str]):\n \"\"\"Initialize with bilibili url.\"\"\"\n self.video_urls = video_urls\n[docs] def load(self) -> List[Document]:\n \"\"\"Load from bilibili url.\"\"\"\n results = []\n for url in self.video_urls:\n transcript, video_info = self._get_bilibili_subs_and_info(url)\n doc = Document(page_content=transcript, metadata=video_info)\n results.append(doc)\n return results\n def _get_bilibili_subs_and_info(self, url: str) -> Tuple[str, dict]:\n try:\n from bilibili_api import sync, video\n except ImportError:\n raise ValueError(\n \"requests package not found, please install it with \"\n \"`pip install bilibili-api-python`\"\n )\n bvid = re.search(r\"BV\\w+\", url)\n if bvid is not None:\n v = video.Video(bvid=bvid.group())\n else:\n aid = re.search(r\"av[0-9]+\", url)\n if aid is not None:\n try:\n v = video.Video(aid=int(aid.group()[2:]))\n except AttributeError:\n raise ValueError(f\"{url} is not bilibili url.\")\n else:\n raise ValueError(f\"{url} is not bilibili url.\")\n video_info = sync(v.get_info())", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/bilibili.html"}
+{"id": "1395d09fda06-1", "text": "video_info = sync(v.get_info())\n video_info.update({\"url\": url})\n # Get subtitle url\n subtitle = video_info.pop(\"subtitle\")\n sub_list = subtitle[\"list\"]\n if sub_list:\n sub_url = sub_list[0][\"subtitle_url\"]\n result = requests.get(sub_url)\n raw_sub_titles = json.loads(result.content)[\"body\"]\n raw_transcript = \" \".join([c[\"content\"] for c in raw_sub_titles])\n raw_transcript_with_meta_info = (\n f\"Video Title: {video_info['title']},\"\n f\"description: {video_info['desc']}\\n\\n\"\n f\"Transcript: {raw_transcript}\"\n )\n return raw_transcript_with_meta_info, video_info\n else:\n raw_transcript = \"\"\n warnings.warn(\n f\"\"\"\n No subtitles found for video: {url}.\n Return Empty transcript.\n \"\"\"\n )\n return raw_transcript, video_info\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/bilibili.html"}
+{"id": "6caf23a890b9-0", "text": "Source code for langchain.document_loaders.docugami\n\"\"\"Loader that loads processed documents from Docugami.\"\"\"\nimport io\nimport logging\nimport os\nimport re\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Mapping, Optional, Sequence, Union\nimport requests\nfrom pydantic import BaseModel, root_validator\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nTD_NAME = \"{http://www.w3.org/1999/xhtml}td\"\nTABLE_NAME = \"{http://www.w3.org/1999/xhtml}table\"\nXPATH_KEY = \"xpath\"\nDOCUMENT_ID_KEY = \"id\"\nDOCUMENT_NAME_KEY = \"name\"\nSTRUCTURE_KEY = \"structure\"\nTAG_KEY = \"tag\"\nPROJECTS_KEY = \"projects\"\nDEFAULT_API_ENDPOINT = \"https://api.docugami.com/v1preview1\"\nlogger = logging.getLogger(__name__)\n[docs]class DocugamiLoader(BaseLoader, BaseModel):\n \"\"\"Loader that loads processed docs from Docugami.\n To use, you should have the ``lxml`` python package installed.\n \"\"\"\n api: str = DEFAULT_API_ENDPOINT\n access_token: Optional[str] = os.environ.get(\"DOCUGAMI_API_KEY\")\n docset_id: Optional[str]\n document_ids: Optional[Sequence[str]]\n file_paths: Optional[Sequence[Union[Path, str]]]\n min_chunk_size: int = 32 # appended to the next chunk to avoid over-chunking\n @root_validator\n def validate_local_or_remote(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Validate that either local file paths are given, or remote API docset ID.\"\"\"\n if values.get(\"file_paths\") and values.get(\"docset_id\"):", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"}
+{"id": "6caf23a890b9-1", "text": "if values.get(\"file_paths\") and values.get(\"docset_id\"):\n raise ValueError(\"Cannot specify both file_paths and remote API docset_id\")\n if not values.get(\"file_paths\") and not values.get(\"docset_id\"):\n raise ValueError(\"Must specify either file_paths or remote API docset_id\")\n if values.get(\"docset_id\") and not values.get(\"access_token\"):\n raise ValueError(\"Must specify access token if using remote API docset_id\")\n return values\n def _parse_dgml(\n self, document: Mapping, content: bytes, doc_metadata: Optional[Mapping] = None\n ) -> List[Document]:\n \"\"\"Parse a single DGML document into a list of Documents.\"\"\"\n try:\n from lxml import etree\n except ImportError:\n raise ImportError(\n \"Could not import lxml python package. \"\n \"Please install it with `pip install lxml`.\"\n )\n # helpers\n def _xpath_qname_for_chunk(chunk: Any) -> str:\n \"\"\"Get the xpath qname for a chunk.\"\"\"\n qname = f\"{chunk.prefix}:{chunk.tag.split('}')[-1]}\"\n parent = chunk.getparent()\n if parent is not None:\n doppelgangers = [x for x in parent if x.tag == chunk.tag]\n if len(doppelgangers) > 1:\n idx_of_self = doppelgangers.index(chunk)\n qname = f\"{qname}[{idx_of_self + 1}]\"\n return qname\n def _xpath_for_chunk(chunk: Any) -> str:\n \"\"\"Get the xpath for a chunk.\"\"\"\n ancestor_chain = chunk.xpath(\"ancestor-or-self::*\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"}
+{"id": "6caf23a890b9-2", "text": "ancestor_chain = chunk.xpath(\"ancestor-or-self::*\")\n return \"/\" + \"/\".join(_xpath_qname_for_chunk(x) for x in ancestor_chain)\n def _structure_value(node: Any) -> str:\n \"\"\"Get the structure value for a node.\"\"\"\n structure = (\n \"table\"\n if node.tag == TABLE_NAME\n else node.attrib[\"structure\"]\n if \"structure\" in node.attrib\n else None\n )\n return structure\n def _is_structural(node: Any) -> bool:\n \"\"\"Check if a node is structural.\"\"\"\n return _structure_value(node) is not None\n def _is_heading(node: Any) -> bool:\n \"\"\"Check if a node is a heading.\"\"\"\n structure = _structure_value(node)\n return structure is not None and structure.lower().startswith(\"h\")\n def _get_text(node: Any) -> str:\n \"\"\"Get the text of a node.\"\"\"\n return \" \".join(node.itertext()).strip()\n def _has_structural_descendant(node: Any) -> bool:\n \"\"\"Check if a node has a structural descendant.\"\"\"\n for child in node:\n if _is_structural(child) or _has_structural_descendant(child):\n return True\n return False\n def _leaf_structural_nodes(node: Any) -> List:\n \"\"\"Get the leaf structural nodes of a node.\"\"\"\n if _is_structural(node) and not _has_structural_descendant(node):\n return [node]\n else:\n leaf_nodes = []\n for child in node:\n leaf_nodes.extend(_leaf_structural_nodes(child))\n return leaf_nodes\n def _create_doc(node: Any, text: str) -> Document:\n \"\"\"Create a Document from a node and text.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"}
+{"id": "6caf23a890b9-3", "text": "\"\"\"Create a Document from a node and text.\"\"\"\n metadata = {\n XPATH_KEY: _xpath_for_chunk(node),\n DOCUMENT_ID_KEY: document[\"id\"],\n DOCUMENT_NAME_KEY: document[\"name\"],\n STRUCTURE_KEY: node.attrib.get(\"structure\", \"\"),\n TAG_KEY: re.sub(r\"\\{.*\\}\", \"\", node.tag),\n }\n if doc_metadata:\n metadata.update(doc_metadata)\n return Document(\n page_content=text,\n metadata=metadata,\n )\n # parse the tree and return chunks\n tree = etree.parse(io.BytesIO(content))\n root = tree.getroot()\n chunks: List[Document] = []\n prev_small_chunk_text = None\n for node in _leaf_structural_nodes(root):\n text = _get_text(node)\n if prev_small_chunk_text:\n text = prev_small_chunk_text + \" \" + text\n prev_small_chunk_text = None\n if _is_heading(node) or len(text) < self.min_chunk_size:\n # Save headings or other small chunks to be appended to the next chunk\n prev_small_chunk_text = text\n else:\n chunks.append(_create_doc(node, text))\n if prev_small_chunk_text and len(chunks) > 0:\n # small chunk at the end left over, just append to last chunk\n chunks[-1].page_content += \" \" + prev_small_chunk_text\n return chunks\n def _document_details_for_docset_id(self, docset_id: str) -> List[Dict]:\n \"\"\"Gets all document details for the given docset ID\"\"\"\n url = f\"{self.api}/docsets/{docset_id}/documents\"\n all_documents = []\n while url:\n response = requests.get(\n url,", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"}
+{"id": "6caf23a890b9-4", "text": "while url:\n response = requests.get(\n url,\n headers={\"Authorization\": f\"Bearer {self.access_token}\"},\n )\n if response.ok:\n data = response.json()\n all_documents.extend(data[\"documents\"])\n url = data.get(\"next\", None)\n else:\n raise Exception(\n f\"Failed to download {url} (status: {response.status_code})\"\n )\n return all_documents\n def _project_details_for_docset_id(self, docset_id: str) -> List[Dict]:\n \"\"\"Gets all project details for the given docset ID\"\"\"\n url = f\"{self.api}/projects?docset.id={docset_id}\"\n all_projects = []\n while url:\n response = requests.request(\n \"GET\",\n url,\n headers={\"Authorization\": f\"Bearer {self.access_token}\"},\n data={},\n )\n if response.ok:\n data = response.json()\n all_projects.extend(data[\"projects\"])\n url = data.get(\"next\", None)\n else:\n raise Exception(\n f\"Failed to download {url} (status: {response.status_code})\"\n )\n return all_projects\n def _metadata_for_project(self, project: Dict) -> Dict:\n \"\"\"Gets project metadata for all files\"\"\"\n project_id = project.get(\"id\")\n url = f\"{self.api}/projects/{project_id}/artifacts/latest\"\n all_artifacts = []\n while url:\n response = requests.request(\n \"GET\",\n url,\n headers={\"Authorization\": f\"Bearer {self.access_token}\"},\n data={},\n )\n if response.ok:\n data = response.json()", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"}
+{"id": "6caf23a890b9-5", "text": "data={},\n )\n if response.ok:\n data = response.json()\n all_artifacts.extend(data[\"artifacts\"])\n url = data.get(\"next\", None)\n else:\n raise Exception(\n f\"Failed to download {url} (status: {response.status_code})\"\n )\n per_file_metadata = {}\n for artifact in all_artifacts:\n artifact_name = artifact.get(\"name\")\n artifact_url = artifact.get(\"url\")\n artifact_doc = artifact.get(\"document\")\n if artifact_name == f\"{project_id}.xml\" and artifact_url and artifact_doc:\n doc_id = artifact_doc[\"id\"]\n metadata: Dict = {}\n # the evaluated XML for each document is named after the project\n response = requests.request(\n \"GET\",\n f\"{artifact_url}/content\",\n headers={\"Authorization\": f\"Bearer {self.access_token}\"},\n data={},\n )\n if response.ok:\n try:\n from lxml import etree\n except ImportError:\n raise ImportError(\n \"Could not import lxml python package. \"\n \"Please install it with `pip install lxml`.\"\n )\n artifact_tree = etree.parse(io.BytesIO(response.content))\n artifact_root = artifact_tree.getroot()\n ns = artifact_root.nsmap\n entries = artifact_root.xpath(\"//wp:Entry\", namespaces=ns)\n for entry in entries:\n heading = entry.xpath(\"./wp:Heading\", namespaces=ns)[0].text\n value = \" \".join(\n entry.xpath(\"./wp:Value\", namespaces=ns)[0].itertext()\n ).strip()\n metadata[heading] = value\n per_file_metadata[doc_id] = metadata\n else:\n raise Exception(", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"}
+{"id": "6caf23a890b9-6", "text": "per_file_metadata[doc_id] = metadata\n else:\n raise Exception(\n f\"Failed to download {artifact_url}/content \"\n + \"(status: {response.status_code})\"\n )\n return per_file_metadata\n def _load_chunks_for_document(\n self, docset_id: str, document: Dict, doc_metadata: Optional[Dict] = None\n ) -> List[Document]:\n \"\"\"Load chunks for a document.\"\"\"\n document_id = document[\"id\"]\n url = f\"{self.api}/docsets/{docset_id}/documents/{document_id}/dgml\"\n response = requests.request(\n \"GET\",\n url,\n headers={\"Authorization\": f\"Bearer {self.access_token}\"},\n data={},\n )\n if response.ok:\n return self._parse_dgml(document, response.content, doc_metadata)\n else:\n raise Exception(\n f\"Failed to download {url} (status: {response.status_code})\"\n )\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n chunks: List[Document] = []\n if self.access_token and self.docset_id:\n # remote mode\n _document_details = self._document_details_for_docset_id(self.docset_id)\n if self.document_ids:\n _document_details = [\n d for d in _document_details if d[\"id\"] in self.document_ids\n ]\n _project_details = self._project_details_for_docset_id(self.docset_id)\n combined_project_metadata = {}\n if _project_details:\n # if there are any projects for this docset, load project metadata\n for project in _project_details:\n metadata = self._metadata_for_project(project)", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"}
+{"id": "6caf23a890b9-7", "text": "for project in _project_details:\n metadata = self._metadata_for_project(project)\n combined_project_metadata.update(metadata)\n for doc in _document_details:\n doc_metadata = combined_project_metadata.get(doc[\"id\"])\n chunks += self._load_chunks_for_document(\n self.docset_id, doc, doc_metadata\n )\n elif self.file_paths:\n # local mode (for integration testing, or pre-downloaded XML)\n for path in self.file_paths:\n path = Path(path)\n with open(path, \"rb\") as file:\n chunks += self._parse_dgml(\n {\n DOCUMENT_ID_KEY: path.name,\n DOCUMENT_NAME_KEY: path.name,\n },\n file.read(),\n )\n return chunks\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"}
+{"id": "1d6b119288bc-0", "text": "Source code for langchain.document_loaders.s3_file\n\"\"\"Loading logic for loading documents from an s3 file.\"\"\"\nimport os\nimport tempfile\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class S3FileLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from s3.\"\"\"\n def __init__(self, bucket: str, key: str):\n \"\"\"Initialize with bucket and key name.\"\"\"\n self.bucket = bucket\n self.key = key\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n import boto3\n except ImportError:\n raise ImportError(\n \"Could not import `boto3` python package. \"\n \"Please install it with `pip install boto3`.\"\n )\n s3 = boto3.client(\"s3\")\n with tempfile.TemporaryDirectory() as temp_dir:\n file_path = f\"{temp_dir}/{self.key}\"\n os.makedirs(os.path.dirname(file_path), exist_ok=True)\n s3.download_file(self.bucket, self.key, file_path)\n loader = UnstructuredFileLoader(file_path)\n return loader.load()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/s3_file.html"}
+{"id": "509ec1e4afc2-0", "text": "Source code for langchain.document_loaders.directory\n\"\"\"Loading logic for loading documents from a directory.\"\"\"\nimport concurrent\nimport logging\nfrom pathlib import Path\nfrom typing import Any, List, Optional, Type, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.html_bs import BSHTMLLoader\nfrom langchain.document_loaders.text import TextLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\nFILE_LOADER_TYPE = Union[\n Type[UnstructuredFileLoader], Type[TextLoader], Type[BSHTMLLoader]\n]\nlogger = logging.getLogger(__name__)\ndef _is_visible(p: Path) -> bool:\n parts = p.parts\n for _p in parts:\n if _p.startswith(\".\"):\n return False\n return True\n[docs]class DirectoryLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from a directory.\"\"\"\n def __init__(\n self,\n path: str,\n glob: str = \"**/[!.]*\",\n silent_errors: bool = False,\n load_hidden: bool = False,\n loader_cls: FILE_LOADER_TYPE = UnstructuredFileLoader,\n loader_kwargs: Union[dict, None] = None,\n recursive: bool = False,\n show_progress: bool = False,\n use_multithreading: bool = False,\n max_concurrency: int = 4,\n ):\n \"\"\"Initialize with path to directory and how to glob over it.\"\"\"\n if loader_kwargs is None:\n loader_kwargs = {}\n self.path = path\n self.glob = glob\n self.load_hidden = load_hidden\n self.loader_cls = loader_cls\n self.loader_kwargs = loader_kwargs\n self.silent_errors = silent_errors", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/directory.html"}
+{"id": "509ec1e4afc2-1", "text": "self.loader_kwargs = loader_kwargs\n self.silent_errors = silent_errors\n self.recursive = recursive\n self.show_progress = show_progress\n self.use_multithreading = use_multithreading\n self.max_concurrency = max_concurrency\n[docs] def load_file(\n self, item: Path, path: Path, docs: List[Document], pbar: Optional[Any]\n ) -> None:\n if item.is_file():\n if _is_visible(item.relative_to(path)) or self.load_hidden:\n try:\n sub_docs = self.loader_cls(str(item), **self.loader_kwargs).load()\n docs.extend(sub_docs)\n except Exception as e:\n if self.silent_errors:\n logger.warning(e)\n else:\n raise e\n finally:\n if pbar:\n pbar.update(1)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n p = Path(self.path)\n if not p.exists():\n raise FileNotFoundError(f\"Directory not found: '{self.path}'\")\n if not p.is_dir():\n raise ValueError(f\"Expected directory, got file: '{self.path}'\")\n docs: List[Document] = []\n items = list(p.rglob(self.glob) if self.recursive else p.glob(self.glob))\n pbar = None\n if self.show_progress:\n try:\n from tqdm import tqdm\n pbar = tqdm(total=len(items))\n except ImportError as e:\n logger.warning(\n \"To log the progress of DirectoryLoader you need to install tqdm, \"\n \"`pip install tqdm`\"\n )\n if self.silent_errors:\n logger.warning(e)\n else:\n raise e", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/directory.html"}
+{"id": "509ec1e4afc2-2", "text": "logger.warning(e)\n else:\n raise e\n if self.use_multithreading:\n with concurrent.futures.ThreadPoolExecutor(\n max_workers=self.max_concurrency\n ) as executor:\n executor.map(lambda i: self.load_file(i, p, docs, pbar), items)\n else:\n for i in items:\n self.load_file(i, p, docs, pbar)\n if pbar:\n pbar.close()\n return docs\n#\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/directory.html"}
+{"id": "87fd5e8a03ad-0", "text": "Source code for langchain.document_loaders.psychic\n\"\"\"Loader that loads documents from Psychic.dev.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class PsychicLoader(BaseLoader):\n \"\"\"Loader that loads documents from Psychic.dev.\"\"\"\n def __init__(self, api_key: str, connector_id: str, connection_id: str):\n \"\"\"Initialize with API key, connector id, and connection id.\"\"\"\n try:\n from psychicapi import ConnectorId, Psychic # noqa: F401\n except ImportError:\n raise ImportError(\n \"`psychicapi` package not found, please run `pip install psychicapi`\"\n )\n self.psychic = Psychic(secret_key=api_key)\n self.connector_id = ConnectorId(connector_id)\n self.connection_id = connection_id\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n psychic_docs = self.psychic.get_documents(self.connector_id, self.connection_id)\n return [\n Document(\n page_content=doc[\"content\"],\n metadata={\"title\": doc[\"title\"], \"source\": doc[\"uri\"]},\n )\n for doc in psychic_docs\n ]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/psychic.html"}
+{"id": "a758edb01970-0", "text": "Source code for langchain.document_loaders.csv_loader\nimport csv\nfrom typing import Any, Dict, List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n validate_unstructured_version,\n)\n[docs]class CSVLoader(BaseLoader):\n \"\"\"Loads a CSV file into a list of documents.\n Each document represents one row of the CSV file. Every row is converted into a\n key/value pair and outputted to a new line in the document's page_content.\n The source for each document loaded from csv is set to the value of the\n `file_path` argument for all doucments by default.\n You can override this by setting the `source_column` argument to the\n name of a column in the CSV file.\n The source of each document will then be set to the value of the column\n with the name specified in `source_column`.\n Output Example:\n .. code-block:: txt\n column1: value1\n column2: value2\n column3: value3\n \"\"\"\n def __init__(\n self,\n file_path: str,\n source_column: Optional[str] = None,\n csv_args: Optional[Dict] = None,\n encoding: Optional[str] = None,\n ):\n self.file_path = file_path\n self.source_column = source_column\n self.encoding = encoding\n self.csv_args = csv_args or {}\n[docs] def load(self) -> List[Document]:\n \"\"\"Load data into document objects.\"\"\"\n docs = []\n with open(self.file_path, newline=\"\", encoding=self.encoding) as csvfile:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/csv_loader.html"}
+{"id": "a758edb01970-1", "text": "with open(self.file_path, newline=\"\", encoding=self.encoding) as csvfile:\n csv_reader = csv.DictReader(csvfile, **self.csv_args) # type: ignore\n for i, row in enumerate(csv_reader):\n content = \"\\n\".join(f\"{k.strip()}: {v.strip()}\" for k, v in row.items())\n try:\n source = (\n row[self.source_column]\n if self.source_column is not None\n else self.file_path\n )\n except KeyError:\n raise ValueError(\n f\"Source column '{self.source_column}' not found in CSV file.\"\n )\n metadata = {\"source\": source, \"row\": i}\n doc = Document(page_content=content, metadata=metadata)\n docs.append(doc)\n return docs\n[docs]class UnstructuredCSVLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load CSV files.\"\"\"\n def __init__(\n self, file_path: str, mode: str = \"single\", **unstructured_kwargs: Any\n ):\n validate_unstructured_version(min_unstructured_version=\"0.6.8\")\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.csv import partition_csv\n return partition_csv(filename=self.file_path, **self.unstructured_kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/csv_loader.html"}
+{"id": "17366eba7f1d-0", "text": "Source code for langchain.document_loaders.tomarkdown\n\"\"\"Loader that loads HTML to markdown using 2markdown.\"\"\"\nfrom __future__ import annotations\nfrom typing import Iterator, List\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class ToMarkdownLoader(BaseLoader):\n \"\"\"Loader that loads HTML to markdown using 2markdown.\"\"\"\n def __init__(self, url: str, api_key: str):\n \"\"\"Initialize with url and api key.\"\"\"\n self.url = url\n self.api_key = api_key\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Lazily load the file.\"\"\"\n response = requests.post(\n \"https://2markdown.com/api/2md\",\n headers={\"X-Api-Key\": self.api_key},\n json={\"url\": self.url},\n )\n text = response.json()[\"article\"]\n metadata = {\"source\": self.url}\n yield Document(page_content=text, metadata=metadata)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n return list(self.lazy_load())\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/tomarkdown.html"}
+{"id": "c21bda986cb7-0", "text": "Source code for langchain.document_loaders.snowflake_loader\nfrom __future__ import annotations\nfrom typing import Any, Dict, Iterator, List, Optional, Tuple\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class SnowflakeLoader(BaseLoader):\n \"\"\"Loads a query result from Snowflake into a list of documents.\n Each document represents one row of the result. The `page_content_columns`\n are written into the `page_content` of the document. The `metadata_columns`\n are written into the `metadata` of the document. By default, all columns\n are written into the `page_content` and none into the `metadata`.\n \"\"\"\n def __init__(\n self,\n query: str,\n user: str,\n password: str,\n account: str,\n warehouse: str,\n role: str,\n database: str,\n schema: str,\n parameters: Optional[Dict[str, Any]] = None,\n page_content_columns: Optional[List[str]] = None,\n metadata_columns: Optional[List[str]] = None,\n ):\n \"\"\"Initialize Snowflake document loader.\n Args:\n query: The query to run in Snowflake.\n user: Snowflake user.\n password: Snowflake password.\n account: Snowflake account.\n warehouse: Snowflake warehouse.\n role: Snowflake role.\n database: Snowflake database\n schema: Snowflake schema\n page_content_columns: Optional. Columns written to Document `page_content`.\n metadata_columns: Optional. Columns written to Document `metadata`.\n \"\"\"\n self.query = query\n self.user = user\n self.password = password\n self.account = account\n self.warehouse = warehouse", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/snowflake_loader.html"}
+{"id": "c21bda986cb7-1", "text": "self.password = password\n self.account = account\n self.warehouse = warehouse\n self.role = role\n self.database = database\n self.schema = schema\n self.parameters = parameters\n self.page_content_columns = (\n page_content_columns if page_content_columns is not None else [\"*\"]\n )\n self.metadata_columns = metadata_columns if metadata_columns is not None else []\n def _execute_query(self) -> List[Dict[str, Any]]:\n try:\n import snowflake.connector\n except ImportError as ex:\n raise ValueError(\n \"Could not import snowflake-connector-python package. \"\n \"Please install it with `pip install snowflake-connector-python`.\"\n ) from ex\n conn = snowflake.connector.connect(\n user=self.user,\n password=self.password,\n account=self.account,\n warehouse=self.warehouse,\n role=self.role,\n database=self.database,\n schema=self.schema,\n parameters=self.parameters,\n )\n try:\n cur = conn.cursor()\n cur.execute(\"USE DATABASE \" + self.database)\n cur.execute(\"USE SCHEMA \" + self.schema)\n cur.execute(self.query, self.parameters)\n query_result = cur.fetchall()\n column_names = [column[0] for column in cur.description]\n query_result = [dict(zip(column_names, row)) for row in query_result]\n except Exception as e:\n print(f\"An error occurred: {e}\")\n query_result = []\n finally:\n cur.close()\n return query_result\n def _get_columns(\n self, query_result: List[Dict[str, Any]]\n ) -> Tuple[List[str], List[str]]:\n page_content_columns = (", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/snowflake_loader.html"}
+{"id": "c21bda986cb7-2", "text": ") -> Tuple[List[str], List[str]]:\n page_content_columns = (\n self.page_content_columns if self.page_content_columns else []\n )\n metadata_columns = self.metadata_columns if self.metadata_columns else []\n if page_content_columns is None and query_result:\n page_content_columns = list(query_result[0].keys())\n if metadata_columns is None:\n metadata_columns = []\n return page_content_columns or [], metadata_columns\n[docs] def lazy_load(self) -> Iterator[Document]:\n query_result = self._execute_query()\n if isinstance(query_result, Exception):\n print(f\"An error occurred during the query: {query_result}\")\n return []\n page_content_columns, metadata_columns = self._get_columns(query_result)\n if \"*\" in page_content_columns:\n page_content_columns = list(query_result[0].keys())\n for row in query_result:\n page_content = \"\\n\".join(\n f\"{k}: {v}\" for k, v in row.items() if k in page_content_columns\n )\n metadata = {k: v for k, v in row.items() if k in metadata_columns}\n doc = Document(page_content=page_content, metadata=metadata)\n yield doc\n[docs] def load(self) -> List[Document]:\n \"\"\"Load data into document objects.\"\"\"\n return list(self.lazy_load())\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/snowflake_loader.html"}
+{"id": "3d8c857a005e-0", "text": "Source code for langchain.document_loaders.whatsapp_chat\nimport re\nfrom pathlib import Path\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\ndef concatenate_rows(date: str, sender: str, text: str) -> str:\n \"\"\"Combine message information in a readable format ready to be used.\"\"\"\n return f\"{sender} on {date}: {text}\\n\\n\"\n[docs]class WhatsAppChatLoader(BaseLoader):\n \"\"\"Loader that loads WhatsApp messages text file.\"\"\"\n def __init__(self, path: str):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n p = Path(self.file_path)\n text_content = \"\"\n with open(p, encoding=\"utf8\") as f:\n lines = f.readlines()\n message_line_regex = r\"\"\"\n \\[?\n (\n \\d{1,2}\n [\\/.]\n \\d{1,2}\n [\\/.]\n \\d{2,4}\n ,\\s\n \\d{1,2}\n :\\d{2}\n (?:\n :\\d{2}\n )?\n (?:[ _](?:AM|PM))?\n )\n \\]?\n [\\s-]*\n ([~\\w\\s]+)\n [:]+\n \\s\n (.+)\n \"\"\"\n for line in lines:\n result = re.match(message_line_regex, line.strip(), flags=re.VERBOSE)\n if result:\n date, sender, text = result.groups()\n text_content += concatenate_rows(date, sender, text)", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/whatsapp_chat.html"}
+{"id": "3d8c857a005e-1", "text": "text_content += concatenate_rows(date, sender, text)\n metadata = {\"source\": str(p)}\n return [Document(page_content=text_content, metadata=metadata)]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/whatsapp_chat.html"}
+{"id": "3d3b3951834a-0", "text": "Source code for langchain.document_loaders.readthedocs\n\"\"\"Loader that loads ReadTheDocs documentation directory dump.\"\"\"\nfrom pathlib import Path\nfrom typing import Any, List, Optional, Tuple, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class ReadTheDocsLoader(BaseLoader):\n \"\"\"Loader that loads ReadTheDocs documentation directory dump.\"\"\"\n def __init__(\n self,\n path: Union[str, Path],\n encoding: Optional[str] = None,\n errors: Optional[str] = None,\n custom_html_tag: Optional[Tuple[str, dict]] = None,\n **kwargs: Optional[Any]\n ):\n \"\"\"\n Initialize ReadTheDocsLoader\n The loader loops over all files under `path` and extract the actual content of\n the files by retrieving main html tags. Default main html tags include\n ``, <`div role=\"main>`, and ``. You\n can also define your own html tags by passing custom_html_tag, e.g.\n `(\"div\", \"class=main\")`. The loader iterates html tags with the order of\n custom html tags (if exists) and default html tags. If any of the tags is not\n empty, the loop will break and retrieve the content out of that tag.\n Args:\n path: The location of pulled readthedocs folder.\n encoding: The encoding with which to open the documents.\n errors: Specifies how encoding and decoding errors are to be handled\u2014this\n cannot be used in binary mode.\n custom_html_tag: Optional custom html tag to retrieve the content from\n files.\n \"\"\"\n try:\n from bs4 import BeautifulSoup\n except ImportError:\n raise ImportError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/readthedocs.html"}
+{"id": "3d3b3951834a-1", "text": "from bs4 import BeautifulSoup\n except ImportError:\n raise ImportError(\n \"Could not import python packages. \"\n \"Please install it with `pip install beautifulsoup4`. \"\n )\n try:\n _ = BeautifulSoup(\n \"Parser builder library test.\", **kwargs\n )\n except Exception as e:\n raise ValueError(\"Parsing kwargs do not appear valid\") from e\n self.file_path = Path(path)\n self.encoding = encoding\n self.errors = errors\n self.custom_html_tag = custom_html_tag\n self.bs_kwargs = kwargs\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n docs = []\n for p in self.file_path.rglob(\"*\"):\n if p.is_dir():\n continue\n with open(p, encoding=self.encoding, errors=self.errors) as f:\n text = self._clean_data(f.read())\n metadata = {\"source\": str(p)}\n docs.append(Document(page_content=text, metadata=metadata))\n return docs\n def _clean_data(self, data: str) -> str:\n from bs4 import BeautifulSoup\n soup = BeautifulSoup(data, **self.bs_kwargs)\n # default tags\n html_tags = [\n (\"div\", {\"role\": \"main\"}),\n (\"main\", {\"id\": \"main-content\"}),\n ]\n if self.custom_html_tag is not None:\n html_tags.append(self.custom_html_tag)\n text = None\n # reversed order. check the custom one first\n for tag, attrs in html_tags[::-1]:\n text = soup.find(tag, attrs)\n # if found, break\n if text is not None:\n break\n if text is not None:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/readthedocs.html"}
+{"id": "3d3b3951834a-2", "text": "if text is not None:\n break\n if text is not None:\n text = text.get_text()\n else:\n text = \"\"\n # trim empty lines\n return \"\\n\".join([t for t in text.split(\"\\n\") if t])\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/readthedocs.html"}
+{"id": "93a65e460a62-0", "text": "Source code for langchain.document_loaders.telegram\n\"\"\"Loader that loads Telegram chat json dump.\"\"\"\nfrom __future__ import annotations\nimport asyncio\nimport json\nfrom pathlib import Path\nfrom typing import TYPE_CHECKING, Dict, List, Optional, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nif TYPE_CHECKING:\n import pandas as pd\n from telethon.hints import EntityLike\ndef concatenate_rows(row: dict) -> str:\n \"\"\"Combine message information in a readable format ready to be used.\"\"\"\n date = row[\"date\"]\n sender = row[\"from\"]\n text = row[\"text\"]\n return f\"{sender} on {date}: {text}\\n\\n\"\n[docs]class TelegramChatFileLoader(BaseLoader):\n \"\"\"Loader that loads Telegram chat json directory dump.\"\"\"\n def __init__(self, path: str):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n p = Path(self.file_path)\n with open(p, encoding=\"utf8\") as f:\n d = json.load(f)\n text = \"\".join(\n concatenate_rows(message)\n for message in d[\"messages\"]\n if message[\"type\"] == \"message\" and isinstance(message[\"text\"], str)\n )\n metadata = {\"source\": str(p)}\n return [Document(page_content=text, metadata=metadata)]\ndef text_to_docs(text: Union[str, List[str]]) -> List[Document]:\n \"\"\"Converts a string or list of strings to a list of Documents with metadata.\"\"\"\n if isinstance(text, str):\n # Take a single string as one page", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"}
+{"id": "93a65e460a62-1", "text": "if isinstance(text, str):\n # Take a single string as one page\n text = [text]\n page_docs = [Document(page_content=page) for page in text]\n # Add page numbers as metadata\n for i, doc in enumerate(page_docs):\n doc.metadata[\"page\"] = i + 1\n # Split pages into chunks\n doc_chunks = []\n for doc in page_docs:\n text_splitter = RecursiveCharacterTextSplitter(\n chunk_size=800,\n separators=[\"\\n\\n\", \"\\n\", \".\", \"!\", \"?\", \",\", \" \", \"\"],\n chunk_overlap=20,\n )\n chunks = text_splitter.split_text(doc.page_content)\n for i, chunk in enumerate(chunks):\n doc = Document(\n page_content=chunk, metadata={\"page\": doc.metadata[\"page\"], \"chunk\": i}\n )\n # Add sources a metadata\n doc.metadata[\"source\"] = f\"{doc.metadata['page']}-{doc.metadata['chunk']}\"\n doc_chunks.append(doc)\n return doc_chunks\n[docs]class TelegramChatApiLoader(BaseLoader):\n \"\"\"Loader that loads Telegram chat json directory dump.\"\"\"\n def __init__(\n self,\n chat_entity: Optional[EntityLike] = None,\n api_id: Optional[int] = None,\n api_hash: Optional[str] = None,\n username: Optional[str] = None,\n file_path: str = \"telegram_data.json\",\n ):\n \"\"\"Initialize with API parameters.\"\"\"\n self.chat_entity = chat_entity\n self.api_id = api_id\n self.api_hash = api_hash\n self.username = username\n self.file_path = file_path\n[docs] async def fetch_data_from_telegram(self) -> None:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"}
+{"id": "93a65e460a62-2", "text": "[docs] async def fetch_data_from_telegram(self) -> None:\n \"\"\"Fetch data from Telegram API and save it as a JSON file.\"\"\"\n from telethon.sync import TelegramClient\n data = []\n async with TelegramClient(self.username, self.api_id, self.api_hash) as client:\n async for message in client.iter_messages(self.chat_entity):\n is_reply = message.reply_to is not None\n reply_to_id = message.reply_to.reply_to_msg_id if is_reply else None\n data.append(\n {\n \"sender_id\": message.sender_id,\n \"text\": message.text,\n \"date\": message.date.isoformat(),\n \"message.id\": message.id,\n \"is_reply\": is_reply,\n \"reply_to_id\": reply_to_id,\n }\n )\n with open(self.file_path, \"w\", encoding=\"utf-8\") as f:\n json.dump(data, f, ensure_ascii=False, indent=4)\n def _get_message_threads(self, data: pd.DataFrame) -> dict:\n \"\"\"Create a dictionary of message threads from the given data.\n Args:\n data (pd.DataFrame): A DataFrame containing the conversation \\\n data with columns:\n - message.sender_id\n - text\n - date\n - message.id\n - is_reply\n - reply_to_id\n Returns:\n dict: A dictionary where the key is the parent message ID and \\\n the value is a list of message IDs in ascending order.\n \"\"\"\n def find_replies(parent_id: int, reply_data: pd.DataFrame) -> List[int]:\n \"\"\"\n Recursively find all replies to a given parent message ID.\n Args:\n parent_id (int): The parent message ID.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"}
+{"id": "93a65e460a62-3", "text": "Args:\n parent_id (int): The parent message ID.\n reply_data (pd.DataFrame): A DataFrame containing reply messages.\n Returns:\n list: A list of message IDs that are replies to the parent message ID.\n \"\"\"\n # Find direct replies to the parent message ID\n direct_replies = reply_data[reply_data[\"reply_to_id\"] == parent_id][\n \"message.id\"\n ].tolist()\n # Recursively find replies to the direct replies\n all_replies = []\n for reply_id in direct_replies:\n all_replies += [reply_id] + find_replies(reply_id, reply_data)\n return all_replies\n # Filter out parent messages\n parent_messages = data[~data[\"is_reply\"]]\n # Filter out reply messages and drop rows with NaN in 'reply_to_id'\n reply_messages = data[data[\"is_reply\"]].dropna(subset=[\"reply_to_id\"])\n # Convert 'reply_to_id' to integer\n reply_messages[\"reply_to_id\"] = reply_messages[\"reply_to_id\"].astype(int)\n # Create a dictionary of message threads with parent message IDs as keys and \\\n # lists of reply message IDs as values\n message_threads = {\n parent_id: [parent_id] + find_replies(parent_id, reply_messages)\n for parent_id in parent_messages[\"message.id\"]\n }\n return message_threads\n def _combine_message_texts(\n self, message_threads: Dict[int, List[int]], data: pd.DataFrame\n ) -> str:\n \"\"\"\n Combine the message texts for each parent message ID based \\\n on the list of message threads.\n Args:\n message_threads (dict): A dictionary where the key is the parent message \\", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"}
+{"id": "93a65e460a62-4", "text": "message_threads (dict): A dictionary where the key is the parent message \\\n ID and the value is a list of message IDs in ascending order.\n data (pd.DataFrame): A DataFrame containing the conversation data:\n - message.sender_id\n - text\n - date\n - message.id\n - is_reply\n - reply_to_id\n Returns:\n str: A combined string of message texts sorted by date.\n \"\"\"\n combined_text = \"\"\n # Iterate through sorted parent message IDs\n for parent_id, message_ids in message_threads.items():\n # Get the message texts for the message IDs and sort them by date\n message_texts = (\n data[data[\"message.id\"].isin(message_ids)]\n .sort_values(by=\"date\")[\"text\"]\n .tolist()\n )\n message_texts = [str(elem) for elem in message_texts]\n # Combine the message texts\n combined_text += \" \".join(message_texts) + \".\\n\"\n return combined_text.strip()\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n if self.chat_entity is not None:\n try:\n import nest_asyncio\n nest_asyncio.apply()\n asyncio.run(self.fetch_data_from_telegram())\n except ImportError:\n raise ImportError(\n \"\"\"`nest_asyncio` package not found.\n please install with `pip install nest_asyncio`\n \"\"\"\n )\n p = Path(self.file_path)\n with open(p, encoding=\"utf8\") as f:\n d = json.load(f)\n try:\n import pandas as pd\n except ImportError:\n raise ImportError(\n \"\"\"`pandas` package not found. \n please install with `pip install pandas`\n \"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"}
+{"id": "93a65e460a62-5", "text": "please install with `pip install pandas`\n \"\"\"\n )\n normalized_messages = pd.json_normalize(d)\n df = pd.DataFrame(normalized_messages)\n message_threads = self._get_message_threads(df)\n combined_texts = self._combine_message_texts(message_threads, df)\n return text_to_docs(combined_texts)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"}
+{"id": "f813bd6f1dda-0", "text": "Source code for langchain.document_loaders.arxiv\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utilities.arxiv import ArxivAPIWrapper\n[docs]class ArxivLoader(BaseLoader):\n \"\"\"Loads a query result from arxiv.org into a list of Documents.\n Each document represents one Document.\n The loader converts the original PDF format into the text.\n \"\"\"\n def __init__(\n self,\n query: str,\n load_max_docs: Optional[int] = 100,\n load_all_available_meta: Optional[bool] = False,\n ):\n self.query = query\n self.load_max_docs = load_max_docs\n self.load_all_available_meta = load_all_available_meta\n[docs] def load(self) -> List[Document]:\n arxiv_client = ArxivAPIWrapper(\n load_max_docs=self.load_max_docs,\n load_all_available_meta=self.load_all_available_meta,\n )\n docs = arxiv_client.load(self.query)\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/arxiv.html"}
+{"id": "b31758952c65-0", "text": "Source code for langchain.document_loaders.trello\n\"\"\"Loader that loads cards from Trello\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Any, List, Literal, Optional, Tuple\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import get_from_env\nif TYPE_CHECKING:\n from trello import Board, Card, TrelloClient\n[docs]class TrelloLoader(BaseLoader):\n \"\"\"Trello loader. Reads all cards from a Trello board.\"\"\"\n def __init__(\n self,\n client: TrelloClient,\n board_name: str,\n *,\n include_card_name: bool = True,\n include_comments: bool = True,\n include_checklist: bool = True,\n card_filter: Literal[\"closed\", \"open\", \"all\"] = \"all\",\n extra_metadata: Tuple[str, ...] = (\"due_date\", \"labels\", \"list\", \"closed\"),\n ):\n \"\"\"Initialize Trello loader.\n Args:\n client: Trello API client.\n board_name: The name of the Trello board.\n include_card_name: Whether to include the name of the card in the document.\n include_comments: Whether to include the comments on the card in the\n document.\n include_checklist: Whether to include the checklist on the card in the\n document.\n card_filter: Filter on card status. Valid values are \"closed\", \"open\",\n \"all\".\n extra_metadata: List of additional metadata fields to include as document\n metadata.Valid values are \"due_date\", \"labels\", \"list\", \"closed\".\n \"\"\"\n self.client = client\n self.board_name = board_name\n self.include_card_name = include_card_name", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/trello.html"}
+{"id": "b31758952c65-1", "text": "self.board_name = board_name\n self.include_card_name = include_card_name\n self.include_comments = include_comments\n self.include_checklist = include_checklist\n self.extra_metadata = extra_metadata\n self.card_filter = card_filter\n[docs] @classmethod\n def from_credentials(\n cls,\n board_name: str,\n *,\n api_key: Optional[str] = None,\n token: Optional[str] = None,\n **kwargs: Any,\n ) -> TrelloLoader:\n \"\"\"Convenience constructor that builds TrelloClient init param for you.\n Args:\n board_name: The name of the Trello board.\n api_key: Trello API key. Can also be specified as environment variable\n TRELLO_API_KEY.\n token: Trello token. Can also be specified as environment variable\n TRELLO_TOKEN.\n include_card_name: Whether to include the name of the card in the document.\n include_comments: Whether to include the comments on the card in the\n document.\n include_checklist: Whether to include the checklist on the card in the\n document.\n card_filter: Filter on card status. Valid values are \"closed\", \"open\",\n \"all\".\n extra_metadata: List of additional metadata fields to include as document\n metadata.Valid values are \"due_date\", \"labels\", \"list\", \"closed\".\n \"\"\"\n try:\n from trello import TrelloClient # type: ignore\n except ImportError as ex:\n raise ImportError(\n \"Could not import trello python package. \"\n \"Please install it with `pip install py-trello`.\"\n ) from ex\n api_key = api_key or get_from_env(\"api_key\", \"TRELLO_API_KEY\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/trello.html"}
+{"id": "b31758952c65-2", "text": "token = token or get_from_env(\"token\", \"TRELLO_TOKEN\")\n client = TrelloClient(api_key=api_key, token=token)\n return cls(client, board_name, **kwargs)\n[docs] def load(self) -> List[Document]:\n \"\"\"Loads all cards from the specified Trello board.\n You can filter the cards, metadata and text included by using the optional\n parameters.\n Returns:\n A list of documents, one for each card in the board.\n \"\"\"\n try:\n from bs4 import BeautifulSoup # noqa: F401\n except ImportError as ex:\n raise ImportError(\n \"`beautifulsoup4` package not found, please run\"\n \" `pip install beautifulsoup4`\"\n ) from ex\n board = self._get_board()\n # Create a dictionary with the list IDs as keys and the list names as values\n list_dict = {list_item.id: list_item.name for list_item in board.list_lists()}\n # Get Cards on the board\n cards = board.get_cards(card_filter=self.card_filter)\n return [self._card_to_doc(card, list_dict) for card in cards]\n def _get_board(self) -> Board:\n # Find the first board with a matching name\n board = next(\n (b for b in self.client.list_boards() if b.name == self.board_name), None\n )\n if not board:\n raise ValueError(f\"Board `{self.board_name}` not found.\")\n return board\n def _card_to_doc(self, card: Card, list_dict: dict) -> Document:\n from bs4 import BeautifulSoup # type: ignore\n text_content = \"\"\n if self.include_card_name:\n text_content = card.name + \"\\n\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/trello.html"}
+{"id": "b31758952c65-3", "text": "if self.include_card_name:\n text_content = card.name + \"\\n\"\n if card.description.strip():\n text_content += BeautifulSoup(card.description, \"lxml\").get_text()\n if self.include_checklist:\n # Get all the checklist items on the card\n for checklist in card.checklists:\n if checklist.items:\n items = [\n f\"{item['name']}:{item['state']}\" for item in checklist.items\n ]\n text_content += f\"\\n{checklist.name}\\n\" + \"\\n\".join(items)\n if self.include_comments:\n # Get all the comments on the card\n comments = [\n BeautifulSoup(comment[\"data\"][\"text\"], \"lxml\").get_text()\n for comment in card.comments\n ]\n text_content += \"Comments:\" + \"\\n\".join(comments)\n # Default metadata fields\n metadata = {\n \"title\": card.name,\n \"id\": card.id,\n \"url\": card.url,\n }\n # Extra metadata fields. Card object is not subscriptable.\n if \"labels\" in self.extra_metadata:\n metadata[\"labels\"] = [label.name for label in card.labels]\n if \"list\" in self.extra_metadata:\n if card.list_id in list_dict:\n metadata[\"list\"] = list_dict[card.list_id]\n if \"closed\" in self.extra_metadata:\n metadata[\"closed\"] = card.closed\n if \"due_date\" in self.extra_metadata:\n metadata[\"due_date\"] = card.due_date\n return Document(page_content=text_content, metadata=metadata)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/trello.html"}
+{"id": "8f1a44b8eada-0", "text": "Source code for langchain.document_loaders.bigquery\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nif TYPE_CHECKING:\n from google.auth.credentials import Credentials\n[docs]class BigQueryLoader(BaseLoader):\n \"\"\"Loads a query result from BigQuery into a list of documents.\n Each document represents one row of the result. The `page_content_columns`\n are written into the `page_content` of the document. The `metadata_columns`\n are written into the `metadata` of the document. By default, all columns\n are written into the `page_content` and none into the `metadata`.\n \"\"\"\n def __init__(\n self,\n query: str,\n project: Optional[str] = None,\n page_content_columns: Optional[List[str]] = None,\n metadata_columns: Optional[List[str]] = None,\n credentials: Optional[Credentials] = None,\n ):\n \"\"\"Initialize BigQuery document loader.\n Args:\n query: The query to run in BigQuery.\n project: Optional. The project to run the query in.\n page_content_columns: Optional. The columns to write into the `page_content`\n of the document.\n metadata_columns: Optional. The columns to write into the `metadata` of the\n document.\n credentials : google.auth.credentials.Credentials, optional\n Credentials for accessing Google APIs. Use this parameter to override\n default credentials, such as to use Compute Engine\n (`google.auth.compute_engine.Credentials`) or Service Account\n (`google.oauth2.service_account.Credentials`) credentials directly.\n \"\"\"\n self.query = query\n self.project = project\n self.page_content_columns = page_content_columns", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/bigquery.html"}
+{"id": "8f1a44b8eada-1", "text": "self.project = project\n self.page_content_columns = page_content_columns\n self.metadata_columns = metadata_columns\n self.credentials = credentials\n[docs] def load(self) -> List[Document]:\n try:\n from google.cloud import bigquery\n except ImportError as ex:\n raise ValueError(\n \"Could not import google-cloud-bigquery python package. \"\n \"Please install it with `pip install google-cloud-bigquery`.\"\n ) from ex\n bq_client = bigquery.Client(credentials=self.credentials, project=self.project)\n query_result = bq_client.query(self.query).result()\n docs: List[Document] = []\n page_content_columns = self.page_content_columns\n metadata_columns = self.metadata_columns\n if page_content_columns is None:\n page_content_columns = [column.name for column in query_result.schema]\n if metadata_columns is None:\n metadata_columns = []\n for row in query_result:\n page_content = \"\\n\".join(\n f\"{k}: {v}\" for k, v in row.items() if k in page_content_columns\n )\n metadata = {k: v for k, v in row.items() if k in metadata_columns}\n doc = Document(page_content=page_content, metadata=metadata)\n docs.append(doc)\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/bigquery.html"}
+{"id": "4ae9ecbe54a4-0", "text": "Source code for langchain.document_loaders.excel\n\"\"\"Loader that loads Microsoft Excel files.\"\"\"\nfrom typing import Any, List\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n validate_unstructured_version,\n)\n[docs]class UnstructuredExcelLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load Microsoft Excel files.\"\"\"\n def __init__(\n self, file_path: str, mode: str = \"single\", **unstructured_kwargs: Any\n ):\n validate_unstructured_version(min_unstructured_version=\"0.6.7\")\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.xlsx import partition_xlsx\n return partition_xlsx(filename=self.file_path, **self.unstructured_kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/excel.html"}
+{"id": "97fdf384f11b-0", "text": "Source code for langchain.document_loaders.html_bs\n\"\"\"Loader that uses bs4 to load HTML files, enriching metadata with page title.\"\"\"\nimport logging\nfrom typing import Dict, List, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\n[docs]class BSHTMLLoader(BaseLoader):\n \"\"\"Loader that uses beautiful soup to parse HTML files.\"\"\"\n def __init__(\n self,\n file_path: str,\n open_encoding: Union[str, None] = None,\n bs_kwargs: Union[dict, None] = None,\n get_text_separator: str = \"\",\n ) -> None:\n \"\"\"Initialise with path, and optionally, file encoding to use, and any kwargs\n to pass to the BeautifulSoup object.\"\"\"\n try:\n import bs4 # noqa:F401\n except ImportError:\n raise ValueError(\n \"beautifulsoup4 package not found, please install it with \"\n \"`pip install beautifulsoup4`\"\n )\n self.file_path = file_path\n self.open_encoding = open_encoding\n if bs_kwargs is None:\n bs_kwargs = {\"features\": \"lxml\"}\n self.bs_kwargs = bs_kwargs\n self.get_text_separator = get_text_separator\n[docs] def load(self) -> List[Document]:\n from bs4 import BeautifulSoup\n \"\"\"Load HTML document into document objects.\"\"\"\n with open(self.file_path, \"r\", encoding=self.open_encoding) as f:\n soup = BeautifulSoup(f, **self.bs_kwargs)\n text = soup.get_text(self.get_text_separator)\n if soup.title:\n title = str(soup.title.string)\n else:\n title = \"\"\n metadata: Dict[str, Union[str, None]] = {", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/html_bs.html"}
+{"id": "97fdf384f11b-1", "text": "title = \"\"\n metadata: Dict[str, Union[str, None]] = {\n \"source\": self.file_path,\n \"title\": title,\n }\n return [Document(page_content=text, metadata=metadata)]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/html_bs.html"}
+{"id": "18c10dc308e8-0", "text": "Source code for langchain.document_loaders.bibtex\nimport logging\nimport re\nfrom pathlib import Path\nfrom typing import Any, Iterator, List, Mapping, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utilities.bibtex import BibtexparserWrapper\nlogger = logging.getLogger(__name__)\n[docs]class BibtexLoader(BaseLoader):\n \"\"\"Loads a bibtex file into a list of Documents.\n Each document represents one entry from the bibtex file.\n If a PDF file is present in the `file` bibtex field, the original PDF\n is loaded into the document text. If no such file entry is present,\n the `abstract` field is used instead.\n \"\"\"\n def __init__(\n self,\n file_path: str,\n *,\n parser: Optional[BibtexparserWrapper] = None,\n max_docs: Optional[int] = None,\n max_content_chars: Optional[int] = 4_000,\n load_extra_metadata: bool = False,\n file_pattern: str = r\"[^:]+\\.pdf\",\n ):\n \"\"\"Initialize the BibtexLoader.\n Args:\n file_path: Path to the bibtex file.\n max_docs: Max number of associated documents to load. Use -1 means\n no limit.\n \"\"\"\n self.file_path = file_path\n self.parser = parser or BibtexparserWrapper()\n self.max_docs = max_docs\n self.max_content_chars = max_content_chars\n self.load_extra_metadata = load_extra_metadata\n self.file_regex = re.compile(file_pattern)\n def _load_entry(self, entry: Mapping[str, Any]) -> Optional[Document]:\n import fitz", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/bibtex.html"}
+{"id": "18c10dc308e8-1", "text": "import fitz\n parent_dir = Path(self.file_path).parent\n # regex is useful for Zotero flavor bibtex files\n file_names = self.file_regex.findall(entry.get(\"file\", \"\"))\n if not file_names:\n return None\n texts: List[str] = []\n for file_name in file_names:\n try:\n with fitz.open(parent_dir / file_name) as f:\n texts.extend(page.get_text() for page in f)\n except FileNotFoundError as e:\n logger.debug(e)\n content = \"\\n\".join(texts) or entry.get(\"abstract\", \"\")\n if self.max_content_chars:\n content = content[: self.max_content_chars]\n metadata = self.parser.get_metadata(entry, load_extra=self.load_extra_metadata)\n return Document(\n page_content=content,\n metadata=metadata,\n )\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Load bibtex file using bibtexparser and get the article texts plus the\n article metadata.\n See https://bibtexparser.readthedocs.io/en/master/\n Returns:\n a list of documents with the document.page_content in text format\n \"\"\"\n try:\n import fitz # noqa: F401\n except ImportError:\n raise ImportError(\n \"PyMuPDF package not found, please install it with \"\n \"`pip install pymupdf`\"\n )\n entries = self.parser.load_bibtex_entries(self.file_path)\n if self.max_docs:\n entries = entries[: self.max_docs]\n for entry in entries:\n doc = self._load_entry(entry)\n if doc:\n yield doc\n[docs] def load(self) -> List[Document]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/bibtex.html"}
+{"id": "18c10dc308e8-2", "text": "yield doc\n[docs] def load(self) -> List[Document]:\n \"\"\"Load bibtex file documents from the given bibtex file path.\n See https://bibtexparser.readthedocs.io/en/master/\n Args:\n file_path: the path to the bibtex file\n Returns:\n a list of documents with the document.page_content in text format\n \"\"\"\n return list(self.lazy_load())\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/document_loaders/bibtex.html"}
+{"id": "13e625cde2e9-0", "text": "Source code for langchain.tools.plugin\nfrom __future__ import annotations\nimport json\nfrom typing import Optional, Type\nimport requests\nimport yaml\nfrom pydantic import BaseModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nclass ApiConfig(BaseModel):\n type: str\n url: str\n has_user_authentication: Optional[bool] = False\nclass AIPlugin(BaseModel):\n \"\"\"AI Plugin Definition.\"\"\"\n schema_version: str\n name_for_model: str\n name_for_human: str\n description_for_model: str\n description_for_human: str\n auth: Optional[dict] = None\n api: ApiConfig\n logo_url: Optional[str]\n contact_email: Optional[str]\n legal_info_url: Optional[str]\n @classmethod\n def from_url(cls, url: str) -> AIPlugin:\n \"\"\"Instantiate AIPlugin from a URL.\"\"\"\n response = requests.get(url).json()\n return cls(**response)\ndef marshal_spec(txt: str) -> dict:\n \"\"\"Convert the yaml or json serialized spec to a dict.\"\"\"\n try:\n return json.loads(txt)\n except json.JSONDecodeError:\n return yaml.safe_load(txt)\nclass AIPluginToolSchema(BaseModel):\n \"\"\"AIPLuginToolSchema.\"\"\"\n tool_input: Optional[str] = \"\"\n[docs]class AIPluginTool(BaseTool):\n plugin: AIPlugin\n api_spec: str\n args_schema: Type[AIPluginToolSchema] = AIPluginToolSchema\n[docs] @classmethod\n def from_plugin_url(cls, url: str) -> AIPluginTool:\n plugin = AIPlugin.from_url(url)\n description = (", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/plugin.html"}
+{"id": "13e625cde2e9-1", "text": "plugin = AIPlugin.from_url(url)\n description = (\n f\"Call this tool to get the OpenAPI spec (and usage guide) \"\n f\"for interacting with the {plugin.name_for_human} API. \"\n f\"You should only call this ONCE! What is the \"\n f\"{plugin.name_for_human} API useful for? \"\n ) + plugin.description_for_human\n open_api_spec_str = requests.get(plugin.api.url).text\n open_api_spec = marshal_spec(open_api_spec_str)\n api_spec = (\n f\"Usage Guide: {plugin.description_for_model}\\n\\n\"\n f\"OpenAPI Spec: {open_api_spec}\"\n )\n return cls(\n name=plugin.name_for_model,\n description=description,\n plugin=plugin,\n api_spec=api_spec,\n )\n def _run(\n self,\n tool_input: Optional[str] = \"\",\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_spec\n async def _arun(\n self,\n tool_input: Optional[str] = None,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n return self.api_spec\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/plugin.html"}
+{"id": "f52ef237b5c8-0", "text": "Source code for langchain.tools.ifttt\n\"\"\"From https://github.com/SidU/teams-langchain-js/wiki/Connecting-IFTTT-Services.\n# Creating a webhook\n- Go to https://ifttt.com/create\n# Configuring the \"If This\"\n- Click on the \"If This\" button in the IFTTT interface.\n- Search for \"Webhooks\" in the search bar.\n- Choose the first option for \"Receive a web request with a JSON payload.\"\n- Choose an Event Name that is specific to the service you plan to connect to.\nThis will make it easier for you to manage the webhook URL.\nFor example, if you're connecting to Spotify, you could use \"Spotify\" as your\nEvent Name.\n- Click the \"Create Trigger\" button to save your settings and create your webhook.\n# Configuring the \"Then That\"\n- Tap on the \"Then That\" button in the IFTTT interface.\n- Search for the service you want to connect, such as Spotify.\n- Choose an action from the service, such as \"Add track to a playlist\".\n- Configure the action by specifying the necessary details, such as the playlist name,\ne.g., \"Songs from AI\".\n- Reference the JSON Payload received by the Webhook in your action. For the Spotify\nscenario, choose \"{{JsonPayload}}\" as your search query.\n- Tap the \"Create Action\" button to save your action settings.\n- Once you have finished configuring your action, click the \"Finish\" button to\ncomplete the setup.\n- Congratulations! You have successfully connected the Webhook to the desired\nservice, and you're ready to start receiving data and triggering actions \ud83c\udf89\n# Finishing up\n- To get your webhook URL go to https://ifttt.com/maker_webhooks/settings", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/ifttt.html"}
+{"id": "f52ef237b5c8-1", "text": "- To get your webhook URL go to https://ifttt.com/maker_webhooks/settings\n- Copy the IFTTT key value from there. The URL is of the form\nhttps://maker.ifttt.com/use/YOUR_IFTTT_KEY. Grab the YOUR_IFTTT_KEY value.\n\"\"\"\nfrom typing import Optional\nimport requests\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\n[docs]class IFTTTWebhook(BaseTool):\n \"\"\"IFTTT Webhook.\n Args:\n name: name of the tool\n description: description of the tool\n url: url to hit with the json event.\n \"\"\"\n url: str\n def _run(\n self,\n tool_input: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n body = {\"this\": tool_input}\n response = requests.post(self.url, data=body)\n return response.text\n async def _arun(\n self,\n tool_input: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(\"Not implemented.\")\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/ifttt.html"}
+{"id": "46860caba4ff-0", "text": "Source code for langchain.tools.base\n\"\"\"Base implementation for tools or skills.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom abc import ABC, abstractmethod\nfrom inspect import signature\nfrom typing import Any, Awaitable, Callable, Dict, Optional, Tuple, Type, Union\nfrom pydantic import (\n BaseModel,\n Extra,\n Field,\n create_model,\n root_validator,\n validate_arguments,\n)\nfrom pydantic.main import ModelMetaclass\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.callbacks.manager import (\n AsyncCallbackManager,\n AsyncCallbackManagerForToolRun,\n CallbackManager,\n CallbackManagerForToolRun,\n Callbacks,\n)\nclass SchemaAnnotationError(TypeError):\n \"\"\"Raised when 'args_schema' is missing or has an incorrect type annotation.\"\"\"\nclass ToolMetaclass(ModelMetaclass):\n \"\"\"Metaclass for BaseTool to ensure the provided args_schema\n doesn't silently ignored.\"\"\"\n def __new__(\n cls: Type[ToolMetaclass], name: str, bases: Tuple[Type, ...], dct: dict\n ) -> ToolMetaclass:\n \"\"\"Create the definition of the new tool class.\"\"\"\n schema_type: Optional[Type[BaseModel]] = dct.get(\"args_schema\")\n if schema_type is not None:\n schema_annotations = dct.get(\"__annotations__\", {})\n args_schema_type = schema_annotations.get(\"args_schema\", None)\n if args_schema_type is None or args_schema_type == BaseModel:\n # Throw errors for common mis-annotations.\n # TODO: Use get_args / get_origin and fully\n # specify valid annotations.\n typehint_mandate = \"\"\"\nclass ChildTool(BaseTool):\n ...\n args_schema: Type[BaseModel] = SchemaClass\n ...\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/base.html"}
+{"id": "46860caba4ff-1", "text": "...\n args_schema: Type[BaseModel] = SchemaClass\n ...\"\"\"\n raise SchemaAnnotationError(\n f\"Tool definition for {name} must include valid type annotations\"\n f\" for argument 'args_schema' to behave as expected.\\n\"\n f\"Expected annotation of 'Type[BaseModel]'\"\n f\" but got '{args_schema_type}'.\\n\"\n f\"Expected class looks like:\\n\"\n f\"{typehint_mandate}\"\n )\n # Pass through to Pydantic's metaclass\n return super().__new__(cls, name, bases, dct)\ndef _create_subset_model(\n name: str, model: BaseModel, field_names: list\n) -> Type[BaseModel]:\n \"\"\"Create a pydantic model with only a subset of model's fields.\"\"\"\n fields = {\n field_name: (\n model.__fields__[field_name].type_,\n model.__fields__[field_name].default,\n )\n for field_name in field_names\n if field_name in model.__fields__\n }\n return create_model(name, **fields) # type: ignore\ndef get_filtered_args(\n inferred_model: Type[BaseModel],\n func: Callable,\n) -> dict:\n \"\"\"Get the arguments from a function's signature.\"\"\"\n schema = inferred_model.schema()[\"properties\"]\n valid_keys = signature(func).parameters\n return {k: schema[k] for k in valid_keys if k != \"run_manager\"}\nclass _SchemaConfig:\n \"\"\"Configuration for the pydantic model.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\ndef create_schema_from_function(\n model_name: str,\n func: Callable,\n) -> Type[BaseModel]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/base.html"}
+{"id": "46860caba4ff-2", "text": "func: Callable,\n) -> Type[BaseModel]:\n \"\"\"Create a pydantic schema from a function's signature.\"\"\"\n validated = validate_arguments(func, config=_SchemaConfig) # type: ignore\n inferred_model = validated.model # type: ignore\n if \"run_manager\" in inferred_model.__fields__:\n del inferred_model.__fields__[\"run_manager\"]\n # Pydantic adds placeholder virtual fields we need to strip\n filtered_args = get_filtered_args(inferred_model, func)\n return _create_subset_model(\n f\"{model_name}Schema\", inferred_model, list(filtered_args)\n )\nclass ToolException(Exception):\n \"\"\"An optional exception that tool throws when execution error occurs.\n When this exception is thrown, the agent will not stop working,\n but will handle the exception according to the handle_tool_error\n variable of the tool, and the processing result will be returned\n to the agent as observation, and printed in red on the console.\n \"\"\"\n pass\n[docs]class BaseTool(ABC, BaseModel, metaclass=ToolMetaclass):\n \"\"\"Interface LangChain tools must implement.\"\"\"\n name: str\n \"\"\"The unique name of the tool that clearly communicates its purpose.\"\"\"\n description: str\n \"\"\"Used to tell the model how/when/why to use the tool.\n \n You can provide few-shot examples as a part of the description.\n \"\"\"\n args_schema: Optional[Type[BaseModel]] = None\n \"\"\"Pydantic model class to validate and parse the tool's input arguments.\"\"\"\n return_direct: bool = False\n \"\"\"Whether to return the tool's output directly. Setting this to True means\n \n that after the tool is called, the AgentExecutor will stop looping.\n \"\"\"\n verbose: bool = False", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/base.html"}
+{"id": "46860caba4ff-3", "text": "\"\"\"\n verbose: bool = False\n \"\"\"Whether to log the tool's progress.\"\"\"\n callbacks: Callbacks = Field(default=None, exclude=True)\n \"\"\"Callbacks to be called during tool execution.\"\"\"\n callback_manager: Optional[BaseCallbackManager] = Field(default=None, exclude=True)\n \"\"\"Deprecated. Please use callbacks instead.\"\"\"\n handle_tool_error: Optional[\n Union[bool, str, Callable[[ToolException], str]]\n ] = False\n \"\"\"Handle the content of the ToolException thrown.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def is_single_input(self) -> bool:\n \"\"\"Whether the tool only accepts a single input.\"\"\"\n keys = {k for k in self.args if k != \"kwargs\"}\n return len(keys) == 1\n @property\n def args(self) -> dict:\n if self.args_schema is not None:\n return self.args_schema.schema()[\"properties\"]\n else:\n schema = create_schema_from_function(self.name, self._run)\n return schema.schema()[\"properties\"]\n def _parse_input(\n self,\n tool_input: Union[str, Dict],\n ) -> Union[str, Dict[str, Any]]:\n \"\"\"Convert tool input to pydantic model.\"\"\"\n input_args = self.args_schema\n if isinstance(tool_input, str):\n if input_args is not None:\n key_ = next(iter(input_args.__fields__.keys()))\n input_args.validate({key_: tool_input})\n return tool_input\n else:\n if input_args is not None:\n result = input_args.parse_obj(tool_input)", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/base.html"}
+{"id": "46860caba4ff-4", "text": "if input_args is not None:\n result = input_args.parse_obj(tool_input)\n return {k: v for k, v in result.dict().items() if k in tool_input}\n return tool_input\n @root_validator()\n def raise_deprecation(cls, values: Dict) -> Dict:\n \"\"\"Raise deprecation warning if callback_manager is used.\"\"\"\n if values.get(\"callback_manager\") is not None:\n warnings.warn(\n \"callback_manager is deprecated. Please use callbacks instead.\",\n DeprecationWarning,\n )\n values[\"callbacks\"] = values.pop(\"callback_manager\", None)\n return values\n @abstractmethod\n def _run(\n self,\n *args: Any,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Use the tool.\n Add run_manager: Optional[CallbackManagerForToolRun] = None\n to child implementations to enable tracing,\n \"\"\"\n @abstractmethod\n async def _arun(\n self,\n *args: Any,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Use the tool asynchronously.\n Add run_manager: Optional[AsyncCallbackManagerForToolRun] = None\n to child implementations to enable tracing,\n \"\"\"\n def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:\n # For backwards compatibility, if run_input is a string,\n # pass as a positional argument.\n if isinstance(tool_input, str):\n return (tool_input,), {}\n else:\n return (), tool_input\n[docs] def run(\n self,\n tool_input: Union[str, Dict],\n verbose: Optional[bool] = None,", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/base.html"}
+{"id": "46860caba4ff-5", "text": "verbose: Optional[bool] = None,\n start_color: Optional[str] = \"green\",\n color: Optional[str] = \"green\",\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run the tool.\"\"\"\n parsed_input = self._parse_input(tool_input)\n if not self.verbose and verbose is not None:\n verbose_ = verbose\n else:\n verbose_ = self.verbose\n callback_manager = CallbackManager.configure(\n callbacks, self.callbacks, verbose=verbose_\n )\n # TODO: maybe also pass through run_manager is _run supports kwargs\n new_arg_supported = signature(self._run).parameters.get(\"run_manager\")\n run_manager = callback_manager.on_tool_start(\n {\"name\": self.name, \"description\": self.description},\n tool_input if isinstance(tool_input, str) else str(tool_input),\n color=start_color,\n **kwargs,\n )\n try:\n tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)\n observation = (\n self._run(*tool_args, run_manager=run_manager, **tool_kwargs)\n if new_arg_supported\n else self._run(*tool_args, **tool_kwargs)\n )\n except ToolException as e:\n if not self.handle_tool_error:\n run_manager.on_tool_error(e)\n raise e\n elif isinstance(self.handle_tool_error, bool):\n if e.args:\n observation = e.args[0]\n else:\n observation = \"Tool execution error\"\n elif isinstance(self.handle_tool_error, str):\n observation = self.handle_tool_error\n elif callable(self.handle_tool_error):\n observation = self.handle_tool_error(e)\n else:\n raise ValueError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/base.html"}
+{"id": "46860caba4ff-6", "text": "observation = self.handle_tool_error(e)\n else:\n raise ValueError(\n f\"Got unexpected type of `handle_tool_error`. Expected bool, str \"\n f\"or callable. Received: {self.handle_tool_error}\"\n )\n run_manager.on_tool_end(\n str(observation), color=\"red\", name=self.name, **kwargs\n )\n return observation\n except (Exception, KeyboardInterrupt) as e:\n run_manager.on_tool_error(e)\n raise e\n else:\n run_manager.on_tool_end(\n str(observation), color=color, name=self.name, **kwargs\n )\n return observation\n[docs] async def arun(\n self,\n tool_input: Union[str, Dict],\n verbose: Optional[bool] = None,\n start_color: Optional[str] = \"green\",\n color: Optional[str] = \"green\",\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run the tool asynchronously.\"\"\"\n parsed_input = self._parse_input(tool_input)\n if not self.verbose and verbose is not None:\n verbose_ = verbose\n else:\n verbose_ = self.verbose\n callback_manager = AsyncCallbackManager.configure(\n callbacks, self.callbacks, verbose=verbose_\n )\n new_arg_supported = signature(self._arun).parameters.get(\"run_manager\")\n run_manager = await callback_manager.on_tool_start(\n {\"name\": self.name, \"description\": self.description},\n tool_input if isinstance(tool_input, str) else str(tool_input),\n color=start_color,\n **kwargs,\n )\n try:\n # We then call the tool on the tool input to get an observation", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/base.html"}
+{"id": "46860caba4ff-7", "text": "try:\n # We then call the tool on the tool input to get an observation\n tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)\n observation = (\n await self._arun(*tool_args, run_manager=run_manager, **tool_kwargs)\n if new_arg_supported\n else await self._arun(*tool_args, **tool_kwargs)\n )\n except ToolException as e:\n if not self.handle_tool_error:\n await run_manager.on_tool_error(e)\n raise e\n elif isinstance(self.handle_tool_error, bool):\n if e.args:\n observation = e.args[0]\n else:\n observation = \"Tool execution error\"\n elif isinstance(self.handle_tool_error, str):\n observation = self.handle_tool_error\n elif callable(self.handle_tool_error):\n observation = self.handle_tool_error(e)\n else:\n raise ValueError(\n f\"Got unexpected type of `handle_tool_error`. Expected bool, str \"\n f\"or callable. Received: {self.handle_tool_error}\"\n )\n await run_manager.on_tool_end(\n str(observation), color=\"red\", name=self.name, **kwargs\n )\n return observation\n except (Exception, KeyboardInterrupt) as e:\n await run_manager.on_tool_error(e)\n raise e\n else:\n await run_manager.on_tool_end(\n str(observation), color=color, name=self.name, **kwargs\n )\n return observation\n def __call__(self, tool_input: str, callbacks: Callbacks = None) -> str:\n \"\"\"Make tool callable.\"\"\"\n return self.run(tool_input, callbacks=callbacks)\n[docs]class Tool(BaseTool):\n \"\"\"Tool that takes in function or coroutine directly.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/base.html"}
+{"id": "46860caba4ff-8", "text": "\"\"\"Tool that takes in function or coroutine directly.\"\"\"\n description: str = \"\"\n func: Callable[..., str]\n \"\"\"The function to run when the tool is called.\"\"\"\n coroutine: Optional[Callable[..., Awaitable[str]]] = None\n \"\"\"The asynchronous version of the function.\"\"\"\n @property\n def args(self) -> dict:\n \"\"\"The tool's input arguments.\"\"\"\n if self.args_schema is not None:\n return self.args_schema.schema()[\"properties\"]\n # For backwards compatibility, if the function signature is ambiguous,\n # assume it takes a single string input.\n return {\"tool_input\": {\"type\": \"string\"}}\n def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:\n \"\"\"Convert tool input to pydantic model.\"\"\"\n args, kwargs = super()._to_args_and_kwargs(tool_input)\n # For backwards compatibility. The tool must be run with a single input\n all_args = list(args) + list(kwargs.values())\n if len(all_args) != 1:\n raise ValueError(\n f\"Too many arguments to single-input tool {self.name}.\"\n f\" Args: {all_args}\"\n )\n return tuple(all_args), {}\n def _run(\n self,\n *args: Any,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Use the tool.\"\"\"\n new_argument_supported = signature(self.func).parameters.get(\"callbacks\")\n return (\n self.func(\n *args,\n callbacks=run_manager.get_child() if run_manager else None,\n **kwargs,\n )\n if new_argument_supported", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/base.html"}
+{"id": "46860caba4ff-9", "text": "**kwargs,\n )\n if new_argument_supported\n else self.func(*args, **kwargs)\n )\n async def _arun(\n self,\n *args: Any,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Use the tool asynchronously.\"\"\"\n if self.coroutine:\n new_argument_supported = signature(self.coroutine).parameters.get(\n \"callbacks\"\n )\n return (\n await self.coroutine(\n *args,\n callbacks=run_manager.get_child() if run_manager else None,\n **kwargs,\n )\n if new_argument_supported\n else await self.coroutine(*args, **kwargs)\n )\n raise NotImplementedError(\"Tool does not support async\")\n # TODO: this is for backwards compatibility, remove in future\n def __init__(\n self, name: str, func: Callable, description: str, **kwargs: Any\n ) -> None:\n \"\"\"Initialize tool.\"\"\"\n super(Tool, self).__init__(\n name=name, func=func, description=description, **kwargs\n )\n[docs] @classmethod\n def from_function(\n cls,\n func: Callable,\n name: str, # We keep these required to support backwards compatibility\n description: str,\n return_direct: bool = False,\n args_schema: Optional[Type[BaseModel]] = None,\n **kwargs: Any,\n ) -> Tool:\n \"\"\"Initialize tool from a function.\"\"\"\n return cls(\n name=name,\n func=func,\n description=description,\n return_direct=return_direct,\n args_schema=args_schema,\n **kwargs,", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/base.html"}
+{"id": "46860caba4ff-10", "text": "return_direct=return_direct,\n args_schema=args_schema,\n **kwargs,\n )\n[docs]class StructuredTool(BaseTool):\n \"\"\"Tool that can operate on any number of inputs.\"\"\"\n description: str = \"\"\n args_schema: Type[BaseModel] = Field(..., description=\"The tool schema.\")\n \"\"\"The input arguments' schema.\"\"\"\n func: Callable[..., Any]\n \"\"\"The function to run when the tool is called.\"\"\"\n coroutine: Optional[Callable[..., Awaitable[Any]]] = None\n \"\"\"The asynchronous version of the function.\"\"\"\n @property\n def args(self) -> dict:\n \"\"\"The tool's input arguments.\"\"\"\n return self.args_schema.schema()[\"properties\"]\n def _run(\n self,\n *args: Any,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Use the tool.\"\"\"\n new_argument_supported = signature(self.func).parameters.get(\"callbacks\")\n return (\n self.func(\n *args,\n callbacks=run_manager.get_child() if run_manager else None,\n **kwargs,\n )\n if new_argument_supported\n else self.func(*args, **kwargs)\n )\n async def _arun(\n self,\n *args: Any,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n if self.coroutine:\n new_argument_supported = signature(self.coroutine).parameters.get(\n \"callbacks\"\n )\n return (\n await self.coroutine(\n *args,", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/base.html"}
+{"id": "46860caba4ff-11", "text": ")\n return (\n await self.coroutine(\n *args,\n callbacks=run_manager.get_child() if run_manager else None,\n **kwargs,\n )\n if new_argument_supported\n else await self.coroutine(*args, **kwargs)\n )\n raise NotImplementedError(\"Tool does not support async\")\n[docs] @classmethod\n def from_function(\n cls,\n func: Callable,\n name: Optional[str] = None,\n description: Optional[str] = None,\n return_direct: bool = False,\n args_schema: Optional[Type[BaseModel]] = None,\n infer_schema: bool = True,\n **kwargs: Any,\n ) -> StructuredTool:\n name = name or func.__name__\n description = description or func.__doc__\n assert (\n description is not None\n ), \"Function must have a docstring if description not provided.\"\n # Description example:\n # search_api(query: str) - Searches the API for the query.\n description = f\"{name}{signature(func)} - {description.strip()}\"\n _args_schema = args_schema\n if _args_schema is None and infer_schema:\n _args_schema = create_schema_from_function(f\"{name}Schema\", func)\n return cls(\n name=name,\n func=func,\n args_schema=_args_schema,\n description=description,\n return_direct=return_direct,\n **kwargs,\n )\n[docs]def tool(\n *args: Union[str, Callable],\n return_direct: bool = False,\n args_schema: Optional[Type[BaseModel]] = None,\n infer_schema: bool = True,\n) -> Callable:", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/base.html"}
+{"id": "46860caba4ff-12", "text": "infer_schema: bool = True,\n) -> Callable:\n \"\"\"Make tools out of functions, can be used with or without arguments.\n Args:\n *args: The arguments to the tool.\n return_direct: Whether to return directly from the tool rather\n than continuing the agent loop.\n args_schema: optional argument schema for user to specify\n infer_schema: Whether to infer the schema of the arguments from\n the function's signature. This also makes the resultant tool\n accept a dictionary input to its `run()` function.\n Requires:\n - Function must be of type (str) -> str\n - Function must have a docstring\n Examples:\n .. code-block:: python\n @tool\n def search_api(query: str) -> str:\n # Searches the API for the query.\n return\n @tool(\"search\", return_direct=True)\n def search_api(query: str) -> str:\n # Searches the API for the query.\n return\n \"\"\"\n def _make_with_name(tool_name: str) -> Callable:\n def _make_tool(func: Callable) -> BaseTool:\n if infer_schema or args_schema is not None:\n return StructuredTool.from_function(\n func,\n name=tool_name,\n return_direct=return_direct,\n args_schema=args_schema,\n infer_schema=infer_schema,\n )\n # If someone doesn't want a schema applied, we must treat it as\n # a simple string->string function\n assert func.__doc__ is not None, \"Function must have a docstring\"\n return Tool(\n name=tool_name,\n func=func,\n description=f\"{tool_name} tool\",\n return_direct=return_direct,\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/base.html"}
+{"id": "46860caba4ff-13", "text": "return_direct=return_direct,\n )\n return _make_tool\n if len(args) == 1 and isinstance(args[0], str):\n # if the argument is a string, then we use the string as the tool name\n # Example usage: @tool(\"search\", return_direct=True)\n return _make_with_name(args[0])\n elif len(args) == 1 and callable(args[0]):\n # if the argument is a function, then we use the function name as the tool name\n # Example usage: @tool\n return _make_with_name(args[0].__name__)(args[0])\n elif len(args) == 0:\n # if there are no arguments, then we use the function name as the tool name\n # Example usage: @tool(return_direct=True)\n def _partial(func: Callable[[str], str]) -> BaseTool:\n return _make_with_name(func.__name__)(func)\n return _partial\n else:\n raise ValueError(\"Too many arguments for tool decorator\")\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/base.html"}
+{"id": "dcbc2da06604-0", "text": "Source code for langchain.tools.playwright.click\nfrom __future__ import annotations\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import (\n aget_current_page,\n get_current_page,\n)\nclass ClickToolInput(BaseModel):\n \"\"\"Input for ClickTool.\"\"\"\n selector: str = Field(..., description=\"CSS selector for the element to click\")\n[docs]class ClickTool(BaseBrowserTool):\n name: str = \"click_element\"\n description: str = \"Click on an element with the given CSS selector\"\n args_schema: Type[BaseModel] = ClickToolInput\n visible_only: bool = True\n \"\"\"Whether to consider only visible elements.\"\"\"\n playwright_strict: bool = False\n \"\"\"Whether to employ Playwright's strict mode when clicking on elements.\"\"\"\n playwright_timeout: float = 1_000\n \"\"\"Timeout (in ms) for Playwright to wait for element to be ready.\"\"\"\n def _selector_effective(self, selector: str) -> str:\n if not self.visible_only:\n return selector\n return f\"{selector} >> visible=1\"\n def _run(\n self,\n selector: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n # Navigate to the desired webpage before using this tool", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/playwright/click.html"}
+{"id": "dcbc2da06604-1", "text": "# Navigate to the desired webpage before using this tool\n selector_effective = self._selector_effective(selector=selector)\n from playwright.sync_api import TimeoutError as PlaywrightTimeoutError\n try:\n page.click(\n selector_effective,\n strict=self.playwright_strict,\n timeout=self.playwright_timeout,\n )\n except PlaywrightTimeoutError:\n return f\"Unable to click on element '{selector}'\"\n return f\"Clicked element '{selector}'\"\n async def _arun(\n self,\n selector: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n # Navigate to the desired webpage before using this tool\n selector_effective = self._selector_effective(selector=selector)\n from playwright.async_api import TimeoutError as PlaywrightTimeoutError\n try:\n await page.click(\n selector_effective,\n strict=self.playwright_strict,\n timeout=self.playwright_timeout,\n )\n except PlaywrightTimeoutError:\n return f\"Unable to click on element '{selector}'\"\n return f\"Clicked element '{selector}'\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/playwright/click.html"}
+{"id": "61af0d0587da-0", "text": "Source code for langchain.tools.playwright.get_elements\nfrom __future__ import annotations\nimport json\nfrom typing import TYPE_CHECKING, List, Optional, Sequence, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import aget_current_page, get_current_page\nif TYPE_CHECKING:\n from playwright.async_api import Page as AsyncPage\n from playwright.sync_api import Page as SyncPage\nclass GetElementsToolInput(BaseModel):\n \"\"\"Input for GetElementsTool.\"\"\"\n selector: str = Field(\n ...,\n description=\"CSS selector, such as '*', 'div', 'p', 'a', #id, .classname\",\n )\n attributes: List[str] = Field(\n default_factory=lambda: [\"innerText\"],\n description=\"Set of attributes to retrieve for each element\",\n )\nasync def _aget_elements(\n page: AsyncPage, selector: str, attributes: Sequence[str]\n) -> List[dict]:\n \"\"\"Get elements matching the given CSS selector.\"\"\"\n elements = await page.query_selector_all(selector)\n results = []\n for element in elements:\n result = {}\n for attribute in attributes:\n if attribute == \"innerText\":\n val: Optional[str] = await element.inner_text()\n else:\n val = await element.get_attribute(attribute)\n if val is not None and val.strip() != \"\":\n result[attribute] = val\n if result:\n results.append(result)\n return results\ndef _get_elements(\n page: SyncPage, selector: str, attributes: Sequence[str]\n) -> List[dict]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/playwright/get_elements.html"}
+{"id": "61af0d0587da-1", "text": ") -> List[dict]:\n \"\"\"Get elements matching the given CSS selector.\"\"\"\n elements = page.query_selector_all(selector)\n results = []\n for element in elements:\n result = {}\n for attribute in attributes:\n if attribute == \"innerText\":\n val: Optional[str] = element.inner_text()\n else:\n val = element.get_attribute(attribute)\n if val is not None and val.strip() != \"\":\n result[attribute] = val\n if result:\n results.append(result)\n return results\n[docs]class GetElementsTool(BaseBrowserTool):\n name: str = \"get_elements\"\n description: str = (\n \"Retrieve elements in the current web page matching the given CSS selector\"\n )\n args_schema: Type[BaseModel] = GetElementsToolInput\n def _run(\n self,\n selector: str,\n attributes: Sequence[str] = [\"innerText\"],\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n # Navigate to the desired webpage before using this tool\n results = _get_elements(page, selector, attributes)\n return json.dumps(results, ensure_ascii=False)\n async def _arun(\n self,\n selector: str,\n attributes: Sequence[str] = [\"innerText\"],\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/playwright/get_elements.html"}
+{"id": "61af0d0587da-2", "text": "raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n # Navigate to the desired webpage before using this tool\n results = await _aget_elements(page, selector, attributes)\n return json.dumps(results, ensure_ascii=False)", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/playwright/get_elements.html"}
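A short usage sketch for `GetElementsTool`: it needs a browser with a current page, so this example first navigates with `NavigateTool` (defined later in this file). The `create_sync_playwright_browser` helper lives in `langchain.tools.playwright.utils`; the URL and selector are illustrative:

```python
from langchain.tools.playwright import GetElementsTool, NavigateTool
from langchain.tools.playwright.utils import create_sync_playwright_browser

browser = create_sync_playwright_browser()
NavigateTool(sync_browser=browser).run({"url": "https://example.com"})

# Returns a JSON string, e.g. [{"innerText": "...", "href": "/about"}]
elements = GetElementsTool(sync_browser=browser).run(
    {"selector": "a", "attributes": ["innerText", "href"]}
)
print(elements)
```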
+{"id": "a2aa35e494f2-0", "text": "Source code for langchain.tools.playwright.current_page\nfrom __future__ import annotations\nfrom typing import Optional, Type\nfrom pydantic import BaseModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import aget_current_page, get_current_page\n[docs]class CurrentWebPageTool(BaseBrowserTool):\n name: str = \"current_webpage\"\n description: str = \"Returns the URL of the current page\"\n args_schema: Type[BaseModel] = BaseModel\n def _run(\n self,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n return str(page.url)\n async def _arun(\n self,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n return str(page.url)", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/playwright/current_page.html"}
+{"id": "8002f2737ca6-0", "text": "Source code for langchain.tools.playwright.extract_hyperlinks\nfrom __future__ import annotations\nimport json\nfrom typing import TYPE_CHECKING, Any, Optional, Type\nfrom pydantic import BaseModel, Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import aget_current_page, get_current_page\nif TYPE_CHECKING:\n pass\nclass ExtractHyperlinksToolInput(BaseModel):\n \"\"\"Input for ExtractHyperlinksTool.\"\"\"\n absolute_urls: bool = Field(\n default=False,\n description=\"Return absolute URLs instead of relative URLs\",\n )\n[docs]class ExtractHyperlinksTool(BaseBrowserTool):\n \"\"\"Extract all hyperlinks on the page.\"\"\"\n name: str = \"extract_hyperlinks\"\n description: str = \"Extract all hyperlinks on the current webpage\"\n args_schema: Type[BaseModel] = ExtractHyperlinksToolInput\n @root_validator\n def check_bs_import(cls, values: dict) -> dict:\n \"\"\"Check that the arguments are valid.\"\"\"\n try:\n from bs4 import BeautifulSoup # noqa: F401\n except ImportError:\n raise ValueError(\n \"The 'beautifulsoup4' package is required to use this tool.\"\n \" Please install it with 'pip install beautifulsoup4'.\"\n )\n return values\n[docs] @staticmethod\n def scrape_page(page: Any, html_content: str, absolute_urls: bool) -> str:\n from urllib.parse import urljoin\n from bs4 import BeautifulSoup\n # Parse the HTML content with BeautifulSoup\n soup = BeautifulSoup(html_content, \"lxml\")\n # Find all the anchor elements and extract their href attributes", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/playwright/extract_hyperlinks.html"}
+{"id": "8002f2737ca6-1", "text": "# Find all the anchor elements and extract their href attributes\n anchors = soup.find_all(\"a\")\n if absolute_urls:\n base_url = page.url\n links = [urljoin(base_url, anchor.get(\"href\", \"\")) for anchor in anchors]\n else:\n links = [anchor.get(\"href\", \"\") for anchor in anchors]\n # Return the list of links as a JSON string\n return json.dumps(links)\n def _run(\n self,\n absolute_urls: bool = False,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n html_content = page.content()\n return self.scrape_page(page, html_content, absolute_urls)\n async def _arun(\n self,\n absolute_urls: bool = False,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n html_content = await page.content()\n return self.scrape_page(page, html_content, absolute_urls)", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/playwright/extract_hyperlinks.html"}
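The `absolute_urls` branch in `scrape_page` relies entirely on `urllib.parse.urljoin`, which resolves relative hrefs against the page URL and leaves already-absolute links untouched. A standard-library-only illustration (URLs are placeholders):

```python
from urllib.parse import urljoin

base_url = "https://example.com/docs/page.html"
hrefs = ["/about", "contact.html", "https://other.site/x"]
print([urljoin(base_url, h) for h in hrefs])
# ['https://example.com/about',
#  'https://example.com/docs/contact.html',
#  'https://other.site/x']
```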
+{"id": "87a702a7dc78-0", "text": "Source code for langchain.tools.playwright.navigate_back\nfrom __future__ import annotations\nfrom typing import Optional, Type\nfrom pydantic import BaseModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import (\n aget_current_page,\n get_current_page,\n)\n[docs]class NavigateBackTool(BaseBrowserTool):\n \"\"\"Navigate back to the previous page in the browser history.\"\"\"\n name: str = \"previous_webpage\"\n description: str = \"Navigate back to the previous page in the browser history\"\n args_schema: Type[BaseModel] = BaseModel\n def _run(self, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n response = page.go_back()\n if response:\n return (\n f\"Navigated back to the previous page with URL '{response.url}'.\"\n f\" Status code {response.status}\"\n )\n else:\n return \"Unable to navigate back; no previous page in the history\"\n async def _arun(\n self,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n response = await page.go_back()\n if response:\n return (", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/playwright/navigate_back.html"}
+{"id": "87a702a7dc78-1", "text": "response = await page.go_back()\n if response:\n return (\n f\"Navigated back to the previous page with URL '{response.url}'.\"\n f\" Status code {response.status}\"\n )\n else:\n return \"Unable to navigate back; no previous page in the history\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/playwright/navigate_back.html"}
+{"id": "12a4f868196b-0", "text": "Source code for langchain.tools.playwright.navigate\nfrom __future__ import annotations\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import (\n aget_current_page,\n get_current_page,\n)\nclass NavigateToolInput(BaseModel):\n \"\"\"Input for NavigateTool.\"\"\"\n url: str = Field(..., description=\"url to navigate to\")\n[docs]class NavigateTool(BaseBrowserTool):\n name: str = \"navigate_browser\"\n description: str = \"Navigate a browser to the specified URL\"\n args_schema: Type[BaseModel] = NavigateToolInput\n def _run(\n self,\n url: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n response = page.goto(url)\n status = response.status if response else \"unknown\"\n return f\"Navigating to {url} returned status code {status}\"\n async def _arun(\n self,\n url: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n response = await page.goto(url)", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/playwright/navigate.html"}
+{"id": "12a4f868196b-1", "text": "response = await page.goto(url)\n status = response.status if response else \"unknown\"\n return f\"Navigating to {url} returned status code {status}\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/playwright/navigate.html"}
+{"id": "1045adbb2847-0", "text": "Source code for langchain.tools.playwright.extract_text\nfrom __future__ import annotations\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import aget_current_page, get_current_page\n[docs]class ExtractTextTool(BaseBrowserTool):\n name: str = \"extract_text\"\n description: str = \"Extract all the text on the current webpage\"\n args_schema: Type[BaseModel] = BaseModel\n @root_validator\n def check_bs_import(cls, values: dict) -> dict:\n \"\"\"Check that the arguments are valid.\"\"\"\n try:\n from bs4 import BeautifulSoup # noqa: F401\n except ImportError:\n raise ValueError(\n \"The 'beautifulsoup4' package is required to use this tool.\"\n \" Please install it with 'pip install beautifulsoup4'.\"\n )\n return values\n def _run(self, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:\n \"\"\"Use the tool.\"\"\"\n # Use Beautiful Soup since it's faster than looping through the elements\n from bs4 import BeautifulSoup\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n html_content = page.content()\n # Parse the HTML content with BeautifulSoup\n soup = BeautifulSoup(html_content, \"lxml\")\n return \" \".join(text for text in soup.stripped_strings)\n async def _arun(\n self, run_manager: Optional[AsyncCallbackManagerForToolRun] = None", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/playwright/extract_text.html"}
+{"id": "1045adbb2847-1", "text": "self, run_manager: Optional[AsyncCallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n # Use Beautiful Soup since it's faster than looping through the elements\n from bs4 import BeautifulSoup\n page = await aget_current_page(self.async_browser)\n html_content = await page.content()\n # Parse the HTML content with BeautifulSoup\n soup = BeautifulSoup(html_content, \"lxml\")\n return \" \".join(text for text in soup.stripped_strings)", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/playwright/extract_text.html"}
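Rather than instantiating each browser tool by hand, the tools in the modules above are usually wired together through `PlayWrightBrowserToolkit`. A sketch of that setup:

```python
from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit
from langchain.tools.playwright.utils import create_sync_playwright_browser

sync_browser = create_sync_playwright_browser()
toolkit = PlayWrightBrowserToolkit.from_browser(sync_browser=sync_browser)
for t in toolkit.get_tools():
    print(t.name)  # navigate_browser, click_element, get_elements, extract_text, ...
```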
+{"id": "e0a1a5bcc832-0", "text": "Source code for langchain.tools.azure_cognitive_services.text2speech\nfrom __future__ import annotations\nimport logging\nimport tempfile\nfrom typing import Any, Dict, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class AzureCogsText2SpeechTool(BaseTool):\n \"\"\"Tool that queries the Azure Cognitive Services Text2Speech API.\n In order to set this up, follow instructions at:\n https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-text-to-speech?pivots=programming-language-python\n \"\"\"\n azure_cogs_key: str = \"\" #: :meta private:\n azure_cogs_region: str = \"\" #: :meta private:\n speech_language: str = \"en-US\" #: :meta private:\n speech_config: Any #: :meta private:\n name = \"Azure Cognitive Services Text2Speech\"\n description = (\n \"A wrapper around Azure Cognitive Services Text2Speech. \"\n \"Useful for when you need to convert text to speech. \"\n )\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n azure_cogs_key = get_from_dict_or_env(\n values, \"azure_cogs_key\", \"AZURE_COGS_KEY\"\n )\n azure_cogs_region = get_from_dict_or_env(\n values, \"azure_cogs_region\", \"AZURE_COGS_REGION\"\n )\n try:", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/text2speech.html"}
+{"id": "e0a1a5bcc832-1", "text": ")\n try:\n import azure.cognitiveservices.speech as speechsdk\n values[\"speech_config\"] = speechsdk.SpeechConfig(\n subscription=azure_cogs_key, region=azure_cogs_region\n )\n except ImportError:\n raise ImportError(\n \"azure-cognitiveservices-speech is not installed. \"\n \"Run `pip install azure-cognitiveservices-speech` to install.\"\n )\n return values\n def _text2speech(self, text: str, speech_language: str) -> str:\n try:\n import azure.cognitiveservices.speech as speechsdk\n except ImportError:\n # Swallowing the error here would cause a NameError on speechsdk below;\n # re-raise with the same guidance as validate_environment.\n raise ImportError(\n \"azure-cognitiveservices-speech is not installed. \"\n \"Run `pip install azure-cognitiveservices-speech` to install.\"\n )\n self.speech_config.speech_synthesis_language = speech_language\n speech_synthesizer = speechsdk.SpeechSynthesizer(\n speech_config=self.speech_config, audio_config=None\n )\n result = speech_synthesizer.speak_text(text)\n if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:\n stream = speechsdk.AudioDataStream(result)\n with tempfile.NamedTemporaryFile(\n mode=\"wb\", suffix=\".wav\", delete=False\n ) as f:\n stream.save_to_wav_file(f.name)\n return f.name\n elif result.reason == speechsdk.ResultReason.Canceled:\n cancellation_details = result.cancellation_details\n logger.debug(f\"Speech synthesis canceled: {cancellation_details.reason}\")\n if cancellation_details.reason == speechsdk.CancellationReason.Error:\n raise RuntimeError(\n f\"Speech synthesis error: {cancellation_details.error_details}\"\n )\n return \"Speech synthesis canceled.\"\n else:\n return f\"Speech synthesis failed: {result.reason}\"\n def _run(\n self,\n query: str,", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/text2speech.html"}
+{"id": "e0a1a5bcc832-2", "text": "def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n try:\n speech_file = self._text2speech(query, self.speech_language)\n return speech_file\n except Exception as e:\n raise RuntimeError(f\"Error while running AzureCogsText2SpeechTool: {e}\")\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"AzureCogsText2SpeechTool does not support async\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/text2speech.html"}
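A usage sketch for the tool above; the key and region values are placeholders, and the environment variable names match what `validate_environment` reads via `get_from_dict_or_env`:

```python
import os

os.environ["AZURE_COGS_KEY"] = "<your-key>"        # placeholder
os.environ["AZURE_COGS_REGION"] = "<your-region>"  # e.g. "westus2"

from langchain.tools.azure_cognitive_services import AzureCogsText2SpeechTool

tts = AzureCogsText2SpeechTool()
wav_path = tts.run("LangChain can talk now.")
print(wav_path)  # path to the temporary .wav file written by _text2speech
```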
+{"id": "dd9b6e2d9b95-0", "text": "Source code for langchain.tools.azure_cognitive_services.form_recognizer\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.azure_cognitive_services.utils import detect_file_src_type\nfrom langchain.tools.base import BaseTool\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class AzureCogsFormRecognizerTool(BaseTool):\n \"\"\"Tool that queries the Azure Cognitive Services Form Recognizer API.\n In order to set this up, follow instructions at:\n https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/quickstarts/get-started-sdks-rest-api?view=form-recog-3.0.0&pivots=programming-language-python\n \"\"\"\n azure_cogs_key: str = \"\" #: :meta private:\n azure_cogs_endpoint: str = \"\" #: :meta private:\n doc_analysis_client: Any #: :meta private:\n name = \"Azure Cognitive Services Form Recognizer\"\n description = (\n \"A wrapper around Azure Cognitive Services Form Recognizer. \"\n \"Useful for when you need to \"\n \"extract text, tables, and key-value pairs from documents. \"\n \"Input should be a url to a document.\"\n )\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n azure_cogs_key = get_from_dict_or_env(\n values, \"azure_cogs_key\", \"AZURE_COGS_KEY\"\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/form_recognizer.html"}
+{"id": "dd9b6e2d9b95-1", "text": ")\n azure_cogs_endpoint = get_from_dict_or_env(\n values, \"azure_cogs_endpoint\", \"AZURE_COGS_ENDPOINT\"\n )\n try:\n from azure.ai.formrecognizer import DocumentAnalysisClient\n from azure.core.credentials import AzureKeyCredential\n values[\"doc_analysis_client\"] = DocumentAnalysisClient(\n endpoint=azure_cogs_endpoint,\n credential=AzureKeyCredential(azure_cogs_key),\n )\n except ImportError:\n raise ImportError(\n \"azure-ai-formrecognizer is not installed. \"\n \"Run `pip install azure-ai-formrecognizer` to install.\"\n )\n return values\n def _parse_tables(self, tables: List[Any]) -> List[Any]:\n result = []\n for table in tables:\n rc, cc = table.row_count, table.column_count\n _table = [[\"\" for _ in range(cc)] for _ in range(rc)]\n for cell in table.cells:\n _table[cell.row_index][cell.column_index] = cell.content\n result.append(_table)\n return result\n def _parse_kv_pairs(self, kv_pairs: List[Any]) -> List[Any]:\n result = []\n for kv_pair in kv_pairs:\n key = kv_pair.key.content if kv_pair.key else \"\"\n value = kv_pair.value.content if kv_pair.value else \"\"\n result.append((key, value))\n return result\n def _document_analysis(self, document_path: str) -> Dict:\n document_src_type = detect_file_src_type(document_path)\n if document_src_type == \"local\":\n with open(document_path, \"rb\") as document:\n poller = self.doc_analysis_client.begin_analyze_document(\n \"prebuilt-document\", document\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/form_recognizer.html"}
+{"id": "dd9b6e2d9b95-2", "text": "\"prebuilt-document\", document\n )\n elif document_src_type == \"remote\":\n poller = self.doc_analysis_client.begin_analyze_document_from_url(\n \"prebuilt-document\", document_path\n )\n else:\n raise ValueError(f\"Invalid document path: {document_path}\")\n result = poller.result()\n res_dict = {}\n if result.content is not None:\n res_dict[\"content\"] = result.content\n if result.tables is not None:\n res_dict[\"tables\"] = self._parse_tables(result.tables)\n if result.key_value_pairs is not None:\n res_dict[\"key_value_pairs\"] = self._parse_kv_pairs(result.key_value_pairs)\n return res_dict\n def _format_document_analysis_result(self, document_analysis_result: Dict) -> str:\n formatted_result = []\n if \"content\" in document_analysis_result:\n formatted_result.append(\n f\"Content: {document_analysis_result['content']}\".replace(\"\\n\", \" \")\n )\n if \"tables\" in document_analysis_result:\n for i, table in enumerate(document_analysis_result[\"tables\"]):\n formatted_result.append(f\"Table {i}: {table}\".replace(\"\\n\", \" \"))\n if \"key_value_pairs\" in document_analysis_result:\n for kv_pair in document_analysis_result[\"key_value_pairs\"]:\n formatted_result.append(\n f\"{kv_pair[0]}: {kv_pair[1]}\".replace(\"\\n\", \" \")\n )\n return \"\\n\".join(formatted_result)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n try:", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/form_recognizer.html"}
+{"id": "dd9b6e2d9b95-3", "text": ") -> str:\n \"\"\"Use the tool.\"\"\"\n try:\n document_analysis_result = self._document_analysis(query)\n if not document_analysis_result:\n return \"No good document analysis result was found\"\n return self._format_document_analysis_result(document_analysis_result)\n except Exception as e:\n raise RuntimeError(f\"Error while running AzureCogsFormRecognizerTool: {e}\")\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"AzureCogsFormRecognizerTool does not support async\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/form_recognizer.html"}
+{"id": "1063015c849d-0", "text": "Source code for langchain.tools.azure_cognitive_services.image_analysis\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Dict, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.azure_cognitive_services.utils import detect_file_src_type\nfrom langchain.tools.base import BaseTool\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class AzureCogsImageAnalysisTool(BaseTool):\n \"\"\"Tool that queries the Azure Cognitive Services Image Analysis API.\n In order to set this up, follow instructions at:\n https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40\n \"\"\"\n azure_cogs_key: str = \"\" #: :meta private:\n azure_cogs_endpoint: str = \"\" #: :meta private:\n vision_service: Any #: :meta private:\n analysis_options: Any #: :meta private:\n name = \"Azure Cognitive Services Image Analysis\"\n description = (\n \"A wrapper around Azure Cognitive Services Image Analysis. \"\n \"Useful for when you need to analyze images. \"\n \"Input should be a url to an image.\"\n )\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n azure_cogs_key = get_from_dict_or_env(\n values, \"azure_cogs_key\", \"AZURE_COGS_KEY\"\n )\n azure_cogs_endpoint = get_from_dict_or_env(\n values, \"azure_cogs_endpoint\", \"AZURE_COGS_ENDPOINT\"\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/image_analysis.html"}
+{"id": "1063015c849d-1", "text": ")\n try:\n import azure.ai.vision as sdk\n values[\"vision_service\"] = sdk.VisionServiceOptions(\n endpoint=azure_cogs_endpoint, key=azure_cogs_key\n )\n values[\"analysis_options\"] = sdk.ImageAnalysisOptions()\n values[\"analysis_options\"].features = (\n sdk.ImageAnalysisFeature.CAPTION\n | sdk.ImageAnalysisFeature.OBJECTS\n | sdk.ImageAnalysisFeature.TAGS\n | sdk.ImageAnalysisFeature.TEXT\n )\n except ImportError:\n raise ImportError(\n \"azure-ai-vision is not installed. \"\n \"Run `pip install azure-ai-vision` to install.\"\n )\n return values\n def _image_analysis(self, image_path: str) -> Dict:\n try:\n import azure.ai.vision as sdk\n except ImportError:\n # Re-raise so we fail loudly instead of hitting a NameError on sdk below.\n raise ImportError(\n \"azure-ai-vision is not installed. \"\n \"Run `pip install azure-ai-vision` to install.\"\n )\n image_src_type = detect_file_src_type(image_path)\n if image_src_type == \"local\":\n vision_source = sdk.VisionSource(filename=image_path)\n elif image_src_type == \"remote\":\n vision_source = sdk.VisionSource(url=image_path)\n else:\n raise ValueError(f\"Invalid image path: {image_path}\")\n image_analyzer = sdk.ImageAnalyzer(\n self.vision_service, vision_source, self.analysis_options\n )\n result = image_analyzer.analyze()\n res_dict = {}\n if result.reason == sdk.ImageAnalysisResultReason.ANALYZED:\n if result.caption is not None:\n res_dict[\"caption\"] = result.caption.content\n if result.objects is not None:\n res_dict[\"objects\"] = [obj.name for obj in result.objects]\n if result.tags is not None:\n res_dict[\"tags\"] = [tag.name for tag in result.tags]", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/image_analysis.html"}
+{"id": "1063015c849d-2", "text": "res_dict[\"tags\"] = [tag.name for tag in result.tags]\n if result.text is not None:\n res_dict[\"text\"] = [line.content for line in result.text.lines]\n else:\n error_details = sdk.ImageAnalysisErrorDetails.from_result(result)\n raise RuntimeError(\n f\"Image analysis failed.\\n\"\n f\"Reason: {error_details.reason}\\n\"\n f\"Details: {error_details.message}\"\n )\n return res_dict\n def _format_image_analysis_result(self, image_analysis_result: Dict) -> str:\n formatted_result = []\n if \"caption\" in image_analysis_result:\n formatted_result.append(\"Caption: \" + image_analysis_result[\"caption\"])\n if (\n \"objects\" in image_analysis_result\n and len(image_analysis_result[\"objects\"]) > 0\n ):\n formatted_result.append(\n \"Objects: \" + \", \".join(image_analysis_result[\"objects\"])\n )\n if \"tags\" in image_analysis_result and len(image_analysis_result[\"tags\"]) > 0:\n formatted_result.append(\"Tags: \" + \", \".join(image_analysis_result[\"tags\"]))\n if \"text\" in image_analysis_result and len(image_analysis_result[\"text\"]) > 0:\n formatted_result.append(\"Text: \" + \", \".join(image_analysis_result[\"text\"]))\n return \"\\n\".join(formatted_result)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n try:\n image_analysis_result = self._image_analysis(query)\n if not image_analysis_result:\n return \"No good image analysis result was found\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/image_analysis.html"}
+{"id": "1063015c849d-3", "text": "if not image_analysis_result:\n return \"No good image analysis result was found\"\n return self._format_image_analysis_result(image_analysis_result)\n except Exception as e:\n raise RuntimeError(f\"Error while running AzureCogsImageAnalysisTool: {e}\")\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"AzureCogsImageAnalysisTool does not support async\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/image_analysis.html"}
+{"id": "7752e2802b18-0", "text": "Source code for langchain.tools.azure_cognitive_services.speech2text\nfrom __future__ import annotations\nimport logging\nimport time\nfrom typing import Any, Dict, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.azure_cognitive_services.utils import (\n detect_file_src_type,\n download_audio_from_url,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class AzureCogsSpeech2TextTool(BaseTool):\n \"\"\"Tool that queries the Azure Cognitive Services Speech2Text API.\n In order to set this up, follow instructions at:\n https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-speech-to-text?pivots=programming-language-python\n \"\"\"\n azure_cogs_key: str = \"\" #: :meta private:\n azure_cogs_region: str = \"\" #: :meta private:\n speech_language: str = \"en-US\" #: :meta private:\n speech_config: Any #: :meta private:\n name = \"Azure Cognitive Services Speech2Text\"\n description = (\n \"A wrapper around Azure Cognitive Services Speech2Text. \"\n \"Useful for when you need to transcribe audio to text. \"\n \"Input should be a url to an audio file.\"\n )\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n azure_cogs_key = get_from_dict_or_env(\n values, \"azure_cogs_key\", \"AZURE_COGS_KEY\"\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/speech2text.html"}
+{"id": "7752e2802b18-1", "text": ")\n azure_cogs_region = get_from_dict_or_env(\n values, \"azure_cogs_region\", \"AZURE_COGS_REGION\"\n )\n try:\n import azure.cognitiveservices.speech as speechsdk\n values[\"speech_config\"] = speechsdk.SpeechConfig(\n subscription=azure_cogs_key, region=azure_cogs_region\n )\n except ImportError:\n raise ImportError(\n \"azure-cognitiveservices-speech is not installed. \"\n \"Run `pip install azure-cognitiveservices-speech` to install.\"\n )\n return values\n def _continuous_recognize(self, speech_recognizer: Any) -> str:\n done = False\n text = \"\"\n def stop_cb(evt: Any) -> None:\n \"\"\"callback that stops continuous recognition\"\"\"\n speech_recognizer.stop_continuous_recognition_async()\n nonlocal done\n done = True\n def retrieve_cb(evt: Any) -> None:\n \"\"\"callback that retrieves the intermediate recognition results\"\"\"\n nonlocal text\n text += evt.result.text\n # retrieve text on recognized events\n speech_recognizer.recognized.connect(retrieve_cb)\n # stop continuous recognition on either session stopped or canceled events\n speech_recognizer.session_stopped.connect(stop_cb)\n speech_recognizer.canceled.connect(stop_cb)\n # Start continuous speech recognition\n speech_recognizer.start_continuous_recognition_async()\n while not done:\n time.sleep(0.5)\n return text\n def _speech2text(self, audio_path: str, speech_language: str) -> str:\n try:\n import azure.cognitiveservices.speech as speechsdk\n except ImportError:\n raise ImportError(\n \"azure-cognitiveservices-speech is not installed. \"\n \"Run `pip install azure-cognitiveservices-speech` to install.\"\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/speech2text.html"}
+{"id": "7752e2802b18-2", "text": "except ImportError:\n raise ImportError(\n \"azure-cognitiveservices-speech is not installed. \"\n \"Run `pip install azure-cognitiveservices-speech` to install.\"\n )\n audio_src_type = detect_file_src_type(audio_path)\n if audio_src_type == \"local\":\n audio_config = speechsdk.AudioConfig(filename=audio_path)\n elif audio_src_type == \"remote\":\n tmp_audio_path = download_audio_from_url(audio_path)\n audio_config = speechsdk.AudioConfig(filename=tmp_audio_path)\n else:\n raise ValueError(f\"Invalid audio path: {audio_path}\")\n self.speech_config.speech_recognition_language = speech_language\n speech_recognizer = speechsdk.SpeechRecognizer(self.speech_config, audio_config)\n return self._continuous_recognize(speech_recognizer)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n try:\n text = self._speech2text(query, self.speech_language)\n return text\n except Exception as e:\n raise RuntimeError(f\"Error while running AzureCogsSpeech2TextTool: {e}\")\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"AzureCogsSpeech2TextTool does not support async\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/speech2text.html"}
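Usage follows the same pattern as the other Azure tools; the audio URL below is a placeholder. Remote files are first fetched through `download_audio_from_url`, while local paths are handed to `speechsdk.AudioConfig` directly:

```python
import os

os.environ["AZURE_COGS_KEY"] = "<your-key>"        # placeholder
os.environ["AZURE_COGS_REGION"] = "<your-region>"  # placeholder

from langchain.tools.azure_cognitive_services import AzureCogsSpeech2TextTool

stt = AzureCogsSpeech2TextTool()
print(stt.run("https://example.com/sample.wav"))  # transcribed text
```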
+{"id": "ae81af94f2c4-0", "text": "Source code for langchain.tools.metaphor_search.tool\n\"\"\"Tool for the Metaphor search API.\"\"\"\nfrom typing import Dict, List, Optional, Union\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.metaphor_search import MetaphorSearchAPIWrapper\n[docs]class MetaphorSearchResults(BaseTool):\n \"\"\"Tool that has capability to query the Metaphor Search API and get back json.\"\"\"\n name = \"Metaphor Search Results JSON\"\n description = (\n \"A wrapper around Metaphor Search. \"\n \"Input should be a Metaphor-optimized query. \"\n \"Output is a JSON array of the query results\"\n )\n api_wrapper: MetaphorSearchAPIWrapper\n def _run(\n self,\n query: str,\n num_results: int,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> Union[List[Dict], str]:\n \"\"\"Use the tool.\"\"\"\n try:\n return self.api_wrapper.results(query, num_results)\n except Exception as e:\n return repr(e)\n async def _arun(\n self,\n query: str,\n num_results: int,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> Union[List[Dict], str]:\n \"\"\"Use the tool asynchronously.\"\"\"\n try:\n return await self.api_wrapper.results_async(query, num_results)\n except Exception as e:\n return repr(e)", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/metaphor_search/tool.html"}
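Because `_run` takes both `query` and `num_results`, the tool is naturally invoked with a dictionary input. A sketch, assuming `MetaphorSearchAPIWrapper` picks up its key from the `METAPHOR_API_KEY` environment variable:

```python
import os

os.environ["METAPHOR_API_KEY"] = "<your-key>"  # placeholder

from langchain.tools.metaphor_search.tool import MetaphorSearchResults
from langchain.utilities.metaphor_search import MetaphorSearchAPIWrapper

search = MetaphorSearchResults(api_wrapper=MetaphorSearchAPIWrapper())
print(search.run({"query": "interesting articles about AI agents", "num_results": 3}))
```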
+{"id": "d94f60a2c7e1-0", "text": "Source code for langchain.tools.steamship_image_generation.tool\n\"\"\"This tool allows agents to generate images using Steamship.\nSteamship offers access to different third party image generation APIs\nusing a single API key.\nToday the following models are supported:\n- Dall-E\n- Stable Diffusion\nTo use this tool, you must first set as environment variables:\n STEAMSHIP_API_KEY\n\"\"\"\nfrom __future__ import annotations\nfrom enum import Enum\nfrom typing import TYPE_CHECKING, Dict, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools import BaseTool\nfrom langchain.tools.steamship_image_generation.utils import make_image_public\nfrom langchain.utils import get_from_dict_or_env\nif TYPE_CHECKING:\n pass\nclass ModelName(str, Enum):\n \"\"\"Supported Image Models for generation.\"\"\"\n DALL_E = \"dall-e\"\n STABLE_DIFFUSION = \"stable-diffusion\"\nSUPPORTED_IMAGE_SIZES = {\n ModelName.DALL_E: (\"256x256\", \"512x512\", \"1024x1024\"),\n ModelName.STABLE_DIFFUSION: (\"512x512\", \"768x768\"),\n}\n[docs]class SteamshipImageGenerationTool(BaseTool):\n try:\n from steamship import Steamship\n except ImportError:\n pass\n \"\"\"Tool used to generate images from a text-prompt.\"\"\"\n model_name: ModelName\n size: Optional[str] = \"512x512\"\n steamship: Steamship\n return_urls: Optional[bool] = False\n name = \"GenerateImage\"\n description = (\n \"Useful for when you need to generate an image.\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/steamship_image_generation/tool.html"}
+{"id": "d94f60a2c7e1-1", "text": "description = (\n \"Useful for when you need to generate an image.\"\n \"Input: A detailed text-2-image prompt describing an image\"\n \"Output: the UUID of a generated image\"\n )\n @root_validator(pre=True)\n def validate_size(cls, values: Dict) -> Dict:\n if \"size\" in values:\n size = values[\"size\"]\n model_name = values[\"model_name\"]\n if size not in SUPPORTED_IMAGE_SIZES[model_name]:\n raise RuntimeError(f\"size {size} is not supported by {model_name}\")\n return values\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n steamship_api_key = get_from_dict_or_env(\n values, \"steamship_api_key\", \"STEAMSHIP_API_KEY\"\n )\n try:\n from steamship import Steamship\n except ImportError:\n raise ImportError(\n \"steamship is not installed. \"\n \"Please install it with `pip install steamship`\"\n )\n steamship = Steamship(\n api_key=steamship_api_key,\n )\n values[\"steamship\"] = steamship\n if \"steamship_api_key\" in values:\n del values[\"steamship_api_key\"]\n return values\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n image_generator = self.steamship.use_plugin(\n plugin_handle=self.model_name.value, config={\"n\": 1, \"size\": self.size}\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/steamship_image_generation/tool.html"}
+{"id": "d94f60a2c7e1-2", "text": ")\n task = image_generator.generate(text=query, append_output_to_file=True)\n task.wait()\n blocks = task.output.blocks\n if len(blocks) > 0:\n if self.return_urls:\n return make_image_public(self.steamship, blocks[0])\n else:\n return blocks[0].id\n raise RuntimeError(f\"[{self.name}] Tool unable to generate image!\")\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"GenerateImageTool does not support async\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/steamship_image_generation/tool.html"}
+{"id": "4685a0041336-0", "text": "Source code for langchain.tools.human.tool\n\"\"\"Tool for asking human input.\"\"\"\nfrom typing import Callable, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\ndef _print_func(text: str) -> None:\n print(\"\\n\")\n print(text)\n[docs]class HumanInputRun(BaseTool):\n \"\"\"Tool that adds the capability to ask user for input.\"\"\"\n name = \"Human\"\n description = (\n \"You can ask a human for guidance when you think you \"\n \"got stuck or you are not sure what to do next. \"\n \"The input should be a question for the human.\"\n )\n prompt_func: Callable[[str], None] = Field(default_factory=lambda: _print_func)\n input_func: Callable = Field(default_factory=lambda: input)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Human input tool.\"\"\"\n self.prompt_func(query)\n return self.input_func()\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Human tool asynchronously.\"\"\"\n raise NotImplementedError(\"Human tool does not support async\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/human/tool.html"}
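Since `input_func` is an ordinary pydantic field, it can be overridden to script the human's answer, which is convenient in tests. A minimal sketch:

```python
from langchain.tools.human.tool import HumanInputRun

def scripted_input() -> str:
    # Stand-in for a real person typing at the console.
    return "Try the staging database first."

human = HumanInputRun(input_func=scripted_input)
print(human.run("Which database should I query?"))  # -> "Try the staging database first."
```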
+{"id": "44fc3f2786fb-0", "text": "Source code for langchain.tools.brave_search.tool\nfrom __future__ import annotations\nfrom typing import Any, Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.brave_search import BraveSearchWrapper\n[docs]class BraveSearch(BaseTool):\n name = \"brave-search\"\n description = (\n \"a search engine. \"\n \"useful for when you need to answer questions about current events.\"\n \" input should be a search query.\"\n )\n search_wrapper: BraveSearchWrapper\n[docs] @classmethod\n def from_api_key(\n cls, api_key: str, search_kwargs: Optional[dict] = None, **kwargs: Any\n ) -> BraveSearch:\n wrapper = BraveSearchWrapper(api_key=api_key, search_kwargs=search_kwargs or {})\n return cls(search_wrapper=wrapper, **kwargs)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.search_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"BraveSearch does not support async\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/brave_search/tool.html"}
+{"id": "774445ac739c-0", "text": "Source code for langchain.tools.shell.tool\nimport asyncio\nimport platform\nimport warnings\nfrom typing import List, Optional, Type, Union\nfrom pydantic import BaseModel, Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.bash import BashProcess\nclass ShellInput(BaseModel):\n \"\"\"Commands for the Bash Shell tool.\"\"\"\n commands: Union[str, List[str]] = Field(\n ...,\n description=\"List of shell commands to run. Deserialized using json.loads\",\n )\n \"\"\"List of shell commands to run.\"\"\"\n @root_validator\n def _validate_commands(cls, values: dict) -> dict:\n \"\"\"Validate commands.\"\"\"\n # TODO: Add real validators\n commands = values.get(\"commands\")\n if not isinstance(commands, list):\n values[\"commands\"] = [commands]\n # Warn that the bash tool is not safe\n warnings.warn(\n \"The shell tool has no safeguards by default. Use at your own risk.\"\n )\n return values\ndef _get_default_bash_process() -> BashProcess:\n \"\"\"Create a default BashProcess that also returns error output.\"\"\"\n return BashProcess(return_err_output=True)\ndef _get_platform() -> str:\n \"\"\"Get platform.\"\"\"\n system = platform.system()\n if system == \"Darwin\":\n return \"MacOS\"\n return system\n[docs]class ShellTool(BaseTool):\n \"\"\"Tool to run shell commands.\"\"\"\n process: BashProcess = Field(default_factory=_get_default_bash_process)\n \"\"\"Bash process to run commands.\"\"\"\n name: str = \"terminal\"\n \"\"\"Name of tool.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/shell/tool.html"}
+{"id": "774445ac739c-1", "text": "name: str = \"terminal\"\n \"\"\"Name of tool.\"\"\"\n description: str = f\"Run shell commands on this {_get_platform()} machine.\"\n \"\"\"Description of tool.\"\"\"\n args_schema: Type[BaseModel] = ShellInput\n \"\"\"Schema for input arguments.\"\"\"\n def _run(\n self,\n commands: Union[str, List[str]],\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run commands and return final output.\"\"\"\n return self.process.run(commands)\n async def _arun(\n self,\n commands: Union[str, List[str]],\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run commands asynchronously and return final output.\"\"\"\n return await asyncio.get_event_loop().run_in_executor(\n None, self.process.run, commands\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/shell/tool.html"}
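A usage sketch for `ShellTool`. Because `ShellInput` accepts either a string or a list, both forms below work, and parsing the input triggers the safety warning from `_validate_commands`:

```python
from langchain.tools import ShellTool

shell = ShellTool()
print(shell.run({"commands": ["echo Hello from the shell tool", "pwd"]}))
print(shell.run({"commands": "uname -a"}))  # a single string is wrapped into a list
```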
+{"id": "7cca9a4641df-0", "text": "Source code for langchain.tools.vectorstore.tool\n\"\"\"Tools for interacting with vectorstores.\"\"\"\nimport json\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel, Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.chains import RetrievalQA, RetrievalQAWithSourcesChain\nfrom langchain.llms.openai import OpenAI\nfrom langchain.tools.base import BaseTool\nfrom langchain.vectorstores.base import VectorStore\nclass BaseVectorStoreTool(BaseModel):\n \"\"\"Base class for tools that use a VectorStore.\"\"\"\n vectorstore: VectorStore = Field(exclude=True)\n llm: BaseLanguageModel = Field(default_factory=lambda: OpenAI(temperature=0))\n class Config(BaseTool.Config):\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\ndef _create_description_from_template(values: Dict[str, Any]) -> Dict[str, Any]:\n values[\"description\"] = values[\"template\"].format(name=values[\"name\"])\n return values\n[docs]class VectorStoreQATool(BaseVectorStoreTool, BaseTool):\n \"\"\"Tool for the VectorDBQA chain. To be initialized with name and chain.\"\"\"\n[docs] @staticmethod\n def get_description(name: str, description: str) -> str:\n template: str = (\n \"Useful for when you need to answer questions about {name}. \"\n \"Whenever you need information about {description} \"\n \"you should ALWAYS use this. \"\n \"Input should be a fully formed question.\"\n )\n return template.format(name=name, description=description)\n def _run(\n self,\n query: str,", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/vectorstore/tool.html"}
+{"id": "7cca9a4641df-1", "text": "def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n chain = RetrievalQA.from_chain_type(\n self.llm, retriever=self.vectorstore.as_retriever()\n )\n return chain.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"VectorStoreQATool does not support async\")\n[docs]class VectorStoreQAWithSourcesTool(BaseVectorStoreTool, BaseTool):\n \"\"\"Tool for the VectorDBQAWithSources chain.\"\"\"\n[docs] @staticmethod\n def get_description(name: str, description: str) -> str:\n template: str = (\n \"Useful for when you need to answer questions about {name} and the sources \"\n \"used to construct the answer. \"\n \"Whenever you need information about {description} \"\n \"you should ALWAYS use this. \"\n \" Input should be a fully formed question. \"\n \"Output is a json serialized dictionary with keys `answer` and `sources`. \"\n \"Only use this tool if the user explicitly asks for sources.\"\n )\n return template.format(name=name, description=description)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n chain = RetrievalQAWithSourcesChain.from_chain_type(\n self.llm, retriever=self.vectorstore.as_retriever()\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/vectorstore/tool.html"}
+{"id": "7cca9a4641df-2", "text": "self.llm, retriever=self.vectorstore.as_retriever()\n )\n return json.dumps(chain({chain.question_key: query}, return_only_outputs=True))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"VectorStoreQAWithSourcesTool does not support async\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/vectorstore/tool.html"}
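A sketch of backing `VectorStoreQATool` with a small FAISS index. It assumes `OPENAI_API_KEY` is set (the `llm` field defaults to `OpenAI(temperature=0)`) and that `faiss-cpu` is installed; the indexed text is a placeholder:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.tools.vectorstore.tool import VectorStoreQATool
from langchain.vectorstores import FAISS

store = FAISS.from_texts(
    ["Returns are accepted within 30 days of delivery."],
    OpenAIEmbeddings(),
)
qa_tool = VectorStoreQATool(
    name="shipping policy",
    description=VectorStoreQATool.get_description(
        "shipping policy", "the company shipping policy"
    ),
    vectorstore=store,
)
print(qa_tool.run("How long do customers have to return items?"))
```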
+{"id": "b05ff50c4fa0-0", "text": "Source code for langchain.tools.youtube.search\n\"\"\"\nAdapted from https://github.com/venuv/langchain_yt_tools\nCustomYTSearchTool searches YouTube videos related to a person\nand returns a specified number of video URLs.\nInput to this tool should be a comma separated list,\n - the first part contains a person name\n - and the second (optional) a number that is the\n maximum number of video results to return\n \"\"\"\nimport json\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools import BaseTool\n[docs]class YouTubeSearchTool(BaseTool):\n name = \"YouTubeSearch\"\n description = (\n \"search for youtube videos associated with a person. \"\n \"the input to this tool should be a comma separated list, \"\n \"the first part contains a person name and the second a \"\n \"number that is the maximum number of video results \"\n \"to return aka num_results. the second part is optional\"\n )\n def _search(self, person: str, num_results: int) -> str:\n from youtube_search import YoutubeSearch\n results = YoutubeSearch(person, num_results).to_json()\n data = json.loads(results)\n url_suffix_list = [video[\"url_suffix\"] for video in data[\"videos\"]]\n return str(url_suffix_list)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n values = query.split(\",\")\n person = values[0]\n if len(values) > 1:\n num_results = int(values[1])\n else:", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/youtube/search.html"}
+{"id": "b05ff50c4fa0-1", "text": "num_results = int(values[1])\n else:\n num_results = 2\n return self._search(person, num_results)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"YouTubeSearchTool does not yet support async\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/youtube/search.html"}
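A sketch of the comma-separated input format parsed by `_run` above (the second field is optional and defaults to 2 results); it requires `pip install youtube_search`, and the person name is illustrative:

```python
from langchain.tools import YouTubeSearchTool

yt = YouTubeSearchTool()
print(yt.run("lex fridman,3"))  # up to three video URL suffixes
print(yt.run("lex fridman"))    # defaults to two results
```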
+{"id": "850ba2236f2c-0", "text": "Source code for langchain.tools.openweathermap.tool\n\"\"\"Tool for the OpenWeatherMap API.\"\"\"\nfrom typing import Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities import OpenWeatherMapAPIWrapper\n[docs]class OpenWeatherMapQueryRun(BaseTool):\n \"\"\"Tool that adds the capability to query using the OpenWeatherMap API.\"\"\"\n api_wrapper: OpenWeatherMapAPIWrapper = Field(\n default_factory=OpenWeatherMapAPIWrapper\n )\n name = \"OpenWeatherMap\"\n description = (\n \"A wrapper around OpenWeatherMap API. \"\n \"Useful for fetching current weather information for a specified location. \"\n \"Input should be a location string (e.g. London,GB).\"\n )\n def _run(\n self, location: str, run_manager: Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Use the OpenWeatherMap tool.\"\"\"\n return self.api_wrapper.run(location)\n async def _arun(\n self,\n location: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the OpenWeatherMap tool asynchronously.\"\"\"\n raise NotImplementedError(\"OpenWeatherMapQueryRun does not support async\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openweathermap/tool.html"}
+{"id": "dfa4518822ee-0", "text": "Source code for langchain.tools.google_places.tool\n\"\"\"Tool for the Google Places API.\"\"\"\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.google_places_api import GooglePlacesAPIWrapper\nclass GooglePlacesSchema(BaseModel):\n query: str = Field(..., description=\"Query for google maps\")\n[docs]class GooglePlacesTool(BaseTool):\n \"\"\"Tool that adds the capability to query the Google places API.\"\"\"\n name = \"Google Places\"\n description = (\n \"A wrapper around Google Places. \"\n \"Useful for when you need to validate or \"\n \"discover addresses from ambiguous text. \"\n \"Input should be a search query.\"\n )\n api_wrapper: GooglePlacesAPIWrapper = Field(default_factory=GooglePlacesAPIWrapper)\n args_schema: Type[BaseModel] = GooglePlacesSchema\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"GooglePlacesRun does not support async\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/google_places/tool.html"}
+{"id": "20b478c3588a-0", "text": "Source code for langchain.tools.zapier.tool\n\"\"\"## Zapier Natural Language Actions API\n\\\nFull docs here: https://nla.zapier.com/api/v1/docs\n**Zapier Natural Language Actions** gives you access to the 5k+ apps, 20k+ actions\non Zapier's platform through a natural language API interface.\nNLA supports apps like Gmail, Salesforce, Trello, Slack, Asana, HubSpot, Google Sheets,\nMicrosoft Teams, and thousands more apps: https://zapier.com/apps\nZapier NLA handles ALL the underlying API auth and translation from\nnatural language --> underlying API call --> return simplified output for LLMs\nThe key idea is you, or your users, expose a set of actions via an oauth-like setup\nwindow, which you can then query and execute via a REST API.\nNLA offers both API Key and OAuth for signing NLA API requests.\n1. Server-side (API Key): for quickly getting started, testing, and production scenarios\n where LangChain will only use actions exposed in the developer's Zapier account\n (and will use the developer's connected accounts on Zapier.com)\n2. User-facing (Oauth): for production scenarios where you are deploying an end-user\n facing application and LangChain needs access to end-user's exposed actions and\n connected accounts on Zapier.com\nThis quick start will focus on the server-side use case for brevity.\nReview [full docs](https://nla.zapier.com/api/v1/docs) or reach out to\nnla@zapier.com for user-facing oauth developer support.\nTypically, you'd use SequentialChain, here's a basic example:\n 1. Use NLA to find an email in Gmail\n 2. Use LLMChain to generate a draft reply to (1)", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/zapier/tool.html"}
+{"id": "20b478c3588a-1", "text": "2. Use LLMChain to generate a draft reply to (1)\n 3. Use NLA to send the draft reply (2) to someone in Slack via direct message\nIn code, below:\n```python\nimport os\n# get from https://platform.openai.com/\nos.environ[\"OPENAI_API_KEY\"] = os.environ.get(\"OPENAI_API_KEY\", \"\")\n# get from https://nla.zapier.com/demo/provider/debug\n# (under User Information, after logging in):\nos.environ[\"ZAPIER_NLA_API_KEY\"] = os.environ.get(\"ZAPIER_NLA_API_KEY\", \"\")\nfrom langchain.llms import OpenAI\nfrom langchain.agents import initialize_agent\nfrom langchain.agents.agent_toolkits import ZapierToolkit\nfrom langchain.utilities.zapier import ZapierNLAWrapper\n## step 0. expose gmail 'find email' and slack 'send channel message' actions\n# first go here, log in, expose (enable) the two actions:\n# https://nla.zapier.com/demo/start\n# -- for this example, can leave all fields \"Have AI guess\"\n# in an oauth scenario, you'd get your own id (instead of 'demo')\n# which you route your users through first\nllm = OpenAI(temperature=0)\nzapier = ZapierNLAWrapper()\n## To leverage a nla_oauth_access_token you may pass the value to the ZapierNLAWrapper\n## If you do this there is no need to initialize the ZAPIER_NLA_API_KEY env variable\n# zapier = ZapierNLAWrapper(zapier_nla_oauth_access_token=\"TOKEN_HERE\")\ntoolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)\nagent = initialize_agent(\n toolkit.get_tools(),", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/zapier/tool.html"}
+{"id": "20b478c3588a-2", "text": "agent = initialize_agent(\n toolkit.get_tools(),\n llm,\n agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n verbose=True\n)\nagent.run((\"Summarize the last email I received regarding Silicon Valley Bank. \"\n \"Send the summary to the #test-zapier channel in slack.\"))\n```\n\"\"\"\nfrom typing import Any, Dict, Optional\nfrom pydantic import Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.zapier.prompt import BASE_ZAPIER_TOOL_PROMPT\nfrom langchain.utilities.zapier import ZapierNLAWrapper\n[docs]class ZapierNLARunAction(BaseTool):\n \"\"\"\n Args:\n action_id: a specific action ID (from list actions) of the action to execute\n (the set api_key must be associated with the action owner)\n instructions: a natural language instruction string for using the action\n (eg. \"get the latest email from Mike Knoop\" for \"Gmail: find email\" action)\n params: a dict, optional. Any params provided will *override* AI guesses\n from `instructions` (see \"understanding the AI guessing flow\" here:\n https://nla.zapier.com/api/v1/docs)\n \"\"\"\n api_wrapper: ZapierNLAWrapper = Field(default_factory=ZapierNLAWrapper)\n action_id: str\n params: Optional[dict] = None\n base_prompt: str = BASE_ZAPIER_TOOL_PROMPT\n zapier_description: str\n params_schema: Dict[str, str] = Field(default_factory=dict)\n name = \"\"\n description = \"\"\n @root_validator", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/zapier/tool.html"}
+{"id": "20b478c3588a-3", "text": "name = \"\"\n description = \"\"\n @root_validator\n def set_name_description(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n zapier_description = values[\"zapier_description\"]\n params_schema = values[\"params_schema\"]\n if \"instructions\" in params_schema:\n del params_schema[\"instructions\"]\n # Ensure base prompt (if overrided) contains necessary input fields\n necessary_fields = {\"{zapier_description}\", \"{params}\"}\n if not all(field in values[\"base_prompt\"] for field in necessary_fields):\n raise ValueError(\n \"Your custom base Zapier prompt must contain input fields for \"\n \"{zapier_description} and {params}.\"\n )\n values[\"name\"] = zapier_description\n values[\"description\"] = values[\"base_prompt\"].format(\n zapier_description=zapier_description,\n params=str(list(params_schema.keys())),\n )\n return values\n def _run(\n self, instructions: str, run_manager: Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Use the Zapier NLA tool to return a list of all exposed user actions.\"\"\"\n return self.api_wrapper.run_as_str(self.action_id, instructions, self.params)\n async def _arun(\n self,\n _: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Zapier NLA tool to return a list of all exposed user actions.\"\"\"\n raise NotImplementedError(\"ZapierNLAListActions does not support async\")\nZapierNLARunAction.__doc__ = (\n ZapierNLAWrapper.run.__doc__ + ZapierNLARunAction.__doc__ # type: ignore\n)", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/zapier/tool.html"}
+{"id": "20b478c3588a-4", "text": ")\n# other useful actions\n[docs]class ZapierNLAListActions(BaseTool):\n \"\"\"\n Args:\n None\n \"\"\"\n name = \"Zapier NLA: List Actions\"\n description = BASE_ZAPIER_TOOL_PROMPT + (\n \"This tool returns a list of the user's exposed actions.\"\n )\n api_wrapper: ZapierNLAWrapper = Field(default_factory=ZapierNLAWrapper)\n def _run(\n self,\n _: str = \"\",\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Zapier NLA tool to return a list of all exposed user actions.\"\"\"\n return self.api_wrapper.list_as_str()\n async def _arun(\n self,\n _: str = \"\",\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Zapier NLA tool to return a list of all exposed user actions.\"\"\"\n raise NotImplementedError(\"ZapierNLAListActions does not support async\")\nZapierNLAListActions.__doc__ = (\n ZapierNLAWrapper.list.__doc__ + ZapierNLAListActions.__doc__ # type: ignore\n)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/zapier/tool.html"}
+{"id": "c84f6e5f236e-0", "text": "Source code for langchain.tools.bing_search.tool\n\"\"\"Tool for the Bing search API.\"\"\"\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.bing_search import BingSearchAPIWrapper\n[docs]class BingSearchRun(BaseTool):\n \"\"\"Tool that adds the capability to query the Bing search API.\"\"\"\n name = \"Bing Search\"\n description = (\n \"A wrapper around Bing Search. \"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query.\"\n )\n api_wrapper: BingSearchAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"BingSearchRun does not support async\")\n[docs]class BingSearchResults(BaseTool):\n \"\"\"Tool that has capability to query the Bing Search API and get back json.\"\"\"\n name = \"Bing Search Results JSON\"\n description = (\n \"A wrapper around Bing Search. \"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query. Output is a JSON array of the query results\"\n )\n num_results: int = 4\n api_wrapper: BingSearchAPIWrapper\n def _run(", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/bing_search/tool.html"}
+{"id": "c84f6e5f236e-1", "text": "api_wrapper: BingSearchAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return str(self.api_wrapper.results(query, self.num_results))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"BingSearchResults does not support async\")\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/bing_search/tool.html"}
+{"id": "3835ad2d6713-0", "text": "Source code for langchain.tools.google_serper.tool\n\"\"\"Tool for the Serper.dev Google Search API.\"\"\"\nfrom typing import Optional\nfrom pydantic.fields import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.google_serper import GoogleSerperAPIWrapper\n[docs]class GoogleSerperRun(BaseTool):\n \"\"\"Tool that adds the capability to query the Serper.dev Google search API.\"\"\"\n name = \"Google Serper\"\n description = (\n \"A low-cost Google Search API.\"\n \"Useful for when you need to answer questions about current events.\"\n \"Input should be a search query.\"\n )\n api_wrapper: GoogleSerperAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return str(self.api_wrapper.run(query))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n return (await self.api_wrapper.arun(query)).__str__()\n[docs]class GoogleSerperResults(BaseTool):\n \"\"\"Tool that has capability to query the Serper.dev Google Search API\n and get back json.\"\"\"\n name = \"Google Serrper Results JSON\"\n description = (\n \"A low-cost Google Search API.\"\n \"Useful for when you need to answer questions about current events.\"\n \"Input should be a search query. Output is a JSON object of the query results\"\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/google_serper/tool.html"}
+{"id": "3835ad2d6713-1", "text": ")\n api_wrapper: GoogleSerperAPIWrapper = Field(default_factory=GoogleSerperAPIWrapper)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return str(self.api_wrapper.results(query))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n return (await self.api_wrapper.aresults(query)).__str__()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/google_serper/tool.html"}
+{"id": "5bfc193c2798-0", "text": "Source code for langchain.tools.pubmed.tool\n\"\"\"Tool for the Pubmed API.\"\"\"\nfrom typing import Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.pupmed import PubMedAPIWrapper\n[docs]class PubmedQueryRun(BaseTool):\n \"\"\"Tool that adds the capability to search using the PubMed API.\"\"\"\n name = \"PubMed\"\n description = (\n \"A wrapper around PubMed.org \"\n \"Useful for when you need to answer questions about Physics, Mathematics, \"\n \"Computer Science, Quantitative Biology, Quantitative Finance, Statistics, \"\n \"Electrical Engineering, and Economics \"\n \"from scientific articles on PubMed.org. \"\n \"Input should be a search query.\"\n )\n api_wrapper: PubMedAPIWrapper = Field(default_factory=PubMedAPIWrapper)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Arxiv tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the PubMed tool asynchronously.\"\"\"\n raise NotImplementedError(\"PubMedAPIWrapper does not support async\")\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/pubmed/tool.html"}
+{"id": "ceb1f061c6ec-0", "text": "Source code for langchain.tools.file_management.write\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\nclass WriteFileInput(BaseModel):\n \"\"\"Input for WriteFileTool.\"\"\"\n file_path: str = Field(..., description=\"name of file\")\n text: str = Field(..., description=\"text to write to file\")\n append: bool = Field(\n default=False, description=\"Whether to append to an existing file.\"\n )\n[docs]class WriteFileTool(BaseFileToolMixin, BaseTool):\n name: str = \"write_file\"\n args_schema: Type[BaseModel] = WriteFileInput\n description: str = \"Write file to disk\"\n def _run(\n self,\n file_path: str,\n text: str,\n append: bool = False,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n write_path = self.get_relative_path(file_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(arg_name=\"file_path\", value=file_path)\n try:\n write_path.parent.mkdir(exist_ok=True, parents=False)\n mode = \"a\" if append else \"w\"\n with write_path.open(mode, encoding=\"utf-8\") as f:\n f.write(text)\n return f\"File written successfully to {file_path}.\"\n except Exception as e:\n return \"Error: \" + str(e)", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/file_management/write.html"}
+{"id": "ceb1f061c6ec-1", "text": "except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n file_path: str,\n text: str,\n append: bool = False,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/file_management/write.html"}
+{"id": "5ba4be7e513d-0", "text": "Source code for langchain.tools.file_management.list_dir\nimport os\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\nclass DirectoryListingInput(BaseModel):\n \"\"\"Input for ListDirectoryTool.\"\"\"\n dir_path: str = Field(default=\".\", description=\"Subdirectory to list.\")\n[docs]class ListDirectoryTool(BaseFileToolMixin, BaseTool):\n name: str = \"list_directory\"\n args_schema: Type[BaseModel] = DirectoryListingInput\n description: str = \"List files and directories in a specified folder\"\n def _run(\n self,\n dir_path: str = \".\",\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n dir_path_ = self.get_relative_path(dir_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(arg_name=\"dir_path\", value=dir_path)\n try:\n entries = os.listdir(dir_path_)\n if entries:\n return \"\\n\".join(entries)\n else:\n return f\"No files found in directory {dir_path}\"\n except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n dir_path: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/file_management/list_dir.html"}
+{"id": "5ba4be7e513d-1", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/file_management/list_dir.html"}
+{"id": "dc1fd371fa15-0", "text": "Source code for langchain.tools.file_management.delete\nimport os\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\nclass FileDeleteInput(BaseModel):\n \"\"\"Input for DeleteFileTool.\"\"\"\n file_path: str = Field(..., description=\"Path of the file to delete\")\n[docs]class DeleteFileTool(BaseFileToolMixin, BaseTool):\n name: str = \"file_delete\"\n args_schema: Type[BaseModel] = FileDeleteInput\n description: str = \"Delete a file\"\n def _run(\n self,\n file_path: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n file_path_ = self.get_relative_path(file_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(arg_name=\"file_path\", value=file_path)\n if not file_path_.exists():\n return f\"Error: no such file or directory: {file_path}\"\n try:\n os.remove(file_path_)\n return f\"File deleted successfully: {file_path}.\"\n except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n file_path: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/file_management/delete.html"}
+{"id": "dc1fd371fa15-1", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/file_management/delete.html"}
+{"id": "6e011418d2e1-0", "text": "Source code for langchain.tools.file_management.file_search\nimport fnmatch\nimport os\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\nclass FileSearchInput(BaseModel):\n \"\"\"Input for FileSearchTool.\"\"\"\n dir_path: str = Field(\n default=\".\",\n description=\"Subdirectory to search in.\",\n )\n pattern: str = Field(\n ...,\n description=\"Unix shell regex, where * matches everything.\",\n )\n[docs]class FileSearchTool(BaseFileToolMixin, BaseTool):\n name: str = \"file_search\"\n args_schema: Type[BaseModel] = FileSearchInput\n description: str = (\n \"Recursively search for files in a subdirectory that match the regex pattern\"\n )\n def _run(\n self,\n pattern: str,\n dir_path: str = \".\",\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n dir_path_ = self.get_relative_path(dir_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(arg_name=\"dir_path\", value=dir_path)\n matches = []\n try:\n for root, _, filenames in os.walk(dir_path_):\n for filename in fnmatch.filter(filenames, pattern):\n absolute_path = os.path.join(root, filename)\n relative_path = os.path.relpath(absolute_path, dir_path_)\n matches.append(relative_path)\n if matches:", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/file_management/file_search.html"}
+{"id": "6e011418d2e1-1", "text": "matches.append(relative_path)\n if matches:\n return \"\\n\".join(matches)\n else:\n return f\"No files found for pattern {pattern} in directory {dir_path}\"\n except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n dir_path: str,\n pattern: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/file_management/file_search.html"}
+{"id": "5763d28956e9-0", "text": "Source code for langchain.tools.file_management.read\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\nclass ReadFileInput(BaseModel):\n \"\"\"Input for ReadFileTool.\"\"\"\n file_path: str = Field(..., description=\"name of file\")\n[docs]class ReadFileTool(BaseFileToolMixin, BaseTool):\n name: str = \"read_file\"\n args_schema: Type[BaseModel] = ReadFileInput\n description: str = \"Read file from disk\"\n def _run(\n self,\n file_path: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n read_path = self.get_relative_path(file_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(arg_name=\"file_path\", value=file_path)\n if not read_path.exists():\n return f\"Error: no such file or directory: {file_path}\"\n try:\n with read_path.open(\"r\", encoding=\"utf-8\") as f:\n content = f.read()\n return content\n except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n file_path: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError\nBy Harrison Chase", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/file_management/read.html"}
+{"id": "5763d28956e9-1", "text": "# TODO: Add aiofiles method\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/file_management/read.html"}
+{"id": "a20400e9799c-0", "text": "Source code for langchain.tools.file_management.copy\nimport shutil\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\nclass FileCopyInput(BaseModel):\n \"\"\"Input for CopyFileTool.\"\"\"\n source_path: str = Field(..., description=\"Path of the file to copy\")\n destination_path: str = Field(..., description=\"Path to save the copied file\")\n[docs]class CopyFileTool(BaseFileToolMixin, BaseTool):\n name: str = \"copy_file\"\n args_schema: Type[BaseModel] = FileCopyInput\n description: str = \"Create a copy of a file in a specified location\"\n def _run(\n self,\n source_path: str,\n destination_path: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n source_path_ = self.get_relative_path(source_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(\n arg_name=\"source_path\", value=source_path\n )\n try:\n destination_path_ = self.get_relative_path(destination_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(\n arg_name=\"destination_path\", value=destination_path\n )\n try:\n shutil.copy2(source_path_, destination_path_, follow_symlinks=False)\n return f\"File copied successfully from {source_path} to {destination_path}.\"\n except Exception as e:", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/file_management/copy.html"}
+{"id": "a20400e9799c-1", "text": "except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n source_path: str,\n destination_path: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/file_management/copy.html"}
+{"id": "3d9e62e8ff9d-0", "text": "Source code for langchain.tools.file_management.move\nimport shutil\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\nclass FileMoveInput(BaseModel):\n \"\"\"Input for MoveFileTool.\"\"\"\n source_path: str = Field(..., description=\"Path of the file to move\")\n destination_path: str = Field(..., description=\"New path for the moved file\")\n[docs]class MoveFileTool(BaseFileToolMixin, BaseTool):\n name: str = \"move_file\"\n args_schema: Type[BaseModel] = FileMoveInput\n description: str = \"Move or rename a file from one location to another\"\n def _run(\n self,\n source_path: str,\n destination_path: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n source_path_ = self.get_relative_path(source_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(\n arg_name=\"source_path\", value=source_path\n )\n try:\n destination_path_ = self.get_relative_path(destination_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(\n arg_name=\"destination_path_\", value=destination_path_\n )\n if not source_path_.exists():\n return f\"Error: no such file or directory {source_path}\"\n try:\n # shutil.move expects str args in 3.8\n shutil.move(str(source_path_), destination_path_)", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/file_management/move.html"}
+{"id": "3d9e62e8ff9d-1", "text": "shutil.move(str(source_path_), destination_path_)\n return f\"File moved successfully from {source_path} to {destination_path}.\"\n except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n source_path: str,\n destination_path: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/file_management/move.html"}
+{"id": "f9439ebd008a-0", "text": "Source code for langchain.tools.gmail.search\nimport base64\nimport email\nfrom enum import Enum\nfrom typing import Any, Dict, List, Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.gmail.base import GmailBaseTool\nfrom langchain.tools.gmail.utils import clean_email_body\nclass Resource(str, Enum):\n THREADS = \"threads\"\n MESSAGES = \"messages\"\nclass SearchArgsSchema(BaseModel):\n # From https://support.google.com/mail/answer/7190?hl=en\n query: str = Field(\n ...,\n description=\"The Gmail query. Example filters include from:sender,\"\n \" to:recipient, subject:subject, -filtered_term,\"\n \" in:folder, is:important|read|starred, after:year/mo/date, \"\n \"before:year/mo/date, label:label_name\"\n ' \"exact phrase\".'\n \" Search newer/older than using d (day), m (month), and y (year): \"\n \"newer_than:2d, older_than:1y.\"\n \" Attachments with extension example: filename:pdf. Multiple term\"\n \" matching example: from:amy OR from:david.\",\n )\n resource: Resource = Field(\n default=Resource.MESSAGES,\n description=\"Whether to search for threads or messages.\",\n )\n max_results: int = Field(\n default=10,\n description=\"The maximum number of results to return.\",\n )\n[docs]class GmailSearch(GmailBaseTool):\n name: str = \"search_gmail\"\n description: str = (", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/gmail/search.html"}
+{"id": "f9439ebd008a-1", "text": "name: str = \"search_gmail\"\n description: str = (\n \"Use this tool to search for email messages or threads.\"\n \" The input must be a valid Gmail query.\"\n \" The output is a JSON list of the requested resource.\"\n )\n args_schema: Type[SearchArgsSchema] = SearchArgsSchema\n def _parse_threads(self, threads: List[Dict[str, Any]]) -> List[Dict[str, Any]]:\n # Add the thread message snippets to the thread results\n results = []\n for thread in threads:\n thread_id = thread[\"id\"]\n thread_data = (\n self.api_resource.users()\n .threads()\n .get(userId=\"me\", id=thread_id)\n .execute()\n )\n messages = thread_data[\"messages\"]\n thread[\"messages\"] = []\n for message in messages:\n snippet = message[\"snippet\"]\n thread[\"messages\"].append({\"snippet\": snippet, \"id\": message[\"id\"]})\n results.append(thread)\n return results\n def _parse_messages(self, messages: List[Dict[str, Any]]) -> List[Dict[str, Any]]:\n results = []\n for message in messages:\n message_id = message[\"id\"]\n message_data = (\n self.api_resource.users()\n .messages()\n .get(userId=\"me\", format=\"raw\", id=message_id)\n .execute()\n )\n raw_message = base64.urlsafe_b64decode(message_data[\"raw\"])\n email_msg = email.message_from_bytes(raw_message)\n subject = email_msg[\"Subject\"]\n sender = email_msg[\"From\"]\n message_body = email_msg.get_payload()\n body = clean_email_body(message_body)\n results.append(\n {", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/gmail/search.html"}
+{"id": "f9439ebd008a-2", "text": "body = clean_email_body(message_body)\n results.append(\n {\n \"id\": message[\"id\"],\n \"threadId\": message_data[\"threadId\"],\n \"snippet\": message_data[\"snippet\"],\n \"body\": body,\n \"subject\": subject,\n \"sender\": sender,\n }\n )\n return results\n def _run(\n self,\n query: str,\n resource: Resource = Resource.MESSAGES,\n max_results: int = 10,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> List[Dict[str, Any]]:\n \"\"\"Run the tool.\"\"\"\n results = (\n self.api_resource.users()\n .messages()\n .list(userId=\"me\", q=query, maxResults=max_results)\n .execute()\n .get(resource.value, [])\n )\n if resource == Resource.THREADS:\n return self._parse_threads(results)\n elif resource == Resource.MESSAGES:\n return self._parse_messages(results)\n else:\n raise NotImplementedError(f\"Resource of type {resource} not implemented.\")\n async def _arun(\n self,\n query: str,\n resource: Resource = Resource.MESSAGES,\n max_results: int = 10,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> List[Dict[str, Any]]:\n \"\"\"Run the tool.\"\"\"\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/gmail/search.html"}
+{"id": "be4b4899ede1-0", "text": "Source code for langchain.tools.gmail.create_draft\nimport base64\nfrom email.message import EmailMessage\nfrom typing import List, Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.gmail.base import GmailBaseTool\nclass CreateDraftSchema(BaseModel):\n message: str = Field(\n ...,\n description=\"The message to include in the draft.\",\n )\n to: List[str] = Field(\n ...,\n description=\"The list of recipients.\",\n )\n subject: str = Field(\n ...,\n description=\"The subject of the message.\",\n )\n cc: Optional[List[str]] = Field(\n None,\n description=\"The list of CC recipients.\",\n )\n bcc: Optional[List[str]] = Field(\n None,\n description=\"The list of BCC recipients.\",\n )\n[docs]class GmailCreateDraft(GmailBaseTool):\n name: str = \"create_gmail_draft\"\n description: str = (\n \"Use this tool to create a draft email with the provided message fields.\"\n )\n args_schema: Type[CreateDraftSchema] = CreateDraftSchema\n def _prepare_draft_message(\n self,\n message: str,\n to: List[str],\n subject: str,\n cc: Optional[List[str]] = None,\n bcc: Optional[List[str]] = None,\n ) -> dict:\n draft_message = EmailMessage()\n draft_message.set_content(message)\n draft_message[\"To\"] = \", \".join(to)\n draft_message[\"Subject\"] = subject\n if cc is not None:", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/gmail/create_draft.html"}
+{"id": "be4b4899ede1-1", "text": "draft_message[\"Subject\"] = subject\n if cc is not None:\n draft_message[\"Cc\"] = \", \".join(cc)\n if bcc is not None:\n draft_message[\"Bcc\"] = \", \".join(bcc)\n encoded_message = base64.urlsafe_b64encode(draft_message.as_bytes()).decode()\n return {\"message\": {\"raw\": encoded_message}}\n def _run(\n self,\n message: str,\n to: List[str],\n subject: str,\n cc: Optional[List[str]] = None,\n bcc: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n create_message = self._prepare_draft_message(message, to, subject, cc, bcc)\n draft = (\n self.api_resource.users()\n .drafts()\n .create(userId=\"me\", body=create_message)\n .execute()\n )\n output = f'Draft created. Draft Id: {draft[\"id\"]}'\n return output\n except Exception as e:\n raise Exception(f\"An error occurred: {e}\")\n async def _arun(\n self,\n message: str,\n to: List[str],\n subject: str,\n cc: Optional[List[str]] = None,\n bcc: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(f\"The tool {self.name} does not support async yet.\")\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/gmail/create_draft.html"}
+{"id": "d0abf429b933-0", "text": "Source code for langchain.tools.gmail.send_message\n\"\"\"Send Gmail messages.\"\"\"\nimport base64\nfrom email.mime.multipart import MIMEMultipart\nfrom email.mime.text import MIMEText\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.gmail.base import GmailBaseTool\nclass SendMessageSchema(BaseModel):\n message: str = Field(\n ...,\n description=\"The message to send.\",\n )\n to: List[str] = Field(\n ...,\n description=\"The list of recipients.\",\n )\n subject: str = Field(\n ...,\n description=\"The subject of the message.\",\n )\n cc: Optional[List[str]] = Field(\n None,\n description=\"The list of CC recipients.\",\n )\n bcc: Optional[List[str]] = Field(\n None,\n description=\"The list of BCC recipients.\",\n )\n[docs]class GmailSendMessage(GmailBaseTool):\n name: str = \"send_gmail_message\"\n description: str = (\n \"Use this tool to send email messages.\" \" The input is the message, recipents\"\n )\n def _prepare_message(\n self,\n message: str,\n to: List[str],\n subject: str,\n cc: Optional[List[str]] = None,\n bcc: Optional[List[str]] = None,\n ) -> Dict[str, Any]:\n \"\"\"Create a message for an email.\"\"\"\n mime_message = MIMEMultipart()\n mime_message.attach(MIMEText(message, \"html\"))\n mime_message[\"To\"] = \", \".join(to)", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/gmail/send_message.html"}
+{"id": "d0abf429b933-1", "text": "mime_message[\"To\"] = \", \".join(to)\n mime_message[\"Subject\"] = subject\n if cc is not None:\n mime_message[\"Cc\"] = \", \".join(cc)\n if bcc is not None:\n mime_message[\"Bcc\"] = \", \".join(bcc)\n encoded_message = base64.urlsafe_b64encode(mime_message.as_bytes()).decode()\n return {\"raw\": encoded_message}\n def _run(\n self,\n message: str,\n to: List[str],\n subject: str,\n cc: Optional[List[str]] = None,\n bcc: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool.\"\"\"\n try:\n create_message = self._prepare_message(message, to, subject, cc=cc, bcc=bcc)\n send_message = (\n self.api_resource.users()\n .messages()\n .send(userId=\"me\", body=create_message)\n )\n sent_message = send_message.execute()\n return f'Message sent. Message Id: {sent_message[\"id\"]}'\n except Exception as error:\n raise Exception(f\"An error occurred: {error}\")\n async def _arun(\n self,\n message: str,\n to: List[str],\n subject: str,\n cc: Optional[List[str]] = None,\n bcc: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool asynchronously.\"\"\"\n raise NotImplementedError(f\"The tool {self.name} does not support async yet.\")\nBy Harrison Chase", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/gmail/send_message.html"}
+{"id": "d0abf429b933-2", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/gmail/send_message.html"}
+{"id": "64ab4a715eb4-0", "text": "Source code for langchain.tools.gmail.get_message\nimport base64\nimport email\nfrom typing import Dict, Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.gmail.base import GmailBaseTool\nfrom langchain.tools.gmail.utils import clean_email_body\nclass SearchArgsSchema(BaseModel):\n message_id: str = Field(\n ...,\n description=\"The unique ID of the email message, retrieved from a search.\",\n )\n[docs]class GmailGetMessage(GmailBaseTool):\n name: str = \"get_gmail_message\"\n description: str = (\n \"Use this tool to fetch an email by message ID.\"\n \" Returns the thread ID, snipet, body, subject, and sender.\"\n )\n args_schema: Type[SearchArgsSchema] = SearchArgsSchema\n def _run(\n self,\n message_id: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> Dict:\n \"\"\"Run the tool.\"\"\"\n query = (\n self.api_resource.users()\n .messages()\n .get(userId=\"me\", format=\"raw\", id=message_id)\n )\n message_data = query.execute()\n raw_message = base64.urlsafe_b64decode(message_data[\"raw\"])\n email_msg = email.message_from_bytes(raw_message)\n subject = email_msg[\"Subject\"]\n sender = email_msg[\"From\"]\n message_body = email_msg.get_payload()\n body = clean_email_body(message_body)\n return {\n \"id\": message_id,\n \"threadId\": message_data[\"threadId\"],\n \"snippet\": message_data[\"snippet\"],", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/gmail/get_message.html"}
+{"id": "64ab4a715eb4-1", "text": "\"snippet\": message_data[\"snippet\"],\n \"body\": body,\n \"subject\": subject,\n \"sender\": sender,\n }\n async def _arun(\n self,\n message_id: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> Dict:\n \"\"\"Run the tool.\"\"\"\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/gmail/get_message.html"}
+{"id": "61faa6f25fbb-0", "text": "Source code for langchain.tools.gmail.get_thread\nfrom typing import Dict, Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.gmail.base import GmailBaseTool\nclass GetThreadSchema(BaseModel):\n # From https://support.google.com/mail/answer/7190?hl=en\n thread_id: str = Field(\n ...,\n description=\"The thread ID.\",\n )\n[docs]class GmailGetThread(GmailBaseTool):\n name: str = \"get_gmail_thread\"\n description: str = (\n \"Use this tool to search for email messages.\"\n \" The input must be a valid Gmail query.\"\n \" The output is a JSON list of messages.\"\n )\n args_schema: Type[GetThreadSchema] = GetThreadSchema\n def _run(\n self,\n thread_id: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> Dict:\n \"\"\"Run the tool.\"\"\"\n query = self.api_resource.users().threads().get(userId=\"me\", id=thread_id)\n thread_data = query.execute()\n if not isinstance(thread_data, dict):\n raise ValueError(\"The output of the query must be a list.\")\n messages = thread_data[\"messages\"]\n thread_data[\"messages\"] = []\n keys_to_keep = [\"id\", \"snippet\", \"snippet\"]\n # TODO: Parse body.\n for message in messages:\n thread_data[\"messages\"].append(\n {k: message[k] for k in keys_to_keep if k in message}\n )\n return thread_data\n async def _arun(\n self,", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/gmail/get_thread.html"}
+{"id": "61faa6f25fbb-1", "text": ")\n return thread_data\n async def _arun(\n self,\n thread_id: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> Dict:\n \"\"\"Run the tool.\"\"\"\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/gmail/get_thread.html"}
+{"id": "956ee536c225-0", "text": "Source code for langchain.tools.wolfram_alpha.tool\n\"\"\"Tool for the Wolfram Alpha API.\"\"\"\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper\n[docs]class WolframAlphaQueryRun(BaseTool):\n \"\"\"Tool that adds the capability to query using the Wolfram Alpha SDK.\"\"\"\n name = \"Wolfram Alpha\"\n description = (\n \"A wrapper around Wolfram Alpha. \"\n \"Useful for when you need to answer questions about Math, \"\n \"Science, Technology, Culture, Society and Everyday Life. \"\n \"Input should be a search query.\"\n )\n api_wrapper: WolframAlphaAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the WolframAlpha tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the WolframAlpha tool asynchronously.\"\"\"\n raise NotImplementedError(\"WolframAlphaQueryRun does not support async\")\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/wolfram_alpha/tool.html"}
+{"id": "423e1e69a1d3-0", "text": "Source code for langchain.tools.google_search.tool\n\"\"\"Tool for the Google search API.\"\"\"\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.google_search import GoogleSearchAPIWrapper\n[docs]class GoogleSearchRun(BaseTool):\n \"\"\"Tool that adds the capability to query the Google search API.\"\"\"\n name = \"Google Search\"\n description = (\n \"A wrapper around Google Search. \"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query.\"\n )\n api_wrapper: GoogleSearchAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"GoogleSearchRun does not support async\")\n[docs]class GoogleSearchResults(BaseTool):\n \"\"\"Tool that has capability to query the Google Search API and get back json.\"\"\"\n name = \"Google Search Results JSON\"\n description = (\n \"A wrapper around Google Search. \"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query. Output is a JSON array of the query results\"\n )\n num_results: int = 4\n api_wrapper: GoogleSearchAPIWrapper\n def _run(\n self,", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/google_search/tool.html"}
+{"id": "423e1e69a1d3-1", "text": "api_wrapper: GoogleSearchAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return str(self.api_wrapper.results(query, self.num_results))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"GoogleSearchRun does not support async\")\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/google_search/tool.html"}
+{"id": "789080bc3878-0", "text": "Source code for langchain.tools.openapi.utils.api_models\n\"\"\"Pydantic models for parsing an OpenAPI spec.\"\"\"\nimport logging\nfrom enum import Enum\nfrom typing import Any, Dict, List, Optional, Sequence, Tuple, Type, Union\nfrom openapi_schema_pydantic import MediaType, Parameter, Reference, RequestBody, Schema\nfrom pydantic import BaseModel, Field\nfrom langchain.tools.openapi.utils.openapi_utils import HTTPVerb, OpenAPISpec\nlogger = logging.getLogger(__name__)\nPRIMITIVE_TYPES = {\n \"integer\": int,\n \"number\": float,\n \"string\": str,\n \"boolean\": bool,\n \"array\": List,\n \"object\": Dict,\n \"null\": None,\n}\n# See https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.1.0.md#parameterIn\n# for more info.\nclass APIPropertyLocation(Enum):\n \"\"\"The location of the property.\"\"\"\n QUERY = \"query\"\n PATH = \"path\"\n HEADER = \"header\"\n COOKIE = \"cookie\" # Not yet supported\n @classmethod\n def from_str(cls, location: str) -> \"APIPropertyLocation\":\n \"\"\"Parse an APIPropertyLocation.\"\"\"\n try:\n return cls(location)\n except ValueError:\n raise ValueError(\n f\"Invalid APIPropertyLocation. Valid values are {cls.__members__}\"\n )\n_SUPPORTED_MEDIA_TYPES = (\"application/json\",)\nSUPPORTED_LOCATIONS = {\n APIPropertyLocation.QUERY,\n APIPropertyLocation.PATH,\n}\nINVALID_LOCATION_TEMPL = (\n 'Unsupported APIPropertyLocation \"{location}\"'\n \" for parameter {name}. \"\n + f\"Valid values are {[loc.value for loc in SUPPORTED_LOCATIONS]}\"\n)", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"}
+{"id": "789080bc3878-1", "text": ")\nSCHEMA_TYPE = Union[str, Type, tuple, None, Enum]\nclass APIPropertyBase(BaseModel):\n \"\"\"Base model for an API property.\"\"\"\n # The name of the parameter is required and is case sensitive.\n # If \"in\" is \"path\", the \"name\" field must correspond to a template expression\n # within the path field in the Paths Object.\n # If \"in\" is \"header\" and the \"name\" field is \"Accept\", \"Content-Type\",\n # or \"Authorization\", the parameter definition is ignored.\n # For all other cases, the \"name\" corresponds to the parameter\n # name used by the \"in\" property.\n name: str = Field(alias=\"name\")\n \"\"\"The name of the property.\"\"\"\n required: bool = Field(alias=\"required\")\n \"\"\"Whether the property is required.\"\"\"\n type: SCHEMA_TYPE = Field(alias=\"type\")\n \"\"\"The type of the property.\n \n Either a primitive type, a component/parameter type,\n or an array or 'object' (dict) of the above.\"\"\"\n default: Optional[Any] = Field(alias=\"default\", default=None)\n \"\"\"The default value of the property.\"\"\"\n description: Optional[str] = Field(alias=\"description\", default=None)\n \"\"\"The description of the property.\"\"\"\nclass APIProperty(APIPropertyBase):\n \"\"\"A model for a property in the query, path, header, or cookie params.\"\"\"\n location: APIPropertyLocation = Field(alias=\"location\")\n \"\"\"The path/how it's being passed to the endpoint.\"\"\"\n @staticmethod\n def _cast_schema_list_type(schema: Schema) -> Optional[Union[str, Tuple[str, ...]]]:\n type_ = schema.type\n if not isinstance(type_, list):\n return type_", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"}
+{"id": "789080bc3878-2", "text": "if not isinstance(type_, list):\n return type_\n else:\n return tuple(type_)\n @staticmethod\n def _get_schema_type_for_enum(parameter: Parameter, schema: Schema) -> Enum:\n \"\"\"Get the schema type when the parameter is an enum.\"\"\"\n param_name = f\"{parameter.name}Enum\"\n return Enum(param_name, {str(v): v for v in schema.enum})\n @staticmethod\n def _get_schema_type_for_array(\n schema: Schema,\n ) -> Optional[Union[str, Tuple[str, ...]]]:\n items = schema.items\n if isinstance(items, Schema):\n schema_type = APIProperty._cast_schema_list_type(items)\n elif isinstance(items, Reference):\n ref_name = items.ref.split(\"/\")[-1]\n schema_type = ref_name # TODO: Add ref definitions to make his valid\n else:\n raise ValueError(f\"Unsupported array items: {items}\")\n if isinstance(schema_type, str):\n # TODO: recurse\n schema_type = (schema_type,)\n return schema_type\n @staticmethod\n def _get_schema_type(parameter: Parameter, schema: Optional[Schema]) -> SCHEMA_TYPE:\n if schema is None:\n return None\n schema_type: SCHEMA_TYPE = APIProperty._cast_schema_list_type(schema)\n if schema_type == \"array\":\n schema_type = APIProperty._get_schema_type_for_array(schema)\n elif schema_type == \"object\":\n # TODO: Resolve array and object types to components.\n raise NotImplementedError(\"Objects not yet supported\")\n elif schema_type in PRIMITIVE_TYPES:\n if schema.enum:\n schema_type = APIProperty._get_schema_type_for_enum(parameter, schema)\n else:\n # Directly use the primitive type\n pass", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"}
+{"id": "789080bc3878-3", "text": "else:\n # Directly use the primitive type\n pass\n else:\n raise NotImplementedError(f\"Unsupported type: {schema_type}\")\n return schema_type\n @staticmethod\n def _validate_location(location: APIPropertyLocation, name: str) -> None:\n if location not in SUPPORTED_LOCATIONS:\n raise NotImplementedError(\n INVALID_LOCATION_TEMPL.format(location=location, name=name)\n )\n @staticmethod\n def _validate_content(content: Optional[Dict[str, MediaType]]) -> None:\n if content:\n raise ValueError(\n \"API Properties with media content not supported. \"\n \"Media content only supported within APIRequestBodyProperty's\"\n )\n @staticmethod\n def _get_schema(parameter: Parameter, spec: OpenAPISpec) -> Optional[Schema]:\n schema = parameter.param_schema\n if isinstance(schema, Reference):\n schema = spec.get_referenced_schema(schema)\n elif schema is None:\n return None\n elif not isinstance(schema, Schema):\n raise ValueError(f\"Error dereferencing schema: {schema}\")\n return schema\n @staticmethod\n def is_supported_location(location: str) -> bool:\n \"\"\"Return whether the provided location is supported.\"\"\"\n try:\n return APIPropertyLocation.from_str(location) in SUPPORTED_LOCATIONS\n except ValueError:\n return False\n @classmethod\n def from_parameter(cls, parameter: Parameter, spec: OpenAPISpec) -> \"APIProperty\":\n \"\"\"Instantiate from an OpenAPI Parameter.\"\"\"\n location = APIPropertyLocation.from_str(parameter.param_in)\n cls._validate_location(\n location,\n parameter.name,\n )\n cls._validate_content(parameter.content)\n schema = cls._get_schema(parameter, spec)", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"}
+{"id": "789080bc3878-4", "text": "schema = cls._get_schema(parameter, spec)\n schema_type = cls._get_schema_type(parameter, schema)\n default_val = schema.default if schema is not None else None\n return cls(\n name=parameter.name,\n location=location,\n default=default_val,\n description=parameter.description,\n required=parameter.required,\n type=schema_type,\n )\nclass APIRequestBodyProperty(APIPropertyBase):\n \"\"\"A model for a request body property.\"\"\"\n properties: List[\"APIRequestBodyProperty\"] = Field(alias=\"properties\")\n \"\"\"The sub-properties of the property.\"\"\"\n # This is useful for handling nested property cycles.\n # We can define separate types in that case.\n references_used: List[str] = Field(alias=\"references_used\")\n \"\"\"The references used by the property.\"\"\"\n @classmethod\n def _process_object_schema(\n cls, schema: Schema, spec: OpenAPISpec, references_used: List[str]\n ) -> Tuple[Union[str, List[str], None], List[\"APIRequestBodyProperty\"]]:\n properties = []\n required_props = schema.required or []\n if schema.properties is None:\n raise ValueError(\n f\"No properties found when processing object schema: {schema}\"\n )\n for prop_name, prop_schema in schema.properties.items():\n if isinstance(prop_schema, Reference):\n ref_name = prop_schema.ref.split(\"/\")[-1]\n if ref_name not in references_used:\n references_used.append(ref_name)\n prop_schema = spec.get_referenced_schema(prop_schema)\n else:\n continue\n properties.append(\n cls.from_schema(\n schema=prop_schema,\n name=prop_name,\n required=prop_name in required_props,\n spec=spec,", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"}
+{"id": "789080bc3878-5", "text": "required=prop_name in required_props,\n spec=spec,\n references_used=references_used,\n )\n )\n return schema.type, properties\n @classmethod\n def _process_array_schema(\n cls, schema: Schema, name: str, spec: OpenAPISpec, references_used: List[str]\n ) -> str:\n items = schema.items\n if items is not None:\n if isinstance(items, Reference):\n ref_name = items.ref.split(\"/\")[-1]\n if ref_name not in references_used:\n references_used.append(ref_name)\n items = spec.get_referenced_schema(items)\n else:\n pass\n return f\"Array<{ref_name}>\"\n else:\n pass\n if isinstance(items, Schema):\n array_type = cls.from_schema(\n schema=items,\n name=f\"{name}Item\",\n required=True, # TODO: Add required\n spec=spec,\n references_used=references_used,\n )\n return f\"Array<{array_type.type}>\"\n return \"array\"\n @classmethod\n def from_schema(\n cls,\n schema: Schema,\n name: str,\n required: bool,\n spec: OpenAPISpec,\n references_used: Optional[List[str]] = None,\n ) -> \"APIRequestBodyProperty\":\n \"\"\"Recursively populate from an OpenAPI Schema.\"\"\"\n if references_used is None:\n references_used = []\n schema_type = schema.type\n properties: List[APIRequestBodyProperty] = []\n if schema_type == \"object\" and schema.properties:\n schema_type, properties = cls._process_object_schema(\n schema, spec, references_used\n )\n elif schema_type == \"array\":", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"}
+{"id": "789080bc3878-6", "text": "schema, spec, references_used\n )\n elif schema_type == \"array\":\n schema_type = cls._process_array_schema(schema, name, spec, references_used)\n elif schema_type in PRIMITIVE_TYPES:\n # Use the primitive type directly\n pass\n elif schema_type is None:\n # No typing specified/parsed. WIll map to 'any'\n pass\n else:\n raise ValueError(f\"Unsupported type: {schema_type}\")\n return cls(\n name=name,\n required=required,\n type=schema_type,\n default=schema.default,\n description=schema.description,\n properties=properties,\n references_used=references_used,\n )\nclass APIRequestBody(BaseModel):\n \"\"\"A model for a request body.\"\"\"\n description: Optional[str] = Field(alias=\"description\")\n \"\"\"The description of the request body.\"\"\"\n properties: List[APIRequestBodyProperty] = Field(alias=\"properties\")\n # E.g., application/json - we only support JSON at the moment.\n media_type: str = Field(alias=\"media_type\")\n \"\"\"The media type of the request body.\"\"\"\n @classmethod\n def _process_supported_media_type(\n cls,\n media_type_obj: MediaType,\n spec: OpenAPISpec,\n ) -> List[APIRequestBodyProperty]:\n \"\"\"Process the media type of the request body.\"\"\"\n references_used = []\n schema = media_type_obj.media_type_schema\n if isinstance(schema, Reference):\n references_used.append(schema.ref.split(\"/\")[-1])\n schema = spec.get_referenced_schema(schema)\n if schema is None:\n raise ValueError(\n f\"Could not resolve schema for media type: {media_type_obj}\"\n )\n api_request_body_properties = []", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"}
+{"id": "789080bc3878-7", "text": ")\n api_request_body_properties = []\n required_properties = schema.required or []\n if schema.type == \"object\" and schema.properties:\n for prop_name, prop_schema in schema.properties.items():\n if isinstance(prop_schema, Reference):\n prop_schema = spec.get_referenced_schema(prop_schema)\n api_request_body_properties.append(\n APIRequestBodyProperty.from_schema(\n schema=prop_schema,\n name=prop_name,\n required=prop_name in required_properties,\n spec=spec,\n )\n )\n else:\n api_request_body_properties.append(\n APIRequestBodyProperty(\n name=\"body\",\n required=True,\n type=schema.type,\n default=schema.default,\n description=schema.description,\n properties=[],\n references_used=references_used,\n )\n )\n return api_request_body_properties\n @classmethod\n def from_request_body(\n cls, request_body: RequestBody, spec: OpenAPISpec\n ) -> \"APIRequestBody\":\n \"\"\"Instantiate from an OpenAPI RequestBody.\"\"\"\n properties = []\n for media_type, media_type_obj in request_body.content.items():\n if media_type not in _SUPPORTED_MEDIA_TYPES:\n continue\n api_request_body_properties = cls._process_supported_media_type(\n media_type_obj,\n spec,\n )\n properties.extend(api_request_body_properties)\n return cls(\n description=request_body.description,\n properties=properties,\n media_type=media_type,\n )\n[docs]class APIOperation(BaseModel):\n \"\"\"A model for a single API operation.\"\"\"\n operation_id: str = Field(alias=\"operation_id\")\n \"\"\"The unique identifier of the operation.\"\"\"\n description: Optional[str] = Field(alias=\"description\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"}
+{"id": "789080bc3878-8", "text": "description: Optional[str] = Field(alias=\"description\")\n \"\"\"The description of the operation.\"\"\"\n base_url: str = Field(alias=\"base_url\")\n \"\"\"The base URL of the operation.\"\"\"\n path: str = Field(alias=\"path\")\n \"\"\"The path of the operation.\"\"\"\n method: HTTPVerb = Field(alias=\"method\")\n \"\"\"The HTTP method of the operation.\"\"\"\n properties: Sequence[APIProperty] = Field(alias=\"properties\")\n # TODO: Add parse in used components to be able to specify what type of\n # referenced object it is.\n # \"\"\"The properties of the operation.\"\"\"\n # components: Dict[str, BaseModel] = Field(alias=\"components\")\n request_body: Optional[APIRequestBody] = Field(alias=\"request_body\")\n \"\"\"The request body of the operation.\"\"\"\n @staticmethod\n def _get_properties_from_parameters(\n parameters: List[Parameter], spec: OpenAPISpec\n ) -> List[APIProperty]:\n \"\"\"Get the properties of the operation.\"\"\"\n properties = []\n for param in parameters:\n if APIProperty.is_supported_location(param.param_in):\n properties.append(APIProperty.from_parameter(param, spec))\n elif param.required:\n raise ValueError(\n INVALID_LOCATION_TEMPL.format(\n location=param.param_in, name=param.name\n )\n )\n else:\n logger.warning(\n INVALID_LOCATION_TEMPL.format(\n location=param.param_in, name=param.name\n )\n + \" Ignoring optional parameter\"\n )\n pass\n return properties\n[docs] @classmethod\n def from_openapi_url(\n cls,\n spec_url: str,\n path: str,\n method: str,\n ) -> \"APIOperation\":", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"}
+{"id": "789080bc3878-9", "text": "path: str,\n method: str,\n ) -> \"APIOperation\":\n \"\"\"Create an APIOperation from an OpenAPI URL.\"\"\"\n spec = OpenAPISpec.from_url(spec_url)\n return cls.from_openapi_spec(spec, path, method)\n[docs] @classmethod\n def from_openapi_spec(\n cls,\n spec: OpenAPISpec,\n path: str,\n method: str,\n ) -> \"APIOperation\":\n \"\"\"Create an APIOperation from an OpenAPI spec.\"\"\"\n operation = spec.get_operation(path, method)\n parameters = spec.get_parameters_for_operation(operation)\n properties = cls._get_properties_from_parameters(parameters, spec)\n operation_id = OpenAPISpec.get_cleaned_operation_id(operation, path, method)\n request_body = spec.get_request_body_for_operation(operation)\n api_request_body = (\n APIRequestBody.from_request_body(request_body, spec)\n if request_body is not None\n else None\n )\n description = operation.description or operation.summary\n if not description and spec.paths is not None:\n description = spec.paths[path].description or spec.paths[path].summary\n return cls(\n operation_id=operation_id,\n description=description,\n base_url=spec.base_url,\n path=path,\n method=method,\n properties=properties,\n request_body=api_request_body,\n )\n[docs] @staticmethod\n def ts_type_from_python(type_: SCHEMA_TYPE) -> str:\n if type_ is None:\n # TODO: Handle Nones better. These often result when\n # parsing specs that are < v3\n return \"any\"\n elif isinstance(type_, str):\n return {\n \"str\": \"string\",", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"}
+{"id": "789080bc3878-10", "text": "elif isinstance(type_, str):\n return {\n \"str\": \"string\",\n \"integer\": \"number\",\n \"float\": \"number\",\n \"date-time\": \"string\",\n }.get(type_, type_)\n elif isinstance(type_, tuple):\n return f\"Array<{APIOperation.ts_type_from_python(type_[0])}>\"\n elif isinstance(type_, type) and issubclass(type_, Enum):\n return \" | \".join([f\"'{e.value}'\" for e in type_])\n else:\n return str(type_)\n def _format_nested_properties(\n self, properties: List[APIRequestBodyProperty], indent: int = 2\n ) -> str:\n \"\"\"Format nested properties.\"\"\"\n formatted_props = []\n for prop in properties:\n prop_name = prop.name\n prop_type = self.ts_type_from_python(prop.type)\n prop_required = \"\" if prop.required else \"?\"\n prop_desc = f\"/* {prop.description} */\" if prop.description else \"\"\n if prop.properties:\n nested_props = self._format_nested_properties(\n prop.properties, indent + 2\n )\n prop_type = f\"{{\\n{nested_props}\\n{' ' * indent}}}\"\n formatted_props.append(\n f\"{prop_desc}\\n{' ' * indent}{prop_name}{prop_required}: {prop_type},\"\n )\n return \"\\n\".join(formatted_props)\n[docs] def to_typescript(self) -> str:\n \"\"\"Get typescript string representation of the operation.\"\"\"\n operation_name = self.operation_id\n params = []\n if self.request_body:\n formatted_request_body_props = self._format_nested_properties(\n self.request_body.properties\n )\n params.append(formatted_request_body_props)", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"}
+{"id": "789080bc3878-11", "text": "self.request_body.properties\n )\n params.append(formatted_request_body_props)\n for prop in self.properties:\n prop_name = prop.name\n prop_type = self.ts_type_from_python(prop.type)\n prop_required = \"\" if prop.required else \"?\"\n prop_desc = f\"/* {prop.description} */\" if prop.description else \"\"\n params.append(f\"{prop_desc}\\n\\t\\t{prop_name}{prop_required}: {prop_type},\")\n formatted_params = \"\\n\".join(params).strip()\n description_str = f\"/* {self.description} */\" if self.description else \"\"\n typescript_definition = f\"\"\"\n{description_str}\ntype {operation_name} = (_: {{\n{formatted_params}\n}}) => any;\n\"\"\"\n return typescript_definition.strip()\n @property\n def query_params(self) -> List[str]:\n return [\n property.name\n for property in self.properties\n if property.location == APIPropertyLocation.QUERY\n ]\n @property\n def path_params(self) -> List[str]:\n return [\n property.name\n for property in self.properties\n if property.location == APIPropertyLocation.PATH\n ]\n @property\n def body_params(self) -> List[str]:\n if self.request_body is None:\n return []\n return [prop.name for prop in self.request_body.properties]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"}
+{"id": "12f178f7435e-0", "text": "Source code for langchain.tools.openapi.utils.openapi_utils\n\"\"\"Utility functions for parsing an OpenAPI spec.\"\"\"\nimport copy\nimport json\nimport logging\nimport re\nfrom enum import Enum\nfrom pathlib import Path\nfrom typing import Dict, List, Optional, Union\nimport requests\nimport yaml\nfrom openapi_schema_pydantic import (\n Components,\n OpenAPI,\n Operation,\n Parameter,\n PathItem,\n Paths,\n Reference,\n RequestBody,\n Schema,\n)\nfrom pydantic import ValidationError\nlogger = logging.getLogger(__name__)\nclass HTTPVerb(str, Enum):\n \"\"\"HTTP verbs.\"\"\"\n GET = \"get\"\n PUT = \"put\"\n POST = \"post\"\n DELETE = \"delete\"\n OPTIONS = \"options\"\n HEAD = \"head\"\n PATCH = \"patch\"\n TRACE = \"trace\"\n @classmethod\n def from_str(cls, verb: str) -> \"HTTPVerb\":\n \"\"\"Parse an HTTP verb.\"\"\"\n try:\n return cls(verb)\n except ValueError:\n raise ValueError(f\"Invalid HTTP verb. Valid values are {cls.__members__}\")\n[docs]class OpenAPISpec(OpenAPI):\n \"\"\"OpenAPI Model that removes misformatted parts of the spec.\"\"\"\n @property\n def _paths_strict(self) -> Paths:\n if not self.paths:\n raise ValueError(\"No paths found in spec\")\n return self.paths\n def _get_path_strict(self, path: str) -> PathItem:\n path_item = self._paths_strict.get(path)\n if not path_item:\n raise ValueError(f\"No path found for {path}\")\n return path_item\n @property\n def _components_strict(self) -> Components:", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/openapi_utils.html"}
+{"id": "12f178f7435e-1", "text": "@property\n def _components_strict(self) -> Components:\n \"\"\"Get components or err.\"\"\"\n if self.components is None:\n raise ValueError(\"No components found in spec. \")\n return self.components\n @property\n def _parameters_strict(self) -> Dict[str, Union[Parameter, Reference]]:\n \"\"\"Get parameters or err.\"\"\"\n parameters = self._components_strict.parameters\n if parameters is None:\n raise ValueError(\"No parameters found in spec. \")\n return parameters\n @property\n def _schemas_strict(self) -> Dict[str, Schema]:\n \"\"\"Get the dictionary of schemas or err.\"\"\"\n schemas = self._components_strict.schemas\n if schemas is None:\n raise ValueError(\"No schemas found in spec. \")\n return schemas\n @property\n def _request_bodies_strict(self) -> Dict[str, Union[RequestBody, Reference]]:\n \"\"\"Get the request body or err.\"\"\"\n request_bodies = self._components_strict.requestBodies\n if request_bodies is None:\n raise ValueError(\"No request body found in spec. \")\n return request_bodies\n def _get_referenced_parameter(self, ref: Reference) -> Union[Parameter, Reference]:\n \"\"\"Get a parameter (or nested reference) or err.\"\"\"\n ref_name = ref.ref.split(\"/\")[-1]\n parameters = self._parameters_strict\n if ref_name not in parameters:\n raise ValueError(f\"No parameter found for {ref_name}\")\n return parameters[ref_name]\n def _get_root_referenced_parameter(self, ref: Reference) -> Parameter:\n \"\"\"Get the root reference or err.\"\"\"\n parameter = self._get_referenced_parameter(ref)\n while isinstance(parameter, Reference):", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/openapi_utils.html"}
+{"id": "12f178f7435e-2", "text": "parameter = self._get_referenced_parameter(ref)\n while isinstance(parameter, Reference):\n parameter = self._get_referenced_parameter(parameter)\n return parameter\n[docs] def get_referenced_schema(self, ref: Reference) -> Schema:\n \"\"\"Get a schema (or nested reference) or err.\"\"\"\n ref_name = ref.ref.split(\"/\")[-1]\n schemas = self._schemas_strict\n if ref_name not in schemas:\n raise ValueError(f\"No schema found for {ref_name}\")\n return schemas[ref_name]\n def _get_root_referenced_schema(self, ref: Reference) -> Schema:\n \"\"\"Get the root reference or err.\"\"\"\n schema = self.get_referenced_schema(ref)\n while isinstance(schema, Reference):\n schema = self.get_referenced_schema(schema)\n return schema\n def _get_referenced_request_body(\n self, ref: Reference\n ) -> Optional[Union[Reference, RequestBody]]:\n \"\"\"Get a request body (or nested reference) or err.\"\"\"\n ref_name = ref.ref.split(\"/\")[-1]\n request_bodies = self._request_bodies_strict\n if ref_name not in request_bodies:\n raise ValueError(f\"No request body found for {ref_name}\")\n return request_bodies[ref_name]\n def _get_root_referenced_request_body(\n self, ref: Reference\n ) -> Optional[RequestBody]:\n \"\"\"Get the root request Body or err.\"\"\"\n request_body = self._get_referenced_request_body(ref)\n while isinstance(request_body, Reference):\n request_body = self._get_referenced_request_body(request_body)\n return request_body\n @staticmethod\n def _alert_unsupported_spec(obj: dict) -> None:\n \"\"\"Alert if the spec is not supported.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/openapi_utils.html"}
+{"id": "12f178f7435e-3", "text": "\"\"\"Alert if the spec is not supported.\"\"\"\n warning_message = (\n \" This may result in degraded performance.\"\n + \" Convert your OpenAPI spec to 3.1.* spec\"\n + \" for better support.\"\n )\n swagger_version = obj.get(\"swagger\")\n openapi_version = obj.get(\"openapi\")\n if isinstance(openapi_version, str):\n if openapi_version != \"3.1.0\":\n logger.warning(\n f\"Attempting to load an OpenAPI {openapi_version}\"\n f\" spec. {warning_message}\"\n )\n else:\n pass\n elif isinstance(swagger_version, str):\n logger.warning(\n f\"Attempting to load a Swagger {swagger_version}\"\n f\" spec. {warning_message}\"\n )\n else:\n raise ValueError(\n \"Attempting to load an unsupported spec:\"\n f\"\\n\\n{obj}\\n{warning_message}\"\n )\n[docs] @classmethod\n def parse_obj(cls, obj: dict) -> \"OpenAPISpec\":\n try:\n cls._alert_unsupported_spec(obj)\n return super().parse_obj(obj)\n except ValidationError as e:\n # We are handling possibly misconfigured specs and want to do a best-effort\n # job to get a reasonable interface out of it.\n new_obj = copy.deepcopy(obj)\n for error in e.errors():\n keys = error[\"loc\"]\n item = new_obj\n for key in keys[:-1]:\n item = item[key]\n item.pop(keys[-1], None)\n return cls.parse_obj(new_obj)\n[docs] @classmethod\n def from_spec_dict(cls, spec_dict: dict) -> \"OpenAPISpec\":", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/openapi_utils.html"}
+{"id": "12f178f7435e-4", "text": "def from_spec_dict(cls, spec_dict: dict) -> \"OpenAPISpec\":\n \"\"\"Get an OpenAPI spec from a dict.\"\"\"\n return cls.parse_obj(spec_dict)\n[docs] @classmethod\n def from_text(cls, text: str) -> \"OpenAPISpec\":\n \"\"\"Get an OpenAPI spec from a text.\"\"\"\n try:\n spec_dict = json.loads(text)\n except json.JSONDecodeError:\n spec_dict = yaml.safe_load(text)\n return cls.from_spec_dict(spec_dict)\n[docs] @classmethod\n def from_file(cls, path: Union[str, Path]) -> \"OpenAPISpec\":\n \"\"\"Get an OpenAPI spec from a file path.\"\"\"\n path_ = path if isinstance(path, Path) else Path(path)\n if not path_.exists():\n raise FileNotFoundError(f\"{path} does not exist\")\n with path_.open(\"r\") as f:\n return cls.from_text(f.read())\n[docs] @classmethod\n def from_url(cls, url: str) -> \"OpenAPISpec\":\n \"\"\"Get an OpenAPI spec from a URL.\"\"\"\n response = requests.get(url)\n return cls.from_text(response.text)\n @property\n def base_url(self) -> str:\n \"\"\"Get the base url.\"\"\"\n return self.servers[0].url\n[docs] def get_methods_for_path(self, path: str) -> List[str]:\n \"\"\"Return a list of valid methods for the specified path.\"\"\"\n path_item = self._get_path_strict(path)\n results = []\n for method in HTTPVerb:\n operation = getattr(path_item, method.value, None)\n if isinstance(operation, Operation):\n results.append(method.value)\n return results", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/openapi_utils.html"}
+{"id": "12f178f7435e-5", "text": "if isinstance(operation, Operation):\n results.append(method.value)\n return results\n[docs] def get_operation(self, path: str, method: str) -> Operation:\n \"\"\"Get the operation object for a given path and HTTP method.\"\"\"\n path_item = self._get_path_strict(path)\n operation_obj = getattr(path_item, method, None)\n if not isinstance(operation_obj, Operation):\n raise ValueError(f\"No {method} method found for {path}\")\n return operation_obj\n[docs] def get_parameters_for_operation(self, operation: Operation) -> List[Parameter]:\n \"\"\"Get the components for a given operation.\"\"\"\n parameters = []\n if operation.parameters:\n for parameter in operation.parameters:\n if isinstance(parameter, Reference):\n parameter = self._get_root_referenced_parameter(parameter)\n parameters.append(parameter)\n return parameters\n[docs] def get_request_body_for_operation(\n self, operation: Operation\n ) -> Optional[RequestBody]:\n \"\"\"Get the request body for a given operation.\"\"\"\n request_body = operation.requestBody\n if isinstance(request_body, Reference):\n request_body = self._get_root_referenced_request_body(request_body)\n return request_body\n[docs] @staticmethod\n def get_cleaned_operation_id(operation: Operation, path: str, method: str) -> str:\n \"\"\"Get a cleaned operation id from an operation id.\"\"\"\n operation_id = operation.operationId\n if operation_id is None:\n # Replace all punctuation of any kind with underscore\n path = re.sub(r\"[^a-zA-Z0-9]\", \"_\", path.lstrip(\"/\"))\n operation_id = f\"{path}_{method}\"\n return operation_id.replace(\"-\", \"_\").replace(\".\", \"_\").replace(\"/\", \"_\")\nBy Harrison Chase", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/openapi_utils.html"}
+{"id": "12f178f7435e-6", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/openapi_utils.html"}
+{"id": "3a4b469b7672-0", "text": "Source code for langchain.tools.ddg_search.tool\n\"\"\"Tool for the DuckDuckGo search API.\"\"\"\nimport warnings\nfrom typing import Any, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.duckduckgo_search import DuckDuckGoSearchAPIWrapper\n[docs]class DuckDuckGoSearchRun(BaseTool):\n \"\"\"Tool that adds the capability to query the DuckDuckGo search API.\"\"\"\n name = \"DuckDuckGo Search\"\n description = (\n \"A wrapper around DuckDuckGo Search. \"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query.\"\n )\n api_wrapper: DuckDuckGoSearchAPIWrapper = Field(\n default_factory=DuckDuckGoSearchAPIWrapper\n )\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"DuckDuckGoSearch does not support async\")\n[docs]class DuckDuckGoSearchResults(BaseTool):\n \"\"\"Tool that queries the Duck Duck Go Search API and get back json.\"\"\"\n name = \"DuckDuckGo Results JSON\"\n description = (\n \"A wrapper around Duck Duck Go Search. \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/ddg_search/tool.html"}
+{"id": "3a4b469b7672-1", "text": "description = (\n \"A wrapper around Duck Duck Go Search. \"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query. Output is a JSON array of the query results\"\n )\n num_results: int = 4\n api_wrapper: DuckDuckGoSearchAPIWrapper = Field(\n default_factory=DuckDuckGoSearchAPIWrapper\n )\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return str(self.api_wrapper.results(query, self.num_results))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"DuckDuckGoSearchResults does not support async\")\ndef DuckDuckGoSearchTool(*args: Any, **kwargs: Any) -> DuckDuckGoSearchRun:\n warnings.warn(\n \"DuckDuckGoSearchTool will be deprecated in the future. \"\n \"Please use DuckDuckGoSearchRun instead.\",\n DeprecationWarning,\n )\n return DuckDuckGoSearchRun(*args, **kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/ddg_search/tool.html"}
+{"id": "e5718d13777f-0", "text": "Source code for langchain.tools.scenexplain.tool\n\"\"\"Tool for the SceneXplain API.\"\"\"\nfrom typing import Optional\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.scenexplain import SceneXplainAPIWrapper\nclass SceneXplainInput(BaseModel):\n \"\"\"Input for SceneXplain.\"\"\"\n query: str = Field(..., description=\"The link to the image to explain\")\n[docs]class SceneXplainTool(BaseTool):\n \"\"\"Tool that adds the capability to explain images.\"\"\"\n name = \"Image Explainer\"\n description = (\n \"An Image Captioning Tool: Use this tool to generate a detailed caption \"\n \"for an image. The input can be an image file of any format, and \"\n \"the output will be a text description that covers every detail of the image.\"\n )\n api_wrapper: SceneXplainAPIWrapper = Field(default_factory=SceneXplainAPIWrapper)\n def _run(\n self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"SceneXplainTool does not support async\")\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/scenexplain/tool.html"}
+{"id": "cde0d43a8533-0", "text": "Source code for langchain.tools.wikipedia.tool\n\"\"\"Tool for the Wikipedia API.\"\"\"\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.wikipedia import WikipediaAPIWrapper\n[docs]class WikipediaQueryRun(BaseTool):\n \"\"\"Tool that adds the capability to search using the Wikipedia API.\"\"\"\n name = \"Wikipedia\"\n description = (\n \"A wrapper around Wikipedia. \"\n \"Useful for when you need to answer general questions about \"\n \"people, places, companies, facts, historical events, or other subjects. \"\n \"Input should be a search query.\"\n )\n api_wrapper: WikipediaAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Wikipedia tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Wikipedia tool asynchronously.\"\"\"\n raise NotImplementedError(\"WikipediaQueryRun does not support async\")\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/wikipedia/tool.html"}
+{"id": "d17c547decdd-0", "text": "Source code for langchain.tools.powerbi.tool\n\"\"\"Tools for interacting with a Power BI dataset.\"\"\"\nimport logging\nfrom typing import Any, Dict, Optional, Tuple\nfrom pydantic import Field, validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.powerbi.prompt import (\n BAD_REQUEST_RESPONSE,\n DEFAULT_FEWSHOT_EXAMPLES,\n QUESTION_TO_QUERY,\n RETRY_RESPONSE,\n)\nfrom langchain.utilities.powerbi import PowerBIDataset, json_to_md\nlogger = logging.getLogger(__name__)\n[docs]class QueryPowerBITool(BaseTool):\n \"\"\"Tool for querying a Power BI Dataset.\"\"\"\n name = \"query_powerbi\"\n description = \"\"\"\n Input to this tool is a detailed question about the dataset, output is a result from the dataset. It will try to answer the question using the dataset, and if it cannot, it will ask for clarification.\n Example Input: \"How many rows are in table1?\"\n \"\"\" # noqa: E501\n llm_chain: LLMChain\n powerbi: PowerBIDataset = Field(exclude=True)\n template: Optional[str] = QUESTION_TO_QUERY\n examples: Optional[str] = DEFAULT_FEWSHOT_EXAMPLES\n session_cache: Dict[str, Any] = Field(default_factory=dict, exclude=True)\n max_iterations: int = 5\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n @validator(\"llm_chain\")\n def validate_llm_chain_input_variables( # pylint: disable=E0213\n cls, llm_chain: LLMChain", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"}
+{"id": "d17c547decdd-1", "text": "cls, llm_chain: LLMChain\n ) -> LLMChain:\n \"\"\"Make sure the LLM chain has the correct input variables.\"\"\"\n if llm_chain.prompt.input_variables != [\n \"tool_input\",\n \"tables\",\n \"schemas\",\n \"examples\",\n ]:\n raise ValueError(\n \"LLM chain for QueryPowerBITool must have input variables ['tool_input', 'tables', 'schemas', 'examples'], found %s\", # noqa: C0301 E501 # pylint: disable=C0301\n llm_chain.prompt.input_variables,\n )\n return llm_chain\n def _check_cache(self, tool_input: str) -> Optional[str]:\n \"\"\"Check if the input is present in the cache.\n If the value is a bad request, overwrite with the escalated version,\n if not present return None.\"\"\"\n if tool_input not in self.session_cache:\n return None\n return self.session_cache[tool_input]\n def _run(\n self,\n tool_input: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Execute the query, return the results or an error message.\"\"\"\n if cache := self._check_cache(tool_input):\n logger.debug(\"Found cached result for %s: %s\", tool_input, cache)\n return cache\n try:\n logger.info(\"Running PBI Query Tool with input: %s\", tool_input)\n query = self.llm_chain.predict(\n tool_input=tool_input,\n tables=self.powerbi.get_table_names(),\n schemas=self.powerbi.get_schemas(),\n examples=self.examples,\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"}
+{"id": "d17c547decdd-2", "text": "schemas=self.powerbi.get_schemas(),\n examples=self.examples,\n )\n except Exception as exc: # pylint: disable=broad-except\n self.session_cache[tool_input] = f\"Error on call to LLM: {exc}\"\n return self.session_cache[tool_input]\n if query == \"I cannot answer this\":\n self.session_cache[tool_input] = query\n return self.session_cache[tool_input]\n logger.info(\"Query: %s\", query)\n pbi_result = self.powerbi.run(command=query)\n result, error = self._parse_output(pbi_result)\n iterations = kwargs.get(\"iterations\", 0)\n if error and iterations < self.max_iterations:\n return self._run(\n tool_input=RETRY_RESPONSE.format(\n tool_input=tool_input, query=query, error=error\n ),\n run_manager=run_manager,\n iterations=iterations + 1,\n )\n self.session_cache[tool_input] = (\n result if result else BAD_REQUEST_RESPONSE.format(error=error)\n )\n return self.session_cache[tool_input]\n async def _arun(\n self,\n tool_input: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Execute the query, return the results or an error message.\"\"\"\n if cache := self._check_cache(tool_input):\n logger.debug(\"Found cached result for %s: %s\", tool_input, cache)\n return cache\n try:\n logger.info(\"Running PBI Query Tool with input: %s\", tool_input)\n query = await self.llm_chain.apredict(\n tool_input=tool_input,", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"}
+{"id": "d17c547decdd-3", "text": "query = await self.llm_chain.apredict(\n tool_input=tool_input,\n tables=self.powerbi.get_table_names(),\n schemas=self.powerbi.get_schemas(),\n examples=self.examples,\n )\n except Exception as exc: # pylint: disable=broad-except\n self.session_cache[tool_input] = f\"Error on call to LLM: {exc}\"\n return self.session_cache[tool_input]\n if query == \"I cannot answer this\":\n self.session_cache[tool_input] = query\n return self.session_cache[tool_input]\n logger.info(\"Query: %s\", query)\n pbi_result = await self.powerbi.arun(command=query)\n result, error = self._parse_output(pbi_result)\n iterations = kwargs.get(\"iterations\", 0)\n if error and iterations < self.max_iterations:\n return await self._arun(\n tool_input=RETRY_RESPONSE.format(\n tool_input=tool_input, query=query, error=error\n ),\n run_manager=run_manager,\n iterations=iterations + 1,\n )\n self.session_cache[tool_input] = (\n result if result else BAD_REQUEST_RESPONSE.format(error=error)\n )\n return self.session_cache[tool_input]\n def _parse_output(\n self, pbi_result: Dict[str, Any]\n ) -> Tuple[Optional[str], Optional[str]]:\n \"\"\"Parse the output of the query to a markdown table.\"\"\"\n if \"results\" in pbi_result:\n return json_to_md(pbi_result[\"results\"][0][\"tables\"][0][\"rows\"]), None\n if \"error\" in pbi_result:\n if (\n \"pbi.error\" in pbi_result[\"error\"]", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"}
+{"id": "d17c547decdd-4", "text": "if (\n \"pbi.error\" in pbi_result[\"error\"]\n and \"details\" in pbi_result[\"error\"][\"pbi.error\"]\n ):\n return None, pbi_result[\"error\"][\"pbi.error\"][\"details\"][0][\"detail\"]\n return None, pbi_result[\"error\"]\n return None, \"Unknown error\"\n[docs]class InfoPowerBITool(BaseTool):\n \"\"\"Tool for getting metadata about a PowerBI Dataset.\"\"\"\n name = \"schema_powerbi\"\n description = \"\"\"\n Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\n Be sure that the tables actually exist by calling list_tables_powerbi first!\n Example Input: \"table1, table2, table3\"\n \"\"\" # noqa: E501\n powerbi: PowerBIDataset = Field(exclude=True)\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n def _run(\n self,\n tool_input: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Get the schema for tables in a comma-separated list.\"\"\"\n return self.powerbi.get_table_info(tool_input.split(\", \"))\n async def _arun(\n self,\n tool_input: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n return await self.powerbi.aget_table_info(tool_input.split(\", \"))\n[docs]class ListPowerBITool(BaseTool):\n \"\"\"Tool for getting tables names.\"\"\"\n name = \"list_tables_powerbi\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"}
+{"id": "d17c547decdd-5", "text": "\"\"\"Tool for getting tables names.\"\"\"\n name = \"list_tables_powerbi\"\n description = \"Input is an empty string, output is a comma separated list of tables in the database.\" # noqa: E501 # pylint: disable=C0301\n powerbi: PowerBIDataset = Field(exclude=True)\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n def _run(\n self,\n tool_input: Optional[str] = None,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Get the names of the tables.\"\"\"\n return \", \".join(self.powerbi.get_table_names())\n async def _arun(\n self,\n tool_input: Optional[str] = None,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Get the names of the tables.\"\"\"\n return \", \".join(self.powerbi.get_table_names())\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"}
+{"id": "56ab6aae5005-0", "text": "Source code for langchain.experimental.autonomous_agents.baby_agi.baby_agi\n\"\"\"BabyAGI agent.\"\"\"\nfrom collections import deque\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.experimental.autonomous_agents.baby_agi.task_creation import (\n TaskCreationChain,\n)\nfrom langchain.experimental.autonomous_agents.baby_agi.task_execution import (\n TaskExecutionChain,\n)\nfrom langchain.experimental.autonomous_agents.baby_agi.task_prioritization import (\n TaskPrioritizationChain,\n)\nfrom langchain.vectorstores.base import VectorStore\n[docs]class BabyAGI(Chain, BaseModel):\n \"\"\"Controller model for the BabyAGI agent.\"\"\"\n task_list: deque = Field(default_factory=deque)\n task_creation_chain: Chain = Field(...)\n task_prioritization_chain: Chain = Field(...)\n execution_chain: Chain = Field(...)\n task_id_counter: int = Field(1)\n vectorstore: VectorStore = Field(init=False)\n max_iterations: Optional[int] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n def add_task(self, task: Dict) -> None:\n self.task_list.append(task)\n def print_task_list(self) -> None:\n print(\"\\033[95m\\033[1m\" + \"\\n*****TASK LIST*****\\n\" + \"\\033[0m\\033[0m\")\n for t in self.task_list:\n print(str(t[\"task_id\"]) + \": \" + t[\"task_name\"])", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html"}
+{"id": "56ab6aae5005-1", "text": "print(str(t[\"task_id\"]) + \": \" + t[\"task_name\"])\n def print_next_task(self, task: Dict) -> None:\n print(\"\\033[92m\\033[1m\" + \"\\n*****NEXT TASK*****\\n\" + \"\\033[0m\\033[0m\")\n print(str(task[\"task_id\"]) + \": \" + task[\"task_name\"])\n def print_task_result(self, result: str) -> None:\n print(\"\\033[93m\\033[1m\" + \"\\n*****TASK RESULT*****\\n\" + \"\\033[0m\\033[0m\")\n print(result)\n @property\n def input_keys(self) -> List[str]:\n return [\"objective\"]\n @property\n def output_keys(self) -> List[str]:\n return []\n[docs] def get_next_task(\n self, result: str, task_description: str, objective: str\n ) -> List[Dict]:\n \"\"\"Get the next task.\"\"\"\n task_names = [t[\"task_name\"] for t in self.task_list]\n incomplete_tasks = \", \".join(task_names)\n response = self.task_creation_chain.run(\n result=result,\n task_description=task_description,\n incomplete_tasks=incomplete_tasks,\n objective=objective,\n )\n new_tasks = response.split(\"\\n\")\n return [\n {\"task_name\": task_name} for task_name in new_tasks if task_name.strip()\n ]\n[docs] def prioritize_tasks(self, this_task_id: int, objective: str) -> List[Dict]:\n \"\"\"Prioritize tasks.\"\"\"\n task_names = [t[\"task_name\"] for t in list(self.task_list)]\n next_task_id = int(this_task_id) + 1", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html"}
+{"id": "56ab6aae5005-2", "text": "next_task_id = int(this_task_id) + 1\n response = self.task_prioritization_chain.run(\n task_names=\", \".join(task_names),\n next_task_id=str(next_task_id),\n objective=objective,\n )\n new_tasks = response.split(\"\\n\")\n prioritized_task_list = []\n for task_string in new_tasks:\n if not task_string.strip():\n continue\n task_parts = task_string.strip().split(\".\", 1)\n if len(task_parts) == 2:\n task_id = task_parts[0].strip()\n task_name = task_parts[1].strip()\n prioritized_task_list.append(\n {\"task_id\": task_id, \"task_name\": task_name}\n )\n return prioritized_task_list\n def _get_top_tasks(self, query: str, k: int) -> List[str]:\n \"\"\"Get the top k tasks based on the query.\"\"\"\n results = self.vectorstore.similarity_search(query, k=k)\n if not results:\n return []\n return [str(item.metadata[\"task\"]) for item in results]\n[docs] def execute_task(self, objective: str, task: str, k: int = 5) -> str:\n \"\"\"Execute a task.\"\"\"\n context = self._get_top_tasks(query=objective, k=k)\n return self.execution_chain.run(\n objective=objective, context=\"\\n\".join(context), task=task\n )\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Run the agent.\"\"\"\n objective = inputs[\"objective\"]", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html"}
+{"id": "56ab6aae5005-3", "text": "\"\"\"Run the agent.\"\"\"\n objective = inputs[\"objective\"]\n first_task = inputs.get(\"first_task\", \"Make a todo list\")\n self.add_task({\"task_id\": 1, \"task_name\": first_task})\n num_iters = 0\n while True:\n if self.task_list:\n self.print_task_list()\n # Step 1: Pull the first task\n task = self.task_list.popleft()\n self.print_next_task(task)\n # Step 2: Execute the task\n result = self.execute_task(objective, task[\"task_name\"])\n this_task_id = int(task[\"task_id\"])\n self.print_task_result(result)\n # Step 3: Store the result in Pinecone\n result_id = f\"result_{task['task_id']}\"\n self.vectorstore.add_texts(\n texts=[result],\n metadatas=[{\"task\": task[\"task_name\"]}],\n ids=[result_id],\n )\n # Step 4: Create new tasks and reprioritize task list\n new_tasks = self.get_next_task(result, task[\"task_name\"], objective)\n for new_task in new_tasks:\n self.task_id_counter += 1\n new_task.update({\"task_id\": self.task_id_counter})\n self.add_task(new_task)\n self.task_list = deque(self.prioritize_tasks(this_task_id, objective))\n num_iters += 1\n if self.max_iterations is not None and num_iters == self.max_iterations:\n print(\n \"\\033[91m\\033[1m\" + \"\\n*****TASK ENDING*****\\n\" + \"\\033[0m\\033[0m\"\n )\n break\n return {}\n[docs] @classmethod\n def from_llm(", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html"}
+{"id": "56ab6aae5005-4", "text": "return {}\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n vectorstore: VectorStore,\n verbose: bool = False,\n task_execution_chain: Optional[Chain] = None,\n **kwargs: Dict[str, Any],\n ) -> \"BabyAGI\":\n \"\"\"Initialize the BabyAGI Controller.\"\"\"\n task_creation_chain = TaskCreationChain.from_llm(llm, verbose=verbose)\n task_prioritization_chain = TaskPrioritizationChain.from_llm(\n llm, verbose=verbose\n )\n if task_execution_chain is None:\n execution_chain: Chain = TaskExecutionChain.from_llm(llm, verbose=verbose)\n else:\n execution_chain = task_execution_chain\n return cls(\n task_creation_chain=task_creation_chain,\n task_prioritization_chain=task_prioritization_chain,\n execution_chain=execution_chain,\n vectorstore=vectorstore,\n **kwargs,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html"}
+{"id": "58636522b24c-0", "text": "Source code for langchain.experimental.autonomous_agents.autogpt.agent\nfrom __future__ import annotations\nfrom typing import List, Optional\nfrom pydantic import ValidationError\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.experimental.autonomous_agents.autogpt.output_parser import (\n AutoGPTOutputParser,\n BaseAutoGPTOutputParser,\n)\nfrom langchain.experimental.autonomous_agents.autogpt.prompt import AutoGPTPrompt\nfrom langchain.experimental.autonomous_agents.autogpt.prompt_generator import (\n FINISH_NAME,\n)\nfrom langchain.schema import (\n AIMessage,\n BaseMessage,\n Document,\n HumanMessage,\n SystemMessage,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.human.tool import HumanInputRun\nfrom langchain.vectorstores.base import VectorStoreRetriever\n[docs]class AutoGPT:\n \"\"\"Agent class for interacting with Auto-GPT.\"\"\"\n def __init__(\n self,\n ai_name: str,\n memory: VectorStoreRetriever,\n chain: LLMChain,\n output_parser: BaseAutoGPTOutputParser,\n tools: List[BaseTool],\n feedback_tool: Optional[HumanInputRun] = None,\n ):\n self.ai_name = ai_name\n self.memory = memory\n self.full_message_history: List[BaseMessage] = []\n self.next_action_count = 0\n self.chain = chain\n self.output_parser = output_parser\n self.tools = tools\n self.feedback_tool = feedback_tool\n @classmethod\n def from_llm_and_tools(\n cls,\n ai_name: str,\n ai_role: str,\n memory: VectorStoreRetriever,", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/agent.html"}
+{"id": "58636522b24c-1", "text": "ai_role: str,\n memory: VectorStoreRetriever,\n tools: List[BaseTool],\n llm: BaseChatModel,\n human_in_the_loop: bool = False,\n output_parser: Optional[BaseAutoGPTOutputParser] = None,\n ) -> AutoGPT:\n prompt = AutoGPTPrompt(\n ai_name=ai_name,\n ai_role=ai_role,\n tools=tools,\n input_variables=[\"memory\", \"messages\", \"goals\", \"user_input\"],\n token_counter=llm.get_num_tokens,\n )\n human_feedback_tool = HumanInputRun() if human_in_the_loop else None\n chain = LLMChain(llm=llm, prompt=prompt)\n return cls(\n ai_name,\n memory,\n chain,\n output_parser or AutoGPTOutputParser(),\n tools,\n feedback_tool=human_feedback_tool,\n )\n def run(self, goals: List[str]) -> str:\n user_input = (\n \"Determine which next command to use, \"\n \"and respond using the format specified above:\"\n )\n # Interaction Loop\n loop_count = 0\n while True:\n # Discontinue if continuous limit is reached\n loop_count += 1\n # Send message to AI, get response\n assistant_reply = self.chain.run(\n goals=goals,\n messages=self.full_message_history,\n memory=self.memory,\n user_input=user_input,\n )\n # Print Assistant thoughts\n print(assistant_reply)\n self.full_message_history.append(HumanMessage(content=user_input))\n self.full_message_history.append(AIMessage(content=assistant_reply))\n # Get command name and arguments", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/agent.html"}
+{"id": "58636522b24c-2", "text": "# Get command name and arguments\n action = self.output_parser.parse(assistant_reply)\n tools = {t.name: t for t in self.tools}\n if action.name == FINISH_NAME:\n return action.args[\"response\"]\n if action.name in tools:\n tool = tools[action.name]\n try:\n observation = tool.run(action.args)\n except ValidationError as e:\n observation = (\n f\"Validation Error in args: {str(e)}, args: {action.args}\"\n )\n except Exception as e:\n observation = (\n f\"Error: {str(e)}, {type(e).__name__}, args: {action.args}\"\n )\n result = f\"Command {tool.name} returned: {observation}\"\n elif action.name == \"ERROR\":\n result = f\"Error: {action.args}. \"\n else:\n result = (\n f\"Unknown command '{action.name}'. \"\n f\"Please refer to the 'COMMANDS' list for available \"\n f\"commands and only respond in the specified JSON format.\"\n )\n memory_to_add = (\n f\"Assistant Reply: {assistant_reply} \" f\"\\nResult: {result} \"\n )\n if self.feedback_tool is not None:\n feedback = f\"\\n{self.feedback_tool.run('Input: ')}\"\n if feedback in {\"q\", \"stop\"}:\n print(\"EXITING\")\n return \"EXITING\"\n memory_to_add += feedback\n self.memory.add_documents([Document(page_content=memory_to_add)])\n self.full_message_history.append(SystemMessage(content=result))\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/agent.html"}
+{"id": "36481f540ad3-0", "text": "Source code for langchain.experimental.generative_agents.memory\nimport logging\nimport re\nfrom datetime import datetime\nfrom typing import Any, Dict, List, Optional\nfrom langchain import LLMChain\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.prompts import PromptTemplate\nfrom langchain.retrievers import TimeWeightedVectorStoreRetriever\nfrom langchain.schema import BaseMemory, Document\nfrom langchain.utils import mock_now\nlogger = logging.getLogger(__name__)\n[docs]class GenerativeAgentMemory(BaseMemory):\n llm: BaseLanguageModel\n \"\"\"The core language model.\"\"\"\n memory_retriever: TimeWeightedVectorStoreRetriever\n \"\"\"The retriever to fetch related memories.\"\"\"\n verbose: bool = False\n reflection_threshold: Optional[float] = None\n \"\"\"When aggregate_importance exceeds reflection_threshold, stop to reflect.\"\"\"\n current_plan: List[str] = []\n \"\"\"The current plan of the agent.\"\"\"\n # A weight of 0.15 makes this less important than it\n # would be otherwise, relative to salience and time\n importance_weight: float = 0.15\n \"\"\"How much weight to assign the memory importance.\"\"\"\n aggregate_importance: float = 0.0 # : :meta private:\n \"\"\"Track the sum of the 'importance' of recent memories.\n Triggers reflection when it reaches reflection_threshold.\"\"\"\n max_tokens_limit: int = 1200 # : :meta private:\n # input keys\n queries_key: str = \"queries\"\n most_recent_memories_token_key: str = \"recent_memories_token\"\n add_memory_key: str = \"add_memory\"\n # output keys\n relevant_memories_key: str = \"relevant_memories\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"}
+{"id": "36481f540ad3-1", "text": "# output keys\n relevant_memories_key: str = \"relevant_memories\"\n relevant_memories_simple_key: str = \"relevant_memories_simple\"\n most_recent_memories_key: str = \"most_recent_memories\"\n now_key: str = \"now\"\n reflecting: bool = False\n def chain(self, prompt: PromptTemplate) -> LLMChain:\n return LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose)\n @staticmethod\n def _parse_list(text: str) -> List[str]:\n \"\"\"Parse a newline-separated string into a list of strings.\"\"\"\n lines = re.split(r\"\\n\", text.strip())\n lines = [line for line in lines if line.strip()] # remove empty lines\n return [re.sub(r\"^\\s*\\d+\\.\\s*\", \"\", line).strip() for line in lines]\n def _get_topics_of_reflection(self, last_k: int = 50) -> List[str]:\n \"\"\"Return the 3 most salient high-level questions about recent observations.\"\"\"\n prompt = PromptTemplate.from_template(\n \"{observations}\\n\\n\"\n \"Given only the information above, what are the 3 most salient \"\n \"high-level questions we can answer about the subjects in the statements?\\n\"\n \"Provide each question on a new line.\"\n )\n observations = self.memory_retriever.memory_stream[-last_k:]\n observation_str = \"\\n\".join(\n [self._format_memory_detail(o) for o in observations]\n )\n result = self.chain(prompt).run(observations=observation_str)\n return self._parse_list(result)\n def _get_insights_on_topic(\n self, topic: str, now: Optional[datetime] = None", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"}
+{"id": "36481f540ad3-2", "text": "self, topic: str, now: Optional[datetime] = None\n ) -> List[str]:\n \"\"\"Generate 'insights' on a topic of reflection, based on pertinent memories.\"\"\"\n prompt = PromptTemplate.from_template(\n \"Statements relevant to: '{topic}'\\n\"\n \"---\\n\"\n \"{related_statements}\\n\"\n \"---\\n\"\n \"What 5 high-level novel insights can you infer from the above statements \"\n \"that are relevant for answering the following question?\\n\"\n \"Do not include any insights that are not relevant to the question.\\n\"\n \"Do not repeat any insights that have already been made.\\n\\n\"\n \"Question: {topic}\\n\\n\"\n \"(example format: insight (because of 1, 5, 3))\\n\"\n )\n related_memories = self.fetch_memories(topic, now=now)\n related_statements = \"\\n\".join(\n [\n self._format_memory_detail(memory, prefix=f\"{i+1}. \")\n for i, memory in enumerate(related_memories)\n ]\n )\n result = self.chain(prompt).run(\n topic=topic, related_statements=related_statements\n )\n # TODO: Parse the connections between memories and insights\n return self._parse_list(result)\n[docs] def pause_to_reflect(self, now: Optional[datetime] = None) -> List[str]:\n \"\"\"Reflect on recent observations and generate 'insights'.\"\"\"\n if self.verbose:\n logger.info(\"Character is reflecting\")\n new_insights = []\n topics = self._get_topics_of_reflection()\n for topic in topics:\n insights = self._get_insights_on_topic(topic, now=now)", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"}
+{"id": "36481f540ad3-3", "text": "insights = self._get_insights_on_topic(topic, now=now)\n for insight in insights:\n self.add_memory(insight, now=now)\n new_insights.extend(insights)\n return new_insights\n def _score_memory_importance(self, memory_content: str) -> float:\n \"\"\"Score the absolute importance of the given memory.\"\"\"\n prompt = PromptTemplate.from_template(\n \"On the scale of 1 to 10, where 1 is purely mundane\"\n + \" (e.g., brushing teeth, making bed) and 10 is\"\n + \" extremely poignant (e.g., a break up, college\"\n + \" acceptance), rate the likely poignancy of the\"\n + \" following piece of memory. Respond with a single integer.\"\n + \"\\nMemory: {memory_content}\"\n + \"\\nRating: \"\n )\n score = self.chain(prompt).run(memory_content=memory_content).strip()\n if self.verbose:\n logger.info(f\"Importance score: {score}\")\n match = re.search(r\"^\\D*(\\d+)\", score)\n if match:\n return (float(match.group(1)) / 10) * self.importance_weight\n else:\n return 0.0\n def _score_memories_importance(self, memory_content: str) -> List[float]:\n \"\"\"Score the absolute importance of the given memory.\"\"\"\n prompt = PromptTemplate.from_template(\n \"On the scale of 1 to 10, where 1 is purely mundane\"\n + \" (e.g., brushing teeth, making bed) and 10 is\"\n + \" extremely poignant (e.g., a break up, college\"\n + \" acceptance), rate the likely poignancy of the\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"}
+{"id": "36481f540ad3-4", "text": "+ \" acceptance), rate the likely poignancy of the\"\n + \" following piece of memory. Always answer with only a list of numbers.\"\n + \" If just given one memory still respond in a list.\"\n + \" Memories are separated by semi colans (;)\"\n + \"\\Memories: {memory_content}\"\n + \"\\nRating: \"\n )\n scores = self.chain(prompt).run(memory_content=memory_content).strip()\n if self.verbose:\n logger.info(f\"Importance scores: {scores}\")\n # Split into list of strings and convert to floats\n scores_list = [float(x) for x in scores.split(\";\")]\n return scores_list\n[docs] def add_memories(\n self, memory_content: str, now: Optional[datetime] = None\n ) -> List[str]:\n \"\"\"Add an observations or memories to the agent's memory.\"\"\"\n importance_scores = self._score_memories_importance(memory_content)\n self.aggregate_importance += max(importance_scores)\n memory_list = memory_content.split(\";\")\n documents = []\n for i in range(len(memory_list)):\n documents.append(\n Document(\n page_content=memory_list[i],\n metadata={\"importance\": importance_scores[i]},\n )\n )\n result = self.memory_retriever.add_documents(documents, current_time=now)\n # After an agent has processed a certain amount of memories (as measured by\n # aggregate importance), it is time to reflect on recent events to add\n # more synthesized memories to the agent's memory stream.\n if (\n self.reflection_threshold is not None\n and self.aggregate_importance > self.reflection_threshold\n and not self.reflecting\n ):\n self.reflecting = True", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"}
+{"id": "36481f540ad3-5", "text": "and not self.reflecting\n ):\n self.reflecting = True\n self.pause_to_reflect(now=now)\n # Hack to clear the importance from reflection\n self.aggregate_importance = 0.0\n self.reflecting = False\n return result\n[docs] def add_memory(\n self, memory_content: str, now: Optional[datetime] = None\n ) -> List[str]:\n \"\"\"Add an observation or memory to the agent's memory.\"\"\"\n importance_score = self._score_memory_importance(memory_content)\n self.aggregate_importance += importance_score\n document = Document(\n page_content=memory_content, metadata={\"importance\": importance_score}\n )\n result = self.memory_retriever.add_documents([document], current_time=now)\n # After an agent has processed a certain amount of memories (as measured by\n # aggregate importance), it is time to reflect on recent events to add\n # more synthesized memories to the agent's memory stream.\n if (\n self.reflection_threshold is not None\n and self.aggregate_importance > self.reflection_threshold\n and not self.reflecting\n ):\n self.reflecting = True\n self.pause_to_reflect(now=now)\n # Hack to clear the importance from reflection\n self.aggregate_importance = 0.0\n self.reflecting = False\n return result\n[docs] def fetch_memories(\n self, observation: str, now: Optional[datetime] = None\n ) -> List[Document]:\n \"\"\"Fetch related memories.\"\"\"\n if now is not None:\n with mock_now(now):\n return self.memory_retriever.get_relevant_documents(observation)\n else:\n return self.memory_retriever.get_relevant_documents(observation)", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"}
+{"id": "36481f540ad3-6", "text": "else:\n return self.memory_retriever.get_relevant_documents(observation)\n def format_memories_detail(self, relevant_memories: List[Document]) -> str:\n content = []\n for mem in relevant_memories:\n content.append(self._format_memory_detail(mem, prefix=\"- \"))\n return \"\\n\".join([f\"{mem}\" for mem in content])\n def _format_memory_detail(self, memory: Document, prefix: str = \"\") -> str:\n created_time = memory.metadata[\"created_at\"].strftime(\"%B %d, %Y, %I:%M %p\")\n return f\"{prefix}[{created_time}] {memory.page_content.strip()}\"\n def format_memories_simple(self, relevant_memories: List[Document]) -> str:\n return \"; \".join([f\"{mem.page_content}\" for mem in relevant_memories])\n def _get_memories_until_limit(self, consumed_tokens: int) -> str:\n \"\"\"Reduce the number of tokens in the documents.\"\"\"\n result = []\n for doc in self.memory_retriever.memory_stream[::-1]:\n if consumed_tokens >= self.max_tokens_limit:\n break\n consumed_tokens += self.llm.get_num_tokens(doc.page_content)\n if consumed_tokens < self.max_tokens_limit:\n result.append(doc)\n return self.format_memories_simple(result)\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Input keys this memory class will load dynamically.\"\"\"\n return []\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n \"\"\"Return key-value pairs given the text input to the chain.\"\"\"\n queries = inputs.get(self.queries_key)\n now = inputs.get(self.now_key)\n if queries is not None:", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"}
+{"id": "36481f540ad3-7", "text": "now = inputs.get(self.now_key)\n if queries is not None:\n relevant_memories = [\n mem for query in queries for mem in self.fetch_memories(query, now=now)\n ]\n return {\n self.relevant_memories_key: self.format_memories_detail(\n relevant_memories\n ),\n self.relevant_memories_simple_key: self.format_memories_simple(\n relevant_memories\n ),\n }\n most_recent_memories_token = inputs.get(self.most_recent_memories_token_key)\n if most_recent_memories_token is not None:\n return {\n self.most_recent_memories_key: self._get_memories_until_limit(\n most_recent_memories_token\n )\n }\n return {}\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, Any]) -> None:\n \"\"\"Save the context of this model run to memory.\"\"\"\n # TODO: fix the save memory key\n mem = outputs.get(self.add_memory_key)\n now = outputs.get(self.now_key)\n if mem:\n self.add_memory(mem, now=now)\n[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n # TODO\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"}
+{"id": "5164b97cae39-0", "text": "Source code for langchain.experimental.generative_agents.generative_agent\nimport re\nfrom datetime import datetime\nfrom typing import Any, Dict, List, Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom langchain import LLMChain\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.experimental.generative_agents.memory import GenerativeAgentMemory\nfrom langchain.prompts import PromptTemplate\n[docs]class GenerativeAgent(BaseModel):\n \"\"\"A character with memory and innate characteristics.\"\"\"\n name: str\n \"\"\"The character's name.\"\"\"\n age: Optional[int] = None\n \"\"\"The optional age of the character.\"\"\"\n traits: str = \"N/A\"\n \"\"\"Permanent traits to ascribe to the character.\"\"\"\n status: str\n \"\"\"The traits of the character you wish not to change.\"\"\"\n memory: GenerativeAgentMemory\n \"\"\"The memory object that combines relevance, recency, and 'importance'.\"\"\"\n llm: BaseLanguageModel\n \"\"\"The underlying language model.\"\"\"\n verbose: bool = False\n summary: str = \"\" #: :meta private:\n \"\"\"Stateful self-summary generated via reflection on the character's memory.\"\"\"\n summary_refresh_seconds: int = 3600 #: :meta private:\n \"\"\"How frequently to re-generate the summary.\"\"\"\n last_refreshed: datetime = Field(default_factory=datetime.now) # : :meta private:\n \"\"\"The last time the character's summary was regenerated.\"\"\"\n daily_summaries: List[str] = Field(default_factory=list) # : :meta private:\n \"\"\"Summary of the events in the plan that the agent took.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n # LLM-related methods\n @staticmethod", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"}
+{"id": "5164b97cae39-1", "text": "arbitrary_types_allowed = True\n # LLM-related methods\n @staticmethod\n def _parse_list(text: str) -> List[str]:\n \"\"\"Parse a newline-separated string into a list of strings.\"\"\"\n lines = re.split(r\"\\n\", text.strip())\n return [re.sub(r\"^\\s*\\d+\\.\\s*\", \"\", line).strip() for line in lines]\n def chain(self, prompt: PromptTemplate) -> LLMChain:\n return LLMChain(\n llm=self.llm, prompt=prompt, verbose=self.verbose, memory=self.memory\n )\n def _get_entity_from_observation(self, observation: str) -> str:\n prompt = PromptTemplate.from_template(\n \"What is the observed entity in the following observation? {observation}\"\n + \"\\nEntity=\"\n )\n return self.chain(prompt).run(observation=observation).strip()\n def _get_entity_action(self, observation: str, entity_name: str) -> str:\n prompt = PromptTemplate.from_template(\n \"What is the {entity} doing in the following observation? {observation}\"\n + \"\\nThe {entity} is\"\n )\n return (\n self.chain(prompt).run(entity=entity_name, observation=observation).strip()\n )\n[docs] def summarize_related_memories(self, observation: str) -> str:\n \"\"\"Summarize memories that are most relevant to an observation.\"\"\"\n prompt = PromptTemplate.from_template(\n \"\"\"\n{q1}?\nContext from memory:\n{relevant_memories}\nRelevant context: \n\"\"\"\n )\n entity_name = self._get_entity_from_observation(observation)\n entity_action = self._get_entity_action(observation, entity_name)", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"}
+{"id": "5164b97cae39-2", "text": "entity_action = self._get_entity_action(observation, entity_name)\n q1 = f\"What is the relationship between {self.name} and {entity_name}\"\n q2 = f\"{entity_name} is {entity_action}\"\n return self.chain(prompt=prompt).run(q1=q1, queries=[q1, q2]).strip()\n def _generate_reaction(\n self, observation: str, suffix: str, now: Optional[datetime] = None\n ) -> str:\n \"\"\"React to a given observation or dialogue act.\"\"\"\n prompt = PromptTemplate.from_template(\n \"{agent_summary_description}\"\n + \"\\nIt is {current_time}.\"\n + \"\\n{agent_name}'s status: {agent_status}\"\n + \"\\nSummary of relevant context from {agent_name}'s memory:\"\n + \"\\n{relevant_memories}\"\n + \"\\nMost recent observations: {most_recent_memories}\"\n + \"\\nObservation: {observation}\"\n + \"\\n\\n\"\n + suffix\n )\n agent_summary_description = self.get_summary(now=now)\n relevant_memories_str = self.summarize_related_memories(observation)\n current_time_str = (\n datetime.now().strftime(\"%B %d, %Y, %I:%M %p\")\n if now is None\n else now.strftime(\"%B %d, %Y, %I:%M %p\")\n )\n kwargs: Dict[str, Any] = dict(\n agent_summary_description=agent_summary_description,\n current_time=current_time_str,\n relevant_memories=relevant_memories_str,\n agent_name=self.name,\n observation=observation,\n agent_status=self.status,\n )\n consumed_tokens = self.llm.get_num_tokens(", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"}
+{"id": "5164b97cae39-3", "text": ")\n consumed_tokens = self.llm.get_num_tokens(\n prompt.format(most_recent_memories=\"\", **kwargs)\n )\n kwargs[self.memory.most_recent_memories_token_key] = consumed_tokens\n return self.chain(prompt=prompt).run(**kwargs).strip()\n def _clean_response(self, text: str) -> str:\n return re.sub(f\"^{self.name} \", \"\", text.strip()).strip()\n[docs] def generate_reaction(\n self, observation: str, now: Optional[datetime] = None\n ) -> Tuple[bool, str]:\n \"\"\"React to a given observation.\"\"\"\n call_to_action_template = (\n \"Should {agent_name} react to the observation, and if so,\"\n + \" what would be an appropriate reaction? Respond in one line.\"\n + ' If the action is to engage in dialogue, write:\\nSAY: \"what to say\"'\n + \"\\notherwise, write:\\nREACT: {agent_name}'s reaction (if anything).\"\n + \"\\nEither do nothing, react, or say something but not both.\\n\\n\"\n )\n full_result = self._generate_reaction(\n observation, call_to_action_template, now=now\n )\n result = full_result.strip().split(\"\\n\")[0]\n # AAA\n self.memory.save_context(\n {},\n {\n self.memory.add_memory_key: f\"{self.name} observed \"\n f\"{observation} and reacted by {result}\",\n self.memory.now_key: now,\n },\n )\n if \"REACT:\" in result:\n reaction = self._clean_response(result.split(\"REACT:\")[-1])\n return False, f\"{self.name} {reaction}\"\n if \"SAY:\" in result:", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"}
+{"id": "5164b97cae39-4", "text": "if \"SAY:\" in result:\n said_value = self._clean_response(result.split(\"SAY:\")[-1])\n return True, f\"{self.name} said {said_value}\"\n else:\n return False, result\n[docs] def generate_dialogue_response(\n self, observation: str, now: Optional[datetime] = None\n ) -> Tuple[bool, str]:\n \"\"\"React to a given observation.\"\"\"\n call_to_action_template = (\n \"What would {agent_name} say? To end the conversation, write:\"\n ' GOODBYE: \"what to say\". Otherwise to continue the conversation,'\n ' write: SAY: \"what to say next\"\\n\\n'\n )\n full_result = self._generate_reaction(\n observation, call_to_action_template, now=now\n )\n result = full_result.strip().split(\"\\n\")[0]\n if \"GOODBYE:\" in result:\n farewell = self._clean_response(result.split(\"GOODBYE:\")[-1])\n self.memory.save_context(\n {},\n {\n self.memory.add_memory_key: f\"{self.name} observed \"\n f\"{observation} and said {farewell}\",\n self.memory.now_key: now,\n },\n )\n return False, f\"{self.name} said {farewell}\"\n if \"SAY:\" in result:\n response_text = self._clean_response(result.split(\"SAY:\")[-1])\n self.memory.save_context(\n {},\n {\n self.memory.add_memory_key: f\"{self.name} observed \"\n f\"{observation} and said {response_text}\",\n self.memory.now_key: now,\n },\n )\n return True, f\"{self.name} said {response_text}\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"}
+{"id": "5164b97cae39-5", "text": ")\n return True, f\"{self.name} said {response_text}\"\n else:\n return False, result\n ######################################################\n # Agent stateful' summary methods. #\n # Each dialog or response prompt includes a header #\n # summarizing the agent's self-description. This is #\n # updated periodically through probing its memories #\n ######################################################\n def _compute_agent_summary(self) -> str:\n \"\"\"\"\"\"\n prompt = PromptTemplate.from_template(\n \"How would you summarize {name}'s core characteristics given the\"\n + \" following statements:\\n\"\n + \"{relevant_memories}\"\n + \"Do not embellish.\"\n + \"\\n\\nSummary: \"\n )\n # The agent seeks to think about their core characteristics.\n return (\n self.chain(prompt)\n .run(name=self.name, queries=[f\"{self.name}'s core characteristics\"])\n .strip()\n )\n[docs] def get_summary(\n self, force_refresh: bool = False, now: Optional[datetime] = None\n ) -> str:\n \"\"\"Return a descriptive summary of the agent.\"\"\"\n current_time = datetime.now() if now is None else now\n since_refresh = (current_time - self.last_refreshed).seconds\n if (\n not self.summary\n or since_refresh >= self.summary_refresh_seconds\n or force_refresh\n ):\n self.summary = self._compute_agent_summary()\n self.last_refreshed = current_time\n age = self.age if self.age is not None else \"N/A\"\n return (\n f\"Name: {self.name} (age: {age})\"\n + f\"\\nInnate traits: {self.traits}\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"}
+{"id": "5164b97cae39-6", "text": "+ f\"\\nInnate traits: {self.traits}\"\n + f\"\\n{self.summary}\"\n )\n[docs] def get_full_header(\n self, force_refresh: bool = False, now: Optional[datetime] = None\n ) -> str:\n \"\"\"Return a full header of the agent's status, summary, and current time.\"\"\"\n now = datetime.now() if now is None else now\n summary = self.get_summary(force_refresh=force_refresh, now=now)\n current_time_str = now.strftime(\"%B %d, %Y, %I:%M %p\")\n return (\n f\"{summary}\\nIt is {current_time_str}.\\n{self.name}'s status: {self.status}\"\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"}
+{"id": "a188dffdda6e-0", "text": "Source code for langchain.retrievers.remote_retriever\nfrom typing import List, Optional\nimport aiohttp\nimport requests\nfrom pydantic import BaseModel\nfrom langchain.schema import BaseRetriever, Document\n[docs]class RemoteLangChainRetriever(BaseRetriever, BaseModel):\n url: str\n headers: Optional[dict] = None\n input_key: str = \"message\"\n response_key: str = \"response\"\n page_content_key: str = \"page_content\"\n metadata_key: str = \"metadata\"\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n response = requests.post(\n self.url, json={self.input_key: query}, headers=self.headers\n )\n result = response.json()\n return [\n Document(\n page_content=r[self.page_content_key], metadata=r[self.metadata_key]\n )\n for r in result[self.response_key]\n ]\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n async with aiohttp.ClientSession() as session:\n async with session.request(\n \"POST\", self.url, headers=self.headers, json={self.input_key: query}\n ) as response:\n result = await response.json()\n return [\n Document(\n page_content=r[self.page_content_key], metadata=r[self.metadata_key]\n )\n for r in result[self.response_key]\n ]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/remote_retriever.html"}
+{"id": "ff7b2e1a7faa-0", "text": "Source code for langchain.retrievers.vespa_retriever\n\"\"\"Wrapper for retrieving documents from Vespa.\"\"\"\nfrom __future__ import annotations\nimport json\nfrom typing import TYPE_CHECKING, Any, Dict, List, Literal, Optional, Sequence, Union\nfrom langchain.schema import BaseRetriever, Document\nif TYPE_CHECKING:\n from vespa.application import Vespa\n[docs]class VespaRetriever(BaseRetriever):\n def __init__(\n self,\n app: Vespa,\n body: Dict,\n content_field: str,\n metadata_fields: Optional[Sequence[str]] = None,\n ):\n self._application = app\n self._query_body = body\n self._content_field = content_field\n self._metadata_fields = metadata_fields or ()\n def _query(self, body: Dict) -> List[Document]:\n response = self._application.query(body)\n if not str(response.status_code).startswith(\"2\"):\n raise RuntimeError(\n \"Could not retrieve data from Vespa. Error code: {}\".format(\n response.status_code\n )\n )\n root = response.json[\"root\"]\n if \"errors\" in root:\n raise RuntimeError(json.dumps(root[\"errors\"]))\n docs = []\n for child in response.hits:\n page_content = child[\"fields\"].pop(self._content_field, \"\")\n if self._metadata_fields == \"*\":\n metadata = child[\"fields\"]\n else:\n metadata = {mf: child[\"fields\"].get(mf) for mf in self._metadata_fields}\n metadata[\"id\"] = child[\"id\"]\n docs.append(Document(page_content=page_content, metadata=metadata))\n return docs", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/vespa_retriever.html"}
+{"id": "ff7b2e1a7faa-1", "text": "docs.append(Document(page_content=page_content, metadata=metadata))\n return docs\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n body = self._query_body.copy()\n body[\"query\"] = query\n return self._query(body)\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError\n[docs] def get_relevant_documents_with_filter(\n self, query: str, *, _filter: Optional[str] = None\n ) -> List[Document]:\n body = self._query_body.copy()\n _filter = f\" and {_filter}\" if _filter else \"\"\n body[\"yql\"] = body[\"yql\"] + _filter\n body[\"query\"] = query\n return self._query(body)\n[docs] @classmethod\n def from_params(\n cls,\n url: str,\n content_field: str,\n *,\n k: Optional[int] = None,\n metadata_fields: Union[Sequence[str], Literal[\"*\"]] = (),\n sources: Union[Sequence[str], Literal[\"*\"], None] = None,\n _filter: Optional[str] = None,\n yql: Optional[str] = None,\n **kwargs: Any,\n ) -> VespaRetriever:\n \"\"\"Instantiate retriever from params.\n Args:\n url (str): Vespa app URL.\n content_field (str): Field in results to return as Document page_content.\n k (Optional[int]): Number of Documents to return. Defaults to None.\n metadata_fields(Sequence[str] or \"*\"): Fields in results to include in\n document metadata. Defaults to empty tuple ().", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/vespa_retriever.html"}
+{"id": "ff7b2e1a7faa-2", "text": "document metadata. Defaults to empty tuple ().\n sources (Sequence[str] or \"*\" or None): Sources to retrieve\n from. Defaults to None.\n _filter (Optional[str]): Document filter condition expressed in YQL.\n Defaults to None.\n yql (Optional[str]): Full YQL query to be used. Should not be specified\n if _filter or sources are specified. Defaults to None.\n kwargs (Any): Keyword arguments added to query body.\n \"\"\"\n try:\n from vespa.application import Vespa\n except ImportError:\n raise ImportError(\n \"pyvespa is not installed, please install with `pip install pyvespa`\"\n )\n app = Vespa(url)\n body = kwargs.copy()\n if yql and (sources or _filter):\n raise ValueError(\n \"yql should only be specified if both sources and _filter are not \"\n \"specified.\"\n )\n else:\n if metadata_fields == \"*\":\n _fields = \"*\"\n body[\"summary\"] = \"short\"\n else:\n _fields = \", \".join([content_field] + list(metadata_fields or []))\n _sources = \", \".join(sources) if isinstance(sources, Sequence) else \"*\"\n _filter = f\" and {_filter}\" if _filter else \"\"\n yql = f\"select {_fields} from sources {_sources} where userQuery(){_filter}\"\n body[\"yql\"] = yql\n if k:\n body[\"hits\"] = k\n return cls(app, body, content_field, metadata_fields=metadata_fields)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/vespa_retriever.html"}
+{"id": "a30c4d813f67-0", "text": "Source code for langchain.retrievers.pupmed\nfrom typing import List\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.utilities.pupmed import PubMedAPIWrapper\n[docs]class PubMedRetriever(BaseRetriever, PubMedAPIWrapper):\n \"\"\"\n It is effectively a wrapper for PubMedAPIWrapper.\n It wraps load() to get_relevant_documents().\n It uses all PubMedAPIWrapper arguments without any change.\n \"\"\"\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n return self.load_docs(query=query)\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/pupmed.html"}
+{"id": "19f8fca17d41-0", "text": "Source code for langchain.retrievers.tfidf\n\"\"\"TF-IDF Retriever.\nLargely based on\nhttps://github.com/asvskartheek/Text-Retrieval/blob/master/TF-IDF%20Search%20Engine%20(SKLEARN).ipynb\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, Iterable, List, Optional\nfrom pydantic import BaseModel\nfrom langchain.schema import BaseRetriever, Document\n[docs]class TFIDFRetriever(BaseRetriever, BaseModel):\n vectorizer: Any\n docs: List[Document]\n tfidf_array: Any\n k: int = 4\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @classmethod\n def from_texts(\n cls,\n texts: Iterable[str],\n metadatas: Optional[Iterable[dict]] = None,\n tfidf_params: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> TFIDFRetriever:\n try:\n from sklearn.feature_extraction.text import TfidfVectorizer\n except ImportError:\n raise ImportError(\n \"Could not import scikit-learn, please install with `pip install \"\n \"scikit-learn`.\"\n )\n tfidf_params = tfidf_params or {}\n vectorizer = TfidfVectorizer(**tfidf_params)\n tfidf_array = vectorizer.fit_transform(texts)\n metadatas = metadatas or ({} for _ in texts)\n docs = [Document(page_content=t, metadata=m) for t, m in zip(texts, metadatas)]", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/tfidf.html"}
+{"id": "19f8fca17d41-1", "text": "return cls(vectorizer=vectorizer, docs=docs, tfidf_array=tfidf_array, **kwargs)\n[docs] @classmethod\n def from_documents(\n cls,\n documents: Iterable[Document],\n *,\n tfidf_params: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> TFIDFRetriever:\n texts, metadatas = zip(*((d.page_content, d.metadata) for d in documents))\n return cls.from_texts(\n texts=texts, tfidf_params=tfidf_params, metadatas=metadatas, **kwargs\n )\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n from sklearn.metrics.pairwise import cosine_similarity\n query_vec = self.vectorizer.transform(\n [query]\n ) # Ip -- (n_docs,x), Op -- (n_docs,n_Feats)\n results = cosine_similarity(self.tfidf_array, query_vec).reshape(\n (-1,)\n ) # Op -- (n_docs,1) -- Cosine Sim with each doc\n return_docs = [self.docs[i] for i in results.argsort()[-self.k :][::-1]]\n return return_docs\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/tfidf.html"}
+{"id": "c8ddbcbaa2fc-0", "text": "Source code for langchain.retrievers.elastic_search_bm25\n\"\"\"Wrapper around Elasticsearch vector database.\"\"\"\nfrom __future__ import annotations\nimport uuid\nfrom typing import Any, Iterable, List\nfrom langchain.docstore.document import Document\nfrom langchain.schema import BaseRetriever\n[docs]class ElasticSearchBM25Retriever(BaseRetriever):\n \"\"\"Wrapper around Elasticsearch using BM25 as a retrieval method.\n To connect to an Elasticsearch instance that requires login credentials,\n including Elastic Cloud, use the Elasticsearch URL format\n https://username:password@es_host:9243. For example, to connect to Elastic\n Cloud, create the Elasticsearch URL with the required authentication details and\n pass it to the ElasticVectorSearch constructor as the named parameter\n elasticsearch_url.\n You can obtain your Elastic Cloud URL and login credentials by logging in to the\n Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and\n navigating to the \"Deployments\" page.\n To obtain your Elastic Cloud password for the default \"elastic\" user:\n 1. Log in to the Elastic Cloud console at https://cloud.elastic.co\n 2. Go to \"Security\" > \"Users\"\n 3. Locate the \"elastic\" user and click \"Edit\"\n 4. Click \"Reset password\"\n 5. Follow the prompts to reset the password\n The format for Elastic Cloud URLs is\n https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.\n \"\"\"\n def __init__(self, client: Any, index_name: str):\n self.client = client\n self.index_name = index_name\n[docs] @classmethod\n def create(", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/elastic_search_bm25.html"}
+{"id": "c8ddbcbaa2fc-1", "text": "self.index_name = index_name\n[docs] @classmethod\n def create(\n cls, elasticsearch_url: str, index_name: str, k1: float = 2.0, b: float = 0.75\n ) -> ElasticSearchBM25Retriever:\n from elasticsearch import Elasticsearch\n # Create an Elasticsearch client instance\n es = Elasticsearch(elasticsearch_url)\n # Define the index settings and mappings\n settings = {\n \"analysis\": {\"analyzer\": {\"default\": {\"type\": \"standard\"}}},\n \"similarity\": {\n \"custom_bm25\": {\n \"type\": \"BM25\",\n \"k1\": k1,\n \"b\": b,\n }\n },\n }\n mappings = {\n \"properties\": {\n \"content\": {\n \"type\": \"text\",\n \"similarity\": \"custom_bm25\", # Use the custom BM25 similarity\n }\n }\n }\n # Create the index with the specified settings and mappings\n es.indices.create(index=index_name, mappings=mappings, settings=settings)\n return cls(es, index_name)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n refresh_indices: bool = True,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the retriver.\n Args:\n texts: Iterable of strings to add to the retriever.\n refresh_indices: bool to refresh ElasticSearch indices\n Returns:\n List of ids from adding the texts into the retriever.\n \"\"\"\n try:\n from elasticsearch.helpers import bulk\n except ImportError:\n raise ValueError(\n \"Could not import elasticsearch python package. \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/elastic_search_bm25.html"}
+{"id": "c8ddbcbaa2fc-2", "text": "raise ValueError(\n \"Could not import elasticsearch python package. \"\n \"Please install it with `pip install elasticsearch`.\"\n )\n requests = []\n ids = []\n for i, text in enumerate(texts):\n _id = str(uuid.uuid4())\n request = {\n \"_op_type\": \"index\",\n \"_index\": self.index_name,\n \"content\": text,\n \"_id\": _id,\n }\n ids.append(_id)\n requests.append(request)\n bulk(self.client, requests)\n if refresh_indices:\n self.client.indices.refresh(index=self.index_name)\n return ids\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n query_dict = {\"query\": {\"match\": {\"content\": query}}}\n res = self.client.search(index=self.index_name, body=query_dict)\n docs = []\n for r in res[\"hits\"][\"hits\"]:\n docs.append(Document(page_content=r[\"_source\"][\"content\"]))\n return docs\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/elastic_search_bm25.html"}
+{"id": "b0a5e8d6d52d-0", "text": "Source code for langchain.retrievers.wikipedia\nfrom typing import List\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.utilities.wikipedia import WikipediaAPIWrapper\n[docs]class WikipediaRetriever(BaseRetriever, WikipediaAPIWrapper):\n \"\"\"\n It is effectively a wrapper for WikipediaAPIWrapper.\n It wraps load() to get_relevant_documents().\n It uses all WikipediaAPIWrapper arguments without any change.\n \"\"\"\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n return self.load(query=query)\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/wikipedia.html"}
+{"id": "12f3ca851a30-0", "text": "Source code for langchain.retrievers.databerry\nfrom typing import List, Optional\nimport aiohttp\nimport requests\nfrom langchain.schema import BaseRetriever, Document\n[docs]class DataberryRetriever(BaseRetriever):\n datastore_url: str\n top_k: Optional[int]\n api_key: Optional[str]\n def __init__(\n self,\n datastore_url: str,\n top_k: Optional[int] = None,\n api_key: Optional[str] = None,\n ):\n self.datastore_url = datastore_url\n self.api_key = api_key\n self.top_k = top_k\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n response = requests.post(\n self.datastore_url,\n json={\n \"query\": query,\n **({\"topK\": self.top_k} if self.top_k is not None else {}),\n },\n headers={\n \"Content-Type\": \"application/json\",\n **(\n {\"Authorization\": f\"Bearer {self.api_key}\"}\n if self.api_key is not None\n else {}\n ),\n },\n )\n data = response.json()\n return [\n Document(\n page_content=r[\"text\"],\n metadata={\"source\": r[\"source\"], \"score\": r[\"score\"]},\n )\n for r in data[\"results\"]\n ]\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n async with aiohttp.ClientSession() as session:\n async with session.request(\n \"POST\",\n self.datastore_url,\n json={\n \"query\": query,", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/databerry.html"}
+{"id": "12f3ca851a30-1", "text": "self.datastore_url,\n json={\n \"query\": query,\n **({\"topK\": self.top_k} if self.top_k is not None else {}),\n },\n headers={\n \"Content-Type\": \"application/json\",\n **(\n {\"Authorization\": f\"Bearer {self.api_key}\"}\n if self.api_key is not None\n else {}\n ),\n },\n ) as response:\n data = await response.json()\n return [\n Document(\n page_content=r[\"text\"],\n metadata={\"source\": r[\"source\"], \"score\": r[\"score\"]},\n )\n for r in data[\"results\"]\n ]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/databerry.html"}
+{"id": "cbdfa619a38d-0", "text": "Source code for langchain.retrievers.pinecone_hybrid_search\n\"\"\"Taken from: https://docs.pinecone.io/docs/hybrid-search\"\"\"\nimport hashlib\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever, Document\ndef hash_text(text: str) -> str:\n return str(hashlib.sha256(text.encode(\"utf-8\")).hexdigest())\ndef create_index(\n contexts: List[str],\n index: Any,\n embeddings: Embeddings,\n sparse_encoder: Any,\n ids: Optional[List[str]] = None,\n metadatas: Optional[List[dict]] = None,\n) -> None:\n batch_size = 32\n _iterator = range(0, len(contexts), batch_size)\n try:\n from tqdm.auto import tqdm\n _iterator = tqdm(_iterator)\n except ImportError:\n pass\n if ids is None:\n # create unique ids using hash of the text\n ids = [hash_text(context) for context in contexts]\n for i in _iterator:\n # find end of batch\n i_end = min(i + batch_size, len(contexts))\n # extract batch\n context_batch = contexts[i:i_end]\n batch_ids = ids[i:i_end]\n metadata_batch = (\n metadatas[i:i_end] if metadatas else [{} for _ in context_batch]\n )\n # add context passages as metadata\n meta = [\n {\"context\": context, **metadata}\n for context, metadata in zip(context_batch, metadata_batch)\n ]\n # create dense vectors\n dense_embeds = embeddings.embed_documents(context_batch)", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/pinecone_hybrid_search.html"}
+{"id": "cbdfa619a38d-1", "text": "# create dense vectors\n dense_embeds = embeddings.embed_documents(context_batch)\n # create sparse vectors\n sparse_embeds = sparse_encoder.encode_documents(context_batch)\n for s in sparse_embeds:\n s[\"values\"] = [float(s1) for s1 in s[\"values\"]]\n vectors = []\n # loop through the data and create dictionaries for upserts\n for doc_id, sparse, dense, metadata in zip(\n batch_ids, sparse_embeds, dense_embeds, meta\n ):\n vectors.append(\n {\n \"id\": doc_id,\n \"sparse_values\": sparse,\n \"values\": dense,\n \"metadata\": metadata,\n }\n )\n # upload the documents to the new hybrid index\n index.upsert(vectors)\n[docs]class PineconeHybridSearchRetriever(BaseRetriever, BaseModel):\n embeddings: Embeddings\n sparse_encoder: Any\n index: Any\n top_k: int = 4\n alpha: float = 0.5\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] def add_texts(\n self,\n texts: List[str],\n ids: Optional[List[str]] = None,\n metadatas: Optional[List[dict]] = None,\n ) -> None:\n create_index(\n texts,\n self.index,\n self.embeddings,\n self.sparse_encoder,\n ids=ids,\n metadatas=metadatas,\n )\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n try:", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/pinecone_hybrid_search.html"}
+{"id": "cbdfa619a38d-2", "text": "\"\"\"Validate that api key and python package exists in environment.\"\"\"\n try:\n from pinecone_text.hybrid import hybrid_convex_scale # noqa:F401\n from pinecone_text.sparse.base_sparse_encoder import (\n BaseSparseEncoder, # noqa:F401\n )\n except ImportError:\n raise ValueError(\n \"Could not import pinecone_text python package. \"\n \"Please install it with `pip install pinecone_text`.\"\n )\n return values\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n from pinecone_text.hybrid import hybrid_convex_scale\n sparse_vec = self.sparse_encoder.encode_queries(query)\n # convert the question into a dense vector\n dense_vec = self.embeddings.embed_query(query)\n # scale alpha with hybrid_scale\n dense_vec, sparse_vec = hybrid_convex_scale(dense_vec, sparse_vec, self.alpha)\n sparse_vec[\"values\"] = [float(s1) for s1 in sparse_vec[\"values\"]]\n # query pinecone with the query parameters\n result = self.index.query(\n vector=dense_vec,\n sparse_vector=sparse_vec,\n top_k=self.top_k,\n include_metadata=True,\n )\n final_result = []\n for res in result[\"matches\"]:\n context = res[\"metadata\"].pop(\"context\")\n final_result.append(\n Document(page_content=context, metadata=res[\"metadata\"])\n )\n # return search results as json\n return final_result\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/pinecone_hybrid_search.html"}
+{"id": "da2be2fe8b23-0", "text": "Source code for langchain.retrievers.time_weighted_retriever\n\"\"\"Retriever that combines embedding similarity with recency in retrieving values.\"\"\"\nimport datetime\nfrom copy import deepcopy\nfrom typing import Any, Dict, List, Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.vectorstores.base import VectorStore\ndef _get_hours_passed(time: datetime.datetime, ref_time: datetime.datetime) -> float:\n \"\"\"Get the hours passed between two datetime objects.\"\"\"\n return (time - ref_time).total_seconds() / 3600\n[docs]class TimeWeightedVectorStoreRetriever(BaseRetriever, BaseModel):\n \"\"\"Retriever combining embedding similarity with recency.\"\"\"\n vectorstore: VectorStore\n \"\"\"The vectorstore to store documents and determine salience.\"\"\"\n search_kwargs: dict = Field(default_factory=lambda: dict(k=100))\n \"\"\"Keyword arguments to pass to the vectorstore similarity search.\"\"\"\n # TODO: abstract as a queue\n memory_stream: List[Document] = Field(default_factory=list)\n \"\"\"The memory_stream of documents to search through.\"\"\"\n decay_rate: float = Field(default=0.01)\n \"\"\"The exponential decay factor used as (1.0-decay_rate)**(hrs_passed).\"\"\"\n k: int = 4\n \"\"\"The maximum number of documents to retrieve in a given call.\"\"\"\n other_score_keys: List[str] = []\n \"\"\"Other keys in the metadata to factor into the score, e.g. 'importance'.\"\"\"\n default_salience: Optional[float] = None\n \"\"\"The salience to assign memories not retrieved from the vector store.\n None assigns no salience to documents not fetched from the vector store.\n \"\"\"\n class Config:", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html"}
+{"id": "da2be2fe8b23-1", "text": "\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n def _get_combined_score(\n self,\n document: Document,\n vector_relevance: Optional[float],\n current_time: datetime.datetime,\n ) -> float:\n \"\"\"Return the combined score for a document.\"\"\"\n hours_passed = _get_hours_passed(\n current_time,\n document.metadata[\"last_accessed_at\"],\n )\n score = (1.0 - self.decay_rate) ** hours_passed\n for key in self.other_score_keys:\n if key in document.metadata:\n score += document.metadata[key]\n if vector_relevance is not None:\n score += vector_relevance\n return score\n[docs] def get_salient_docs(self, query: str) -> Dict[int, Tuple[Document, float]]:\n \"\"\"Return documents that are salient to the query.\"\"\"\n docs_and_scores: List[Tuple[Document, float]]\n docs_and_scores = self.vectorstore.similarity_search_with_relevance_scores(\n query, **self.search_kwargs\n )\n results = {}\n for fetched_doc, relevance in docs_and_scores:\n if \"buffer_idx\" in fetched_doc.metadata:\n buffer_idx = fetched_doc.metadata[\"buffer_idx\"]\n doc = self.memory_stream[buffer_idx]\n results[buffer_idx] = (doc, relevance)\n return results\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"Return documents that are relevant to the query.\"\"\"\n current_time = datetime.datetime.now()\n docs_and_scores = {\n doc.metadata[\"buffer_idx\"]: (doc, self.default_salience)\n for doc in self.memory_stream[-self.k :]", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html"}
+{"id": "da2be2fe8b23-2", "text": "for doc in self.memory_stream[-self.k :]\n }\n # If a doc is considered salient, update the salience score\n docs_and_scores.update(self.get_salient_docs(query))\n rescored_docs = [\n (doc, self._get_combined_score(doc, relevance, current_time))\n for doc, relevance in docs_and_scores.values()\n ]\n rescored_docs.sort(key=lambda x: x[1], reverse=True)\n result = []\n # Ensure frequently accessed memories aren't forgotten\n for doc, _ in rescored_docs[: self.k]:\n # TODO: Update vector store doc once `update` method is exposed.\n buffered_doc = self.memory_stream[doc.metadata[\"buffer_idx\"]]\n buffered_doc.metadata[\"last_accessed_at\"] = current_time\n result.append(buffered_doc)\n return result\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"Return documents that are relevant to the query.\"\"\"\n raise NotImplementedError\n[docs] def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:\n \"\"\"Add documents to vectorstore.\"\"\"\n current_time = kwargs.get(\"current_time\")\n if current_time is None:\n current_time = datetime.datetime.now()\n # Avoid mutating input documents\n dup_docs = [deepcopy(d) for d in documents]\n for i, doc in enumerate(dup_docs):\n if \"last_accessed_at\" not in doc.metadata:\n doc.metadata[\"last_accessed_at\"] = current_time\n if \"created_at\" not in doc.metadata:\n doc.metadata[\"created_at\"] = current_time\n doc.metadata[\"buffer_idx\"] = len(self.memory_stream) + i", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html"}
+{"id": "da2be2fe8b23-3", "text": "doc.metadata[\"buffer_idx\"] = len(self.memory_stream) + i\n self.memory_stream.extend(dup_docs)\n return self.vectorstore.add_documents(dup_docs, **kwargs)\n[docs] async def aadd_documents(\n self, documents: List[Document], **kwargs: Any\n ) -> List[str]:\n \"\"\"Add documents to vectorstore.\"\"\"\n current_time = kwargs.get(\"current_time\")\n if current_time is None:\n current_time = datetime.datetime.now()\n # Avoid mutating input documents\n dup_docs = [deepcopy(d) for d in documents]\n for i, doc in enumerate(dup_docs):\n if \"last_accessed_at\" not in doc.metadata:\n doc.metadata[\"last_accessed_at\"] = current_time\n if \"created_at\" not in doc.metadata:\n doc.metadata[\"created_at\"] = current_time\n doc.metadata[\"buffer_idx\"] = len(self.memory_stream) + i\n self.memory_stream.extend(dup_docs)\n return await self.vectorstore.aadd_documents(dup_docs, **kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html"}
+{"id": "c2c8dd5d90a5-0", "text": "Source code for langchain.retrievers.weaviate_hybrid_search\n\"\"\"Wrapper around weaviate vector database.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom uuid import uuid4\nfrom pydantic import Extra\nfrom langchain.docstore.document import Document\nfrom langchain.schema import BaseRetriever\n[docs]class WeaviateHybridSearchRetriever(BaseRetriever):\n def __init__(\n self,\n client: Any,\n index_name: str,\n text_key: str,\n alpha: float = 0.5,\n k: int = 4,\n attributes: Optional[List[str]] = None,\n create_schema_if_missing: bool = True,\n ):\n try:\n import weaviate\n except ImportError:\n raise ImportError(\n \"Could not import weaviate python package. \"\n \"Please install it with `pip install weaviate-client`.\"\n )\n if not isinstance(client, weaviate.Client):\n raise ValueError(\n f\"client should be an instance of weaviate.Client, got {type(client)}\"\n )\n self._client = client\n self.k = k\n self.alpha = alpha\n self._index_name = index_name\n self._text_key = text_key\n self._query_attrs = [self._text_key]\n if attributes is not None:\n self._query_attrs.extend(attributes)\n if create_schema_if_missing:\n self._create_schema_if_missing()\n def _create_schema_if_missing(self) -> None:\n class_obj = {\n \"class\": self._index_name,\n \"properties\": [{\"name\": self._text_key, \"dataType\": [\"text\"]}],", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/weaviate_hybrid_search.html"}
+{"id": "c2c8dd5d90a5-1", "text": "\"properties\": [{\"name\": self._text_key, \"dataType\": [\"text\"]}],\n \"vectorizer\": \"text2vec-openai\",\n }\n if not self._client.schema.exists(self._index_name):\n self._client.schema.create_class(class_obj)\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n # added text_key\n[docs] def add_documents(self, docs: List[Document], **kwargs: Any) -> List[str]:\n \"\"\"Upload documents to Weaviate.\"\"\"\n from weaviate.util import get_valid_uuid\n with self._client.batch as batch:\n ids = []\n for i, doc in enumerate(docs):\n metadata = doc.metadata or {}\n data_properties = {self._text_key: doc.page_content, **metadata}\n # If the UUID of one of the objects already exists\n # then the existing objectwill be replaced by the new object.\n if \"uuids\" in kwargs:\n _id = kwargs[\"uuids\"][i]\n else:\n _id = get_valid_uuid(uuid4())\n batch.add_data_object(data_properties, self._index_name, _id)\n ids.append(_id)\n return ids\n[docs] def get_relevant_documents(\n self, query: str, where_filter: Optional[Dict[str, object]] = None\n ) -> List[Document]:\n \"\"\"Look up similar documents in Weaviate.\"\"\"\n query_obj = self._client.query.get(self._index_name, self._query_attrs)\n if where_filter:\n query_obj = query_obj.with_where(where_filter)", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/weaviate_hybrid_search.html"}
+{"id": "c2c8dd5d90a5-2", "text": "if where_filter:\n query_obj = query_obj.with_where(where_filter)\n result = query_obj.with_hybrid(query, alpha=self.alpha).with_limit(self.k).do()\n if \"errors\" in result:\n raise ValueError(f\"Error during query: {result['errors']}\")\n docs = []\n for res in result[\"data\"][\"Get\"][self._index_name]:\n text = res.pop(self._text_key)\n docs.append(Document(page_content=text, metadata=res))\n return docs\n[docs] async def aget_relevant_documents(\n self, query: str, where_filter: Optional[Dict[str, object]] = None\n ) -> List[Document]:\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/weaviate_hybrid_search.html"}
+{"id": "ee989284a269-0", "text": "Source code for langchain.retrievers.zep\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Dict, List, Optional\nfrom langchain.schema import BaseRetriever, Document\nif TYPE_CHECKING:\n from zep_python import MemorySearchResult\n[docs]class ZepRetriever(BaseRetriever):\n \"\"\"A Retriever implementation for the Zep long-term memory store. Search your\n user's long-term chat history with Zep.\n Note: You will need to provide the user's `session_id` to use this retriever.\n More on Zep:\n Zep provides long-term conversation storage for LLM apps. The server stores,\n summarizes, embeds, indexes, and enriches conversational AI chat\n histories, and exposes them via simple, low-latency APIs.\n For server installation instructions, see:\n https://getzep.github.io/deployment/quickstart/\n \"\"\"\n def __init__(\n self,\n session_id: str,\n url: str,\n top_k: Optional[int] = None,\n ):\n try:\n from zep_python import ZepClient\n except ImportError:\n raise ValueError(\n \"Could not import zep-python package. \"\n \"Please install it with `pip install zep-python`.\"\n )\n self.zep_client = ZepClient(base_url=url)\n self.session_id = session_id\n self.top_k = top_k\n def _search_result_to_doc(\n self, results: List[MemorySearchResult]\n ) -> List[Document]:\n return [\n Document(\n page_content=r.message.pop(\"content\"),\n metadata={\"score\": r.dist, **r.message},\n )\n for r in results", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/zep.html"}
+{"id": "ee989284a269-1", "text": ")\n for r in results\n if r.message\n ]\n[docs] def get_relevant_documents(\n self, query: str, metadata: Optional[Dict] = None\n ) -> List[Document]:\n from zep_python import MemorySearchPayload\n payload: MemorySearchPayload = MemorySearchPayload(\n text=query, metadata=metadata\n )\n results: List[MemorySearchResult] = self.zep_client.search_memory(\n self.session_id, payload, limit=self.top_k\n )\n return self._search_result_to_doc(results)\n[docs] async def aget_relevant_documents(\n self, query: str, metadata: Optional[Dict] = None\n ) -> List[Document]:\n from zep_python import MemorySearchPayload\n payload: MemorySearchPayload = MemorySearchPayload(\n text=query, metadata=metadata\n )\n results: List[MemorySearchResult] = await self.zep_client.asearch_memory(\n self.session_id, payload, limit=self.top_k\n )\n return self._search_result_to_doc(results)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/zep.html"}
+{"id": "0d4fdac9f8a2-0", "text": "Source code for langchain.retrievers.knn\n\"\"\"KNN Retriever.\nLargely based on\nhttps://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb\"\"\"\nfrom __future__ import annotations\nimport concurrent.futures\nfrom typing import Any, List, Optional\nimport numpy as np\nfrom pydantic import BaseModel\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever, Document\ndef create_index(contexts: List[str], embeddings: Embeddings) -> np.ndarray:\n with concurrent.futures.ThreadPoolExecutor() as executor:\n return np.array(list(executor.map(embeddings.embed_query, contexts)))\n[docs]class KNNRetriever(BaseRetriever, BaseModel):\n embeddings: Embeddings\n index: Any\n texts: List[str]\n k: int = 4\n relevancy_threshold: Optional[float] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @classmethod\n def from_texts(\n cls, texts: List[str], embeddings: Embeddings, **kwargs: Any\n ) -> KNNRetriever:\n index = create_index(texts, embeddings)\n return cls(embeddings=embeddings, index=index, texts=texts, **kwargs)\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n query_embeds = np.array(self.embeddings.embed_query(query))\n # calc L2 norm\n index_embeds = self.index / np.sqrt((self.index**2).sum(1, keepdims=True))\n query_embeds = query_embeds / np.sqrt((query_embeds**2).sum())\n similarities = index_embeds.dot(query_embeds)", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/knn.html"}
+{"id": "0d4fdac9f8a2-1", "text": "similarities = index_embeds.dot(query_embeds)\n sorted_ix = np.argsort(-similarities)\n denominator = np.max(similarities) - np.min(similarities) + 1e-6\n normalized_similarities = (similarities - np.min(similarities)) / denominator\n top_k_results = [\n Document(page_content=self.texts[row])\n for row in sorted_ix[0 : self.k]\n if (\n self.relevancy_threshold is None\n or normalized_similarities[row] >= self.relevancy_threshold\n )\n ]\n return top_k_results\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/knn.html"}
+{"id": "32d37bcc875c-0", "text": "Source code for langchain.retrievers.svm\n\"\"\"SMV Retriever.\nLargely based on\nhttps://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb\"\"\"\nfrom __future__ import annotations\nimport concurrent.futures\nfrom typing import Any, List, Optional\nimport numpy as np\nfrom pydantic import BaseModel\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever, Document\ndef create_index(contexts: List[str], embeddings: Embeddings) -> np.ndarray:\n with concurrent.futures.ThreadPoolExecutor() as executor:\n return np.array(list(executor.map(embeddings.embed_query, contexts)))\n[docs]class SVMRetriever(BaseRetriever, BaseModel):\n embeddings: Embeddings\n index: Any\n texts: List[str]\n k: int = 4\n relevancy_threshold: Optional[float] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @classmethod\n def from_texts(\n cls, texts: List[str], embeddings: Embeddings, **kwargs: Any\n ) -> SVMRetriever:\n index = create_index(texts, embeddings)\n return cls(embeddings=embeddings, index=index, texts=texts, **kwargs)\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n from sklearn import svm\n query_embeds = np.array(self.embeddings.embed_query(query))\n x = np.concatenate([query_embeds[None, ...], self.index])\n y = np.zeros(x.shape[0])\n y[0] = 1\n clf = svm.LinearSVC(", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/svm.html"}
+{"id": "32d37bcc875c-1", "text": "y[0] = 1\n clf = svm.LinearSVC(\n class_weight=\"balanced\", verbose=False, max_iter=10000, tol=1e-6, C=0.1\n )\n clf.fit(x, y)\n similarities = clf.decision_function(x)\n sorted_ix = np.argsort(-similarities)\n # svm.LinearSVC in scikit-learn is non-deterministic.\n # if a text is the same as a query, there is no guarantee\n # the query will be in the first index.\n # this performs a simple swap, this works because anything\n # left of the 0 should be equivalent.\n zero_index = np.where(sorted_ix == 0)[0][0]\n if zero_index != 0:\n sorted_ix[0], sorted_ix[zero_index] = sorted_ix[zero_index], sorted_ix[0]\n denominator = np.max(similarities) - np.min(similarities) + 1e-6\n normalized_similarities = (similarities - np.min(similarities)) / denominator\n top_k_results = []\n for row in sorted_ix[1 : self.k + 1]:\n if (\n self.relevancy_threshold is None\n or normalized_similarities[row] >= self.relevancy_threshold\n ):\n top_k_results.append(Document(page_content=self.texts[row - 1]))\n return top_k_results\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/svm.html"}
+{"id": "1a04a6add156-0", "text": "Source code for langchain.retrievers.metal\nfrom typing import Any, List, Optional\nfrom langchain.schema import BaseRetriever, Document\n[docs]class MetalRetriever(BaseRetriever):\n def __init__(self, client: Any, params: Optional[dict] = None):\n from metal_sdk.metal import Metal\n if not isinstance(client, Metal):\n raise ValueError(\n \"Got unexpected client, should be of type metal_sdk.metal.Metal. \"\n f\"Instead, got {type(client)}\"\n )\n self.client: Metal = client\n self.params = params or {}\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n results = self.client.search({\"text\": query}, **self.params)\n final_results = []\n for r in results[\"data\"]:\n metadata = {k: v for k, v in r.items() if k != \"text\"}\n final_results.append(Document(page_content=r[\"text\"], metadata=metadata))\n return final_results\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/metal.html"}
+{"id": "d532b8aed272-0", "text": "Source code for langchain.retrievers.azure_cognitive_search\n\"\"\"Retriever wrapper for Azure Cognitive Search.\"\"\"\nfrom __future__ import annotations\nimport json\nfrom typing import Dict, List, Optional\nimport aiohttp\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.utils import get_from_dict_or_env\n[docs]class AzureCognitiveSearchRetriever(BaseRetriever, BaseModel):\n \"\"\"Wrapper around Azure Cognitive Search.\"\"\"\n service_name: str = \"\"\n \"\"\"Name of Azure Cognitive Search service\"\"\"\n index_name: str = \"\"\n \"\"\"Name of Index inside Azure Cognitive Search service\"\"\"\n api_key: str = \"\"\n \"\"\"API Key. Both Admin and Query keys work, but for reading data it's\n recommended to use a Query key.\"\"\"\n api_version: str = \"2020-06-30\"\n \"\"\"API version\"\"\"\n aiosession: Optional[aiohttp.ClientSession] = None\n \"\"\"ClientSession, in case we want to reuse connection for better performance.\"\"\"\n content_key: str = \"content\"\n \"\"\"Key in a retrieved result to set as the Document page_content.\"\"\"\n class Config:\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that service name, index name and api key exists in environment.\"\"\"\n values[\"service_name\"] = get_from_dict_or_env(\n values, \"service_name\", \"AZURE_COGNITIVE_SEARCH_SERVICE_NAME\"\n )\n values[\"index_name\"] = get_from_dict_or_env(\n values, \"index_name\", \"AZURE_COGNITIVE_SEARCH_INDEX_NAME\"\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/azure_cognitive_search.html"}
+{"id": "d532b8aed272-1", "text": ")\n values[\"api_key\"] = get_from_dict_or_env(\n values, \"api_key\", \"AZURE_COGNITIVE_SEARCH_API_KEY\"\n )\n return values\n def _build_search_url(self, query: str) -> str:\n base_url = f\"https://{self.service_name}.search.windows.net/\"\n endpoint_path = f\"indexes/{self.index_name}/docs?api-version={self.api_version}\"\n return base_url + endpoint_path + f\"&search={query}\"\n @property\n def _headers(self) -> Dict[str, str]:\n return {\n \"Content-Type\": \"application/json\",\n \"api-key\": self.api_key,\n }\n def _search(self, query: str) -> List[dict]:\n search_url = self._build_search_url(query)\n response = requests.get(search_url, headers=self._headers)\n if response.status_code != 200:\n raise Exception(f\"Error in search request: {response}\")\n return json.loads(response.text)[\"value\"]\n async def _asearch(self, query: str) -> List[dict]:\n search_url = self._build_search_url(query)\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:\n async with session.get(search_url, headers=self._headers) as response:\n response_json = await response.json()\n else:\n async with self.aiosession.get(\n search_url, headers=self._headers\n ) as response:\n response_json = await response.json()\n return response_json[\"value\"]\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n search_results = self._search(query)\n return [", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/azure_cognitive_search.html"}
+{"id": "d532b8aed272-2", "text": "search_results = self._search(query)\n return [\n Document(page_content=result.pop(self.content_key), metadata=result)\n for result in search_results\n ]\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n search_results = await self._asearch(query)\n return [\n Document(page_content=result.pop(self.content_key), metadata=result)\n for result in search_results\n ]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/azure_cognitive_search.html"}
+{"id": "4ea7338bfc21-0", "text": "Source code for langchain.retrievers.chatgpt_plugin_retriever\nfrom __future__ import annotations\nfrom typing import List, Optional\nimport aiohttp\nimport requests\nfrom pydantic import BaseModel\nfrom langchain.schema import BaseRetriever, Document\n[docs]class ChatGPTPluginRetriever(BaseRetriever, BaseModel):\n url: str\n bearer_token: str\n top_k: int = 3\n filter: Optional[dict] = None\n aiosession: Optional[aiohttp.ClientSession] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n url, json, headers = self._create_request(query)\n response = requests.post(url, json=json, headers=headers)\n results = response.json()[\"results\"][0][\"results\"]\n docs = []\n for d in results:\n content = d.pop(\"text\")\n docs.append(Document(page_content=content, metadata=d))\n return docs\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n url, json, headers = self._create_request(query)\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:\n async with session.post(url, headers=headers, json=json) as response:\n res = await response.json()\n else:\n async with self.aiosession.post(\n url, headers=headers, json=json\n ) as response:\n res = await response.json()\n results = res[\"results\"][0][\"results\"]\n docs = []\n for d in results:\n content = d.pop(\"text\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/chatgpt_plugin_retriever.html"}
+{"id": "4ea7338bfc21-1", "text": "for d in results:\n content = d.pop(\"text\")\n docs.append(Document(page_content=content, metadata=d))\n return docs\n def _create_request(self, query: str) -> tuple[str, dict, dict]:\n url = f\"{self.url}/query\"\n json = {\n \"queries\": [\n {\n \"query\": query,\n \"filter\": self.filter,\n \"top_k\": self.top_k,\n }\n ]\n }\n headers = {\n \"Content-Type\": \"application/json\",\n \"Authorization\": f\"Bearer {self.bearer_token}\",\n }\n return url, json, headers\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/chatgpt_plugin_retriever.html"}
+{"id": "745e96522f02-0", "text": "Source code for langchain.retrievers.merger_retriever\nfrom typing import List\nfrom langchain.schema import BaseRetriever, Document\n[docs]class MergerRetriever(BaseRetriever):\n \"\"\"\n This class merges the results of multiple retrievers.\n Args:\n retrievers: A list of retrievers to merge.\n \"\"\"\n def __init__(\n self,\n retrievers: List[BaseRetriever],\n ):\n \"\"\"\n Initialize the MergerRetriever class.\n Args:\n retrievers: A list of retrievers to merge.\n \"\"\"\n self.retrievers = retrievers\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"\n Get the relevant documents for a given query.\n Args:\n query: The query to search for.\n Returns:\n A list of relevant documents.\n \"\"\"\n # Merge the results of the retrievers.\n merged_documents = self.merge_documents(query)\n return merged_documents\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"\n Asynchronously get the relevant documents for a given query.\n Args:\n query: The query to search for.\n Returns:\n A list of relevant documents.\n \"\"\"\n # Merge the results of the retrievers.\n merged_documents = await self.amerge_documents(query)\n return merged_documents\n[docs] def merge_documents(self, query: str) -> List[Document]:\n \"\"\"\n Merge the results of the retrievers.\n Args:\n query: The query to search for.\n Returns:\n A list of merged documents.\n \"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/merger_retriever.html"}
+{"id": "745e96522f02-1", "text": "Returns:\n A list of merged documents.\n \"\"\"\n # Get the results of all retrievers.\n retriever_docs = [\n retriever.get_relevant_documents(query) for retriever in self.retrievers\n ]\n # Merge the results of the retrievers.\n merged_documents = []\n max_docs = max(len(docs) for docs in retriever_docs)\n for i in range(max_docs):\n for retriever, doc in zip(self.retrievers, retriever_docs):\n if i < len(doc):\n merged_documents.append(doc[i])\n return merged_documents\n[docs] async def amerge_documents(self, query: str) -> List[Document]:\n \"\"\"\n Asynchronously merge the results of the retrievers.\n Args:\n query: The query to search for.\n Returns:\n A list of merged documents.\n \"\"\"\n # Get the results of all retrievers.\n retriever_docs = [\n await retriever.aget_relevant_documents(query)\n for retriever in self.retrievers\n ]\n # Merge the results of the retrievers.\n merged_documents = []\n max_docs = max(len(docs) for docs in retriever_docs)\n for i in range(max_docs):\n for retriever, doc in zip(self.retrievers, retriever_docs):\n if i < len(doc):\n merged_documents.append(doc[i])\n return merged_documents\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/merger_retriever.html"}
+{"id": "55e6d3c7e2be-0", "text": "Source code for langchain.retrievers.contextual_compression\n\"\"\"Retriever that wraps a base retriever and filters the results.\"\"\"\nfrom typing import List\nfrom pydantic import BaseModel, Extra\nfrom langchain.retrievers.document_compressors.base import (\n BaseDocumentCompressor,\n)\nfrom langchain.schema import BaseRetriever, Document\n[docs]class ContextualCompressionRetriever(BaseRetriever, BaseModel):\n \"\"\"Retriever that wraps a base retriever and compresses the results.\"\"\"\n base_compressor: BaseDocumentCompressor\n \"\"\"Compressor for compressing retrieved documents.\"\"\"\n base_retriever: BaseRetriever\n \"\"\"Base Retriever to use for getting relevant documents.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"Get documents relevant for a query.\n Args:\n query: string to find relevant documents for\n Returns:\n Sequence of relevant documents\n \"\"\"\n docs = self.base_retriever.get_relevant_documents(query)\n compressed_docs = self.base_compressor.compress_documents(docs, query)\n return list(compressed_docs)\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"Get documents relevant for a query.\n Args:\n query: string to find relevant documents for\n Returns:\n List of relevant documents\n \"\"\"\n docs = await self.base_retriever.aget_relevant_documents(query)\n compressed_docs = await self.base_compressor.acompress_documents(docs, query)\n return list(compressed_docs)\nBy Harrison Chase", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/contextual_compression.html"}
+{"id": "55e6d3c7e2be-1", "text": "return list(compressed_docs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/contextual_compression.html"}
+{"id": "fc90dc88438a-0", "text": "Source code for langchain.retrievers.arxiv\nfrom typing import List\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.utilities.arxiv import ArxivAPIWrapper\n[docs]class ArxivRetriever(BaseRetriever, ArxivAPIWrapper):\n \"\"\"\n It is effectively a wrapper for ArxivAPIWrapper.\n It wraps load() to get_relevant_documents().\n It uses all ArxivAPIWrapper arguments without any change.\n \"\"\"\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n return self.load(query=query)\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/arxiv.html"}
+{"id": "faeac3fc9c0f-0", "text": "Source code for langchain.retrievers.aws_kendra_index_retriever\n\"\"\"Retriever wrapper for AWS Kendra.\"\"\"\nimport re\nfrom typing import Any, Dict, List\nfrom langchain.schema import BaseRetriever, Document\n[docs]class AwsKendraIndexRetriever(BaseRetriever):\n \"\"\"Wrapper around AWS Kendra.\"\"\"\n kendraindex: str\n \"\"\"Kendra index id\"\"\"\n k: int\n \"\"\"Number of documents to query for.\"\"\"\n languagecode: str\n \"\"\"Languagecode used for querying.\"\"\"\n kclient: Any\n \"\"\" boto3 client for Kendra. \"\"\"\n def __init__(\n self, kclient: Any, kendraindex: str, k: int = 3, languagecode: str = \"en\"\n ):\n self.kendraindex = kendraindex\n self.k = k\n self.languagecode = languagecode\n self.kclient = kclient\n def _clean_result(self, res_text: str) -> str:\n return re.sub(\"\\s+\", \" \", res_text).replace(\"...\", \"\")\n def _get_top_n_results(self, resp: Dict, count: int) -> Document:\n r = resp[\"ResultItems\"][count]\n doc_title = r[\"DocumentTitle\"][\"Text\"]\n doc_uri = r[\"DocumentURI\"]\n r_type = r[\"Type\"]\n if (\n r[\"AdditionalAttributes\"]\n and r[\"AdditionalAttributes\"][0][\"Key\"] == \"AnswerText\"\n ):\n res_text = r[\"AdditionalAttributes\"][0][\"Value\"][\"TextWithHighlightsValue\"][\n \"Text\"\n ]\n else:\n res_text = r[\"DocumentExcerpt\"][\"Text\"]\n doc_excerpt = self._clean_result(res_text)", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/aws_kendra_index_retriever.html"}
+{"id": "faeac3fc9c0f-1", "text": "doc_excerpt = self._clean_result(res_text)\n combined_text = f\"\"\"Document Title: {doc_title}\nDocument Excerpt: {doc_excerpt}\n\"\"\"\n return Document(\n page_content=combined_text,\n metadata={\n \"source\": doc_uri,\n \"title\": doc_title,\n \"excerpt\": doc_excerpt,\n \"type\": r_type,\n },\n )\n def _kendra_query(self, kquery: str) -> List[Document]:\n response = self.kclient.query(\n IndexId=self.kendraindex,\n QueryText=kquery.strip(),\n AttributeFilter={\n \"AndAllFilters\": [\n {\n \"EqualsTo\": {\n \"Key\": \"_language_code\",\n \"Value\": {\n \"StringValue\": self.languagecode,\n },\n }\n }\n ]\n },\n )\n if len(response[\"ResultItems\"]) > self.k:\n r_count = self.k\n else:\n r_count = len(response[\"ResultItems\"])\n return [self._get_top_n_results(response, i) for i in range(0, r_count)]\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"Run search on Kendra index and get top k documents\n docs = get_relevant_documents('This is my query')\n \"\"\"\n return self._kendra_query(query)\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError(\"AwsKendraIndexRetriever does not support async\")\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/aws_kendra_index_retriever.html"}
+{"id": "95f2bf39d14e-0", "text": "Source code for langchain.retrievers.document_compressors.base\n\"\"\"Interface for retrieved document compressors.\"\"\"\nfrom abc import ABC, abstractmethod\nfrom typing import List, Sequence, Union\nfrom pydantic import BaseModel\nfrom langchain.schema import BaseDocumentTransformer, Document\nclass BaseDocumentCompressor(BaseModel, ABC):\n \"\"\"Base abstraction interface for document compression.\"\"\"\n @abstractmethod\n def compress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Compress retrieved documents given the query context.\"\"\"\n @abstractmethod\n async def acompress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Compress retrieved documents given the query context.\"\"\"\n[docs]class DocumentCompressorPipeline(BaseDocumentCompressor):\n \"\"\"Document compressor that uses a pipeline of transformers.\"\"\"\n transformers: List[Union[BaseDocumentTransformer, BaseDocumentCompressor]]\n \"\"\"List of document filters that are chained together and run in sequence.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def compress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Transform a list of documents.\"\"\"\n for _transformer in self.transformers:\n if isinstance(_transformer, BaseDocumentCompressor):\n documents = _transformer.compress_documents(documents, query)\n elif isinstance(_transformer, BaseDocumentTransformer):\n documents = _transformer.transform_documents(documents)\n else:\n raise ValueError(f\"Got unexpected transformer type: {_transformer}\")\n return documents\n[docs] async def acompress_documents(\n self, documents: Sequence[Document], query: str", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/base.html"}
+{"id": "95f2bf39d14e-1", "text": "self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Compress retrieved documents given the query context.\"\"\"\n for _transformer in self.transformers:\n if isinstance(_transformer, BaseDocumentCompressor):\n documents = await _transformer.acompress_documents(documents, query)\n elif isinstance(_transformer, BaseDocumentTransformer):\n documents = await _transformer.atransform_documents(documents)\n else:\n raise ValueError(f\"Got unexpected transformer type: {_transformer}\")\n return documents\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/base.html"}
+{"id": "51edcea0bcee-0", "text": "Source code for langchain.retrievers.document_compressors.cohere_rerank\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Dict, Sequence\nfrom pydantic import Extra, root_validator\nfrom langchain.retrievers.document_compressors.base import BaseDocumentCompressor\nfrom langchain.schema import Document\nfrom langchain.utils import get_from_dict_or_env\nif TYPE_CHECKING:\n from cohere import Client\nelse:\n # We do to avoid pydantic annotation issues when actually instantiating\n # while keeping this import optional\n try:\n from cohere import Client\n except ImportError:\n pass\n[docs]class CohereRerank(BaseDocumentCompressor):\n client: Client\n top_n: int = 3\n model: str = \"rerank-english-v2.0\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n cohere_api_key = get_from_dict_or_env(\n values, \"cohere_api_key\", \"COHERE_API_KEY\"\n )\n try:\n import cohere\n values[\"client\"] = cohere.Client(cohere_api_key)\n except ImportError:\n raise ImportError(\n \"Could not import cohere python package. \"\n \"Please install it with `pip install cohere`.\"\n )\n return values\n[docs] def compress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n if len(documents) == 0: # to avoid empty api call\n return []", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/cohere_rerank.html"}
+{"id": "51edcea0bcee-1", "text": "return []\n doc_list = list(documents)\n _docs = [d.page_content for d in doc_list]\n results = self.client.rerank(\n model=self.model, query=query, documents=_docs, top_n=self.top_n\n )\n final_results = []\n for r in results:\n doc = doc_list[r.index]\n doc.metadata[\"relevance_score\"] = r.relevance_score\n final_results.append(doc)\n return final_results\n[docs] async def acompress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/cohere_rerank.html"}
+{"id": "de3863e2f03c-0", "text": "Source code for langchain.retrievers.document_compressors.embeddings_filter\n\"\"\"Document compressor that uses embeddings to drop documents unrelated to the query.\"\"\"\nfrom typing import Callable, Dict, Optional, Sequence\nimport numpy as np\nfrom pydantic import root_validator\nfrom langchain.document_transformers import (\n _get_embeddings_from_stateful_docs,\n get_stateful_documents,\n)\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.math_utils import cosine_similarity\nfrom langchain.retrievers.document_compressors.base import (\n BaseDocumentCompressor,\n)\nfrom langchain.schema import Document\n[docs]class EmbeddingsFilter(BaseDocumentCompressor):\n embeddings: Embeddings\n \"\"\"Embeddings to use for embedding document contents and queries.\"\"\"\n similarity_fn: Callable = cosine_similarity\n \"\"\"Similarity function for comparing documents. Function expected to take as input\n two matrices (List[List[float]]) and return a matrix of scores where higher values\n indicate greater similarity.\"\"\"\n k: Optional[int] = 20\n \"\"\"The number of relevant documents to return. Can be set to None, in which case\n `similarity_threshold` must be specified. Defaults to 20.\"\"\"\n similarity_threshold: Optional[float]\n \"\"\"Threshold for determining when two documents are similar enough\n to be considered redundant. Defaults to None, must be specified if `k` is set\n to None.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n @root_validator()\n def validate_params(cls, values: Dict) -> Dict:\n \"\"\"Validate similarity parameters.\"\"\"\n if values[\"k\"] is None and values[\"similarity_threshold\"] is None:\n raise ValueError(\"Must specify one of `k` or `similarity_threshold`.\")\n return values", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/embeddings_filter.html"}
+{"id": "de3863e2f03c-1", "text": "return values\n[docs] def compress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Filter documents based on similarity of their embeddings to the query.\"\"\"\n stateful_documents = get_stateful_documents(documents)\n embedded_documents = _get_embeddings_from_stateful_docs(\n self.embeddings, stateful_documents\n )\n embedded_query = self.embeddings.embed_query(query)\n similarity = self.similarity_fn([embedded_query], embedded_documents)[0]\n included_idxs = np.arange(len(embedded_documents))\n if self.k is not None:\n included_idxs = np.argsort(similarity)[::-1][: self.k]\n if self.similarity_threshold is not None:\n similar_enough = np.where(\n similarity[included_idxs] > self.similarity_threshold\n )\n included_idxs = included_idxs[similar_enough]\n return [stateful_documents[i] for i in included_idxs]\n[docs] async def acompress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Filter down documents.\"\"\"\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/embeddings_filter.html"}
+{"id": "ba936cb49a3e-0", "text": "Source code for langchain.retrievers.document_compressors.chain_filter\n\"\"\"Filter that uses an LLM to drop documents that aren't relevant to the query.\"\"\"\nfrom typing import Any, Callable, Dict, Optional, Sequence\nfrom langchain import BasePromptTemplate, LLMChain, PromptTemplate\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.output_parsers.boolean import BooleanOutputParser\nfrom langchain.retrievers.document_compressors.base import BaseDocumentCompressor\nfrom langchain.retrievers.document_compressors.chain_filter_prompt import (\n prompt_template,\n)\nfrom langchain.schema import Document\ndef _get_default_chain_prompt() -> PromptTemplate:\n return PromptTemplate(\n template=prompt_template,\n input_variables=[\"question\", \"context\"],\n output_parser=BooleanOutputParser(),\n )\ndef default_get_input(query: str, doc: Document) -> Dict[str, Any]:\n \"\"\"Return the compression chain input.\"\"\"\n return {\"question\": query, \"context\": doc.page_content}\n[docs]class LLMChainFilter(BaseDocumentCompressor):\n \"\"\"Filter that drops documents that aren't relevant to the query.\"\"\"\n llm_chain: LLMChain\n \"\"\"LLM wrapper to use for filtering documents. \n The chain prompt is expected to have a BooleanOutputParser.\"\"\"\n get_input: Callable[[str, Document], dict] = default_get_input\n \"\"\"Callable for constructing the chain input from the query and a Document.\"\"\"\n[docs] def compress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Filter down documents based on their relevance to the query.\"\"\"\n filtered_docs = []\n for doc in documents:\n _input = self.get_input(query, doc)\n include_doc = self.llm_chain.predict_and_parse(**_input)", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/chain_filter.html"}
+{"id": "ba936cb49a3e-1", "text": "include_doc = self.llm_chain.predict_and_parse(**_input)\n if include_doc:\n filtered_docs.append(doc)\n return filtered_docs\n[docs] async def acompress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Filter down documents.\"\"\"\n raise NotImplementedError\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: Optional[BasePromptTemplate] = None,\n **kwargs: Any\n ) -> \"LLMChainFilter\":\n _prompt = prompt if prompt is not None else _get_default_chain_prompt()\n llm_chain = LLMChain(llm=llm, prompt=_prompt)\n return cls(llm_chain=llm_chain, **kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/chain_filter.html"}
+{"id": "8ce6b3c39fe3-0", "text": "Source code for langchain.retrievers.document_compressors.chain_extract\n\"\"\"DocumentFilter that uses an LLM chain to extract the relevant parts of documents.\"\"\"\nfrom __future__ import annotations\nimport asyncio\nfrom typing import Any, Callable, Dict, Optional, Sequence\nfrom langchain import LLMChain, PromptTemplate\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.retrievers.document_compressors.base import BaseDocumentCompressor\nfrom langchain.retrievers.document_compressors.chain_extract_prompt import (\n prompt_template,\n)\nfrom langchain.schema import BaseOutputParser, Document\ndef default_get_input(query: str, doc: Document) -> Dict[str, Any]:\n \"\"\"Return the compression chain input.\"\"\"\n return {\"question\": query, \"context\": doc.page_content}\nclass NoOutputParser(BaseOutputParser[str]):\n \"\"\"Parse outputs that could return a null string of some sort.\"\"\"\n no_output_str: str = \"NO_OUTPUT\"\n def parse(self, text: str) -> str:\n cleaned_text = text.strip()\n if cleaned_text == self.no_output_str:\n return \"\"\n return cleaned_text\ndef _get_default_chain_prompt() -> PromptTemplate:\n output_parser = NoOutputParser()\n template = prompt_template.format(no_output_str=output_parser.no_output_str)\n return PromptTemplate(\n template=template,\n input_variables=[\"question\", \"context\"],\n output_parser=output_parser,\n )\n[docs]class LLMChainExtractor(BaseDocumentCompressor):\n llm_chain: LLMChain\n \"\"\"LLM wrapper to use for compressing documents.\"\"\"\n get_input: Callable[[str, Document], dict] = default_get_input\n \"\"\"Callable for constructing the chain input from the query and a Document.\"\"\"\n[docs] def compress_documents(", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/chain_extract.html"}
+{"id": "8ce6b3c39fe3-1", "text": "[docs] def compress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Compress page content of raw documents.\"\"\"\n compressed_docs = []\n for doc in documents:\n _input = self.get_input(query, doc)\n output = self.llm_chain.predict_and_parse(**_input)\n if len(output) == 0:\n continue\n compressed_docs.append(Document(page_content=output, metadata=doc.metadata))\n return compressed_docs\n[docs] async def acompress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Compress page content of raw documents asynchronously.\"\"\"\n outputs = await asyncio.gather(\n *[\n self.llm_chain.apredict_and_parse(**self.get_input(query, doc))\n for doc in documents\n ]\n )\n compressed_docs = []\n for i, doc in enumerate(documents):\n if len(outputs[i]) == 0:\n continue\n compressed_docs.append(\n Document(page_content=outputs[i], metadata=doc.metadata)\n )\n return compressed_docs\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: Optional[PromptTemplate] = None,\n get_input: Optional[Callable[[str, Document], str]] = None,\n llm_chain_kwargs: Optional[dict] = None,\n ) -> LLMChainExtractor:\n \"\"\"Initialize from LLM.\"\"\"\n _prompt = prompt if prompt is not None else _get_default_chain_prompt()\n _get_input = get_input if get_input is not None else default_get_input", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/chain_extract.html"}
+{"id": "8ce6b3c39fe3-2", "text": "_get_input = get_input if get_input is not None else default_get_input\n llm_chain = LLMChain(llm=llm, prompt=_prompt, **(llm_chain_kwargs or {}))\n return cls(llm_chain=llm_chain, get_input=_get_input)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/chain_extract.html"}
+{"id": "f06de8e32e77-0", "text": "Source code for langchain.retrievers.self_query.base\n\"\"\"Retriever that generates and executes structured queries over its own data source.\"\"\"\nfrom typing import Any, Dict, List, Optional, Type, cast\nfrom pydantic import BaseModel, Field, root_validator\nfrom langchain import LLMChain\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains.query_constructor.base import load_query_constructor_chain\nfrom langchain.chains.query_constructor.ir import StructuredQuery, Visitor\nfrom langchain.chains.query_constructor.schema import AttributeInfo\nfrom langchain.retrievers.self_query.chroma import ChromaTranslator\nfrom langchain.retrievers.self_query.pinecone import PineconeTranslator\nfrom langchain.retrievers.self_query.qdrant import QdrantTranslator\nfrom langchain.retrievers.self_query.weaviate import WeaviateTranslator\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.vectorstores import Chroma, Pinecone, Qdrant, VectorStore, Weaviate\ndef _get_builtin_translator(vectorstore: VectorStore) -> Visitor:\n \"\"\"Get the translator class corresponding to the vector store class.\"\"\"\n vectorstore_cls = vectorstore.__class__\n BUILTIN_TRANSLATORS: Dict[Type[VectorStore], Type[Visitor]] = {\n Pinecone: PineconeTranslator,\n Chroma: ChromaTranslator,\n Weaviate: WeaviateTranslator,\n Qdrant: QdrantTranslator,\n }\n if vectorstore_cls not in BUILTIN_TRANSLATORS:\n raise ValueError(\n f\"Self query retriever with Vector Store type {vectorstore_cls}\"\n f\" not supported.\"\n )\n if isinstance(vectorstore, Qdrant):\n return QdrantTranslator(metadata_key=vectorstore.metadata_payload_key)", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/base.html"}
+{"id": "f06de8e32e77-1", "text": "return QdrantTranslator(metadata_key=vectorstore.metadata_payload_key)\n return BUILTIN_TRANSLATORS[vectorstore_cls]()\n[docs]class SelfQueryRetriever(BaseRetriever, BaseModel):\n \"\"\"Retriever that wraps around a vector store and uses an LLM to generate\n the vector store queries.\"\"\"\n vectorstore: VectorStore\n \"\"\"The underlying vector store from which documents will be retrieved.\"\"\"\n llm_chain: LLMChain\n \"\"\"The LLMChain for generating the vector store queries.\"\"\"\n search_type: str = \"similarity\"\n \"\"\"The search type to perform on the vector store.\"\"\"\n search_kwargs: dict = Field(default_factory=dict)\n \"\"\"Keyword arguments to pass in to the vector store search.\"\"\"\n structured_query_translator: Visitor\n \"\"\"Translator for turning internal query language into vectorstore search params.\"\"\"\n verbose: bool = False\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def validate_translator(cls, values: Dict) -> Dict:\n \"\"\"Validate translator.\"\"\"\n if \"structured_query_translator\" not in values:\n values[\"structured_query_translator\"] = _get_builtin_translator(\n values[\"vectorstore\"]\n )\n return values\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"Get documents relevant for a query.\n Args:\n query: string to find relevant documents for\n Returns:\n List of relevant documents\n \"\"\"\n inputs = self.llm_chain.prep_inputs({\"query\": query})\n structured_query = cast(\n StructuredQuery, self.llm_chain.predict_and_parse(callbacks=None, **inputs)\n )\n if self.verbose:", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/base.html"}
+{"id": "f06de8e32e77-2", "text": ")\n if self.verbose:\n print(structured_query)\n new_query, new_kwargs = self.structured_query_translator.visit_structured_query(\n structured_query\n )\n if structured_query.limit is not None:\n new_kwargs[\"k\"] = structured_query.limit\n search_kwargs = {**self.search_kwargs, **new_kwargs}\n docs = self.vectorstore.search(new_query, self.search_type, **search_kwargs)\n return docs\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n vectorstore: VectorStore,\n document_contents: str,\n metadata_field_info: List[AttributeInfo],\n structured_query_translator: Optional[Visitor] = None,\n chain_kwargs: Optional[Dict] = None,\n enable_limit: bool = False,\n **kwargs: Any,\n ) -> \"SelfQueryRetriever\":\n if structured_query_translator is None:\n structured_query_translator = _get_builtin_translator(vectorstore)\n chain_kwargs = chain_kwargs or {}\n if \"allowed_comparators\" not in chain_kwargs:\n chain_kwargs[\n \"allowed_comparators\"\n ] = structured_query_translator.allowed_comparators\n if \"allowed_operators\" not in chain_kwargs:\n chain_kwargs[\n \"allowed_operators\"\n ] = structured_query_translator.allowed_operators\n llm_chain = load_query_constructor_chain(\n llm,\n document_contents,\n metadata_field_info,\n enable_limit=enable_limit,\n **chain_kwargs,\n )\n return cls(", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/base.html"}
+{"id": "f06de8e32e77-3", "text": "**chain_kwargs,\n )\n return cls(\n llm_chain=llm_chain,\n vectorstore=vectorstore,\n structured_query_translator=structured_query_translator,\n **kwargs,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/base.html"}
+{"id": "63674000c00a-0", "text": "Source code for langchain.vectorstores.redis\n\"\"\"Wrapper around Redis vector database.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nimport uuid\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n Iterable,\n List,\n Literal,\n Mapping,\n Optional,\n Tuple,\n Type,\n)\nimport numpy as np\nfrom pydantic import BaseModel, root_validator\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nfrom langchain.vectorstores.base import VectorStore, VectorStoreRetriever\nlogger = logging.getLogger(__name__)\nif TYPE_CHECKING:\n from redis.client import Redis as RedisType\n from redis.commands.search.query import Query\n# required modules\nREDIS_REQUIRED_MODULES = [\n {\"name\": \"search\", \"ver\": 20400},\n {\"name\": \"searchlight\", \"ver\": 20400},\n]\n# distance mmetrics\nREDIS_DISTANCE_METRICS = Literal[\"COSINE\", \"IP\", \"L2\"]\ndef _check_redis_module_exist(client: RedisType, required_modules: List[dict]) -> None:\n \"\"\"Check if the correct Redis modules are installed.\"\"\"\n installed_modules = client.module_list()\n installed_modules = {\n module[b\"name\"].decode(\"utf-8\"): module for module in installed_modules\n }\n for module in required_modules:\n if module[\"name\"] in installed_modules and int(\n installed_modules[module[\"name\"]][b\"ver\"]\n ) >= int(module[\"ver\"]):\n return\n # otherwise raise error\n error_message = (\n \"Redis cannot be used as a vector database without RediSearch >=2.4\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"}
+{"id": "63674000c00a-1", "text": "\"Redis cannot be used as a vector database without RediSearch >=2.4\"\n \"Please head to https://redis.io/docs/stack/search/quick_start/\"\n \"to know more about installing the RediSearch module within Redis Stack.\"\n )\n logging.error(error_message)\n raise ValueError(error_message)\ndef _check_index_exists(client: RedisType, index_name: str) -> bool:\n \"\"\"Check if Redis index exists.\"\"\"\n try:\n client.ft(index_name).info()\n except: # noqa: E722\n logger.info(\"Index does not exist\")\n return False\n logger.info(\"Index already exists\")\n return True\ndef _redis_key(prefix: str) -> str:\n \"\"\"Redis key schema for a given prefix.\"\"\"\n return f\"{prefix}:{uuid.uuid4().hex}\"\ndef _redis_prefix(index_name: str) -> str:\n \"\"\"Redis key prefix for a given index.\"\"\"\n return f\"doc:{index_name}\"\ndef _default_relevance_score(val: float) -> float:\n return 1 - val\n[docs]class Redis(VectorStore):\n \"\"\"Wrapper around Redis vector database.\n To use, you should have the ``redis`` python package installed.\n Example:\n .. code-block:: python\n from langchain.vectorstores import Redis\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n vectorstore = Redis(\n redis_url=\"redis://username:password@localhost:6379\"\n index_name=\"my-index\",\n embedding_function=embeddings.embed_query,\n )\n \"\"\"\n def __init__(\n self,\n redis_url: str,\n index_name: str,\n embedding_function: Callable,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"}
+{"id": "63674000c00a-2", "text": "index_name: str,\n embedding_function: Callable,\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n vector_key: str = \"content_vector\",\n relevance_score_fn: Optional[\n Callable[[float], float]\n ] = _default_relevance_score,\n **kwargs: Any,\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n try:\n import redis\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. \"\n \"Please install it with `pip install redis>=4.1.0`.\"\n )\n self.embedding_function = embedding_function\n self.index_name = index_name\n try:\n # connect to redis from url\n redis_client = redis.from_url(redis_url, **kwargs)\n # check if redis has redisearch module installed\n _check_redis_module_exist(redis_client, REDIS_REQUIRED_MODULES)\n except ValueError as e:\n raise ValueError(f\"Redis failed to connect: {e}\")\n self.client = redis_client\n self.content_key = content_key\n self.metadata_key = metadata_key\n self.vector_key = vector_key\n self.relevance_score_fn = relevance_score_fn\n def _create_index(\n self, dim: int = 1536, distance_metric: REDIS_DISTANCE_METRICS = \"COSINE\"\n ) -> None:\n try:\n from redis.commands.search.field import TextField, VectorField\n from redis.commands.search.indexDefinition import IndexDefinition, IndexType\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. \"\n \"Please install it with `pip install redis`.\"\n )\n # Check if index exists\n if not _check_index_exists(self.client, self.index_name):", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"}
+{"id": "63674000c00a-3", "text": "if not _check_index_exists(self.client, self.index_name):\n # Define schema\n schema = (\n TextField(name=self.content_key),\n TextField(name=self.metadata_key),\n VectorField(\n self.vector_key,\n \"FLAT\",\n {\n \"TYPE\": \"FLOAT32\",\n \"DIM\": dim,\n \"DISTANCE_METRIC\": distance_metric,\n },\n ),\n )\n prefix = _redis_prefix(self.index_name)\n # Create Redis Index\n self.client.ft(self.index_name).create_index(\n fields=schema,\n definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH),\n )\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n embeddings: Optional[List[List[float]]] = None,\n keys: Optional[List[str]] = None,\n batch_size: int = 1000,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Add more texts to the vectorstore.\n Args:\n texts (Iterable[str]): Iterable of strings/text to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n Defaults to None.\n embeddings (Optional[List[List[float]]], optional): Optional pre-generated\n embeddings. Defaults to None.\n keys (Optional[List[str]], optional): Optional key values to use as ids.\n Defaults to None.\n batch_size (int, optional): Batch size to use for writes. Defaults to 1000.\n Returns:\n List[str]: List of ids added to the vectorstore\n \"\"\"\n ids = []\n prefix = _redis_prefix(self.index_name)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"}
+{"id": "63674000c00a-4", "text": "\"\"\"\n ids = []\n prefix = _redis_prefix(self.index_name)\n # Write data to redis\n pipeline = self.client.pipeline(transaction=False)\n for i, text in enumerate(texts):\n # Use provided values by default or fallback\n key = keys[i] if keys else _redis_key(prefix)\n metadata = metadatas[i] if metadatas else {}\n embedding = embeddings[i] if embeddings else self.embedding_function(text)\n pipeline.hset(\n key,\n mapping={\n self.content_key: text,\n self.vector_key: np.array(embedding, dtype=np.float32).tobytes(),\n self.metadata_key: json.dumps(metadata),\n },\n )\n ids.append(key)\n # Write batch\n if i % batch_size == 0:\n pipeline.execute()\n # Cleanup final batch\n pipeline.execute()\n return ids\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"\n Returns the most similar indexed documents to the query text.\n Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of documents to return. Default is 4.\n Returns:\n List[Document]: A list of documents that are most similar to the query text.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k=k)\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search_limit_score(\n self, query: str, k: int = 4, score_threshold: float = 0.2, **kwargs: Any\n ) -> List[Document]:\n \"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"}
+{"id": "63674000c00a-5", "text": ") -> List[Document]:\n \"\"\"\n Returns the most similar indexed documents to the query text within the\n score_threshold range.\n Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of documents to return. Default is 4.\n score_threshold (float): The minimum matching score required for a document\n to be considered a match. Defaults to 0.2.\n Because the similarity calculation algorithm is based on cosine similarity,\n the smaller the angle, the higher the similarity.\n Returns:\n List[Document]: A list of documents that are most similar to the query text,\n including the match score for each document.\n Note:\n If there are no documents that satisfy the score_threshold value,\n an empty list is returned.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k=k)\n return [doc for doc, score in docs_and_scores if score < score_threshold]\n def _prepare_query(self, k: int) -> Query:\n try:\n from redis.commands.search.query import Query\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. \"\n \"Please install it with `pip install redis`.\"\n )\n # Prepare the Query\n hybrid_fields = \"*\"\n base_query = (\n f\"{hybrid_fields}=>[KNN {k} @{self.vector_key} $vector AS vector_score]\"\n )\n return_fields = [self.metadata_key, self.content_key, \"vector_score\"]\n return (\n Query(base_query)\n .return_fields(*return_fields)\n .sort_by(\"vector_score\")\n .paging(0, k)\n .dialect(2)\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"}
+{"id": "63674000c00a-6", "text": ".paging(0, k)\n .dialect(2)\n )\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n # Creates embedding vector from user query\n embedding = self.embedding_function(query)\n # Creates Redis query\n redis_query = self._prepare_query(k)\n params_dict: Mapping[str, str] = {\n \"vector\": np.array(embedding) # type: ignore\n .astype(dtype=np.float32)\n .tobytes()\n }\n # Perform vector search\n results = self.client.ft(self.index_name).search(redis_query, params_dict)\n # Prepare document results\n docs = [\n (\n Document(\n page_content=result.content, metadata=json.loads(result.metadata)\n ),\n float(result.vector_score),\n )\n for result in results.docs\n ]\n return docs\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores, normalized on a scale from 0 to 1.\n 0 is dissimilar, 1 is most similar.\n \"\"\"\n if self.relevance_score_fn is None:\n raise ValueError(\n \"relevance_score_fn must be provided to\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"}
+{"id": "63674000c00a-7", "text": "raise ValueError(\n \"relevance_score_fn must be provided to\"\n \" Redis constructor to normalize scores\"\n )\n docs_and_scores = self.similarity_search_with_score(query, k=k)\n return [(doc, self.relevance_score_fn(score)) for doc, score in docs_and_scores]\n[docs] @classmethod\n def from_texts_return_keys(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n index_name: Optional[str] = None,\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n vector_key: str = \"content_vector\",\n distance_metric: REDIS_DISTANCE_METRICS = \"COSINE\",\n **kwargs: Any,\n ) -> Tuple[Redis, List[str]]:\n \"\"\"Create a Redis vectorstore from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Creates a new index for the embeddings in Redis.\n 3. Adds the documents to the newly created Redis index.\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain.vectorstores import Redis\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n redisearch = RediSearch.from_texts(\n texts,\n embeddings,\n redis_url=\"redis://username:password@localhost:6379\"\n )\n \"\"\"\n redis_url = get_from_dict_or_env(kwargs, \"redis_url\", \"REDIS_URL\")\n if \"redis_url\" in kwargs:\n kwargs.pop(\"redis_url\")\n # Name of the search index if not given", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"}
+{"id": "63674000c00a-8", "text": "kwargs.pop(\"redis_url\")\n # Name of the search index if not given\n if not index_name:\n index_name = uuid.uuid4().hex\n # Create instance\n instance = cls(\n redis_url,\n index_name,\n embedding.embed_query,\n content_key=content_key,\n metadata_key=metadata_key,\n vector_key=vector_key,\n **kwargs,\n )\n # Create embeddings over documents\n embeddings = embedding.embed_documents(texts)\n # Create the search index\n instance._create_index(dim=len(embeddings[0]), distance_metric=distance_metric)\n # Add data to Redis\n keys = instance.add_texts(texts, metadatas, embeddings)\n return instance, keys\n[docs] @classmethod\n def from_texts(\n cls: Type[Redis],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n index_name: Optional[str] = None,\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n vector_key: str = \"content_vector\",\n **kwargs: Any,\n ) -> Redis:\n \"\"\"Create a Redis vectorstore from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Creates a new index for the embeddings in Redis.\n 3. Adds the documents to the newly created Redis index.\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain.vectorstores import Redis\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n redisearch = RediSearch.from_texts(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"}
+{"id": "63674000c00a-9", "text": "embeddings = OpenAIEmbeddings()\n redisearch = RediSearch.from_texts(\n texts,\n embeddings,\n redis_url=\"redis://username:password@localhost:6379\"\n )\n \"\"\"\n instance, _ = cls.from_texts_return_keys(\n texts,\n embedding,\n metadatas=metadatas,\n index_name=index_name,\n content_key=content_key,\n metadata_key=metadata_key,\n vector_key=vector_key,\n **kwargs,\n )\n return instance\n[docs] @staticmethod\n def drop_index(\n index_name: str,\n delete_documents: bool,\n **kwargs: Any,\n ) -> bool:\n \"\"\"\n Drop a Redis search index.\n Args:\n index_name (str): Name of the index to drop.\n delete_documents (bool): Whether to drop the associated documents.\n Returns:\n bool: Whether or not the drop was successful.\n \"\"\"\n redis_url = get_from_dict_or_env(kwargs, \"redis_url\", \"REDIS_URL\")\n try:\n import redis\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. \"\n \"Please install it with `pip install redis`.\"\n )\n try:\n # We need to first remove redis_url from kwargs,\n # otherwise passing it to Redis will result in an error.\n if \"redis_url\" in kwargs:\n kwargs.pop(\"redis_url\")\n client = redis.from_url(url=redis_url, **kwargs)\n except ValueError as e:\n raise ValueError(f\"Your redis connected error: {e}\")\n # Check if index exists\n try:\n client.ft(index_name).dropindex(delete_documents)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"}
+{"id": "63674000c00a-10", "text": "try:\n client.ft(index_name).dropindex(delete_documents)\n logger.info(\"Drop index\")\n return True\n except: # noqa: E722\n # Index not exist\n return False\n[docs] @classmethod\n def from_existing_index(\n cls,\n embedding: Embeddings,\n index_name: str,\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n vector_key: str = \"content_vector\",\n **kwargs: Any,\n ) -> Redis:\n \"\"\"Connect to an existing Redis index.\"\"\"\n redis_url = get_from_dict_or_env(kwargs, \"redis_url\", \"REDIS_URL\")\n try:\n import redis\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. \"\n \"Please install it with `pip install redis`.\"\n )\n try:\n # We need to first remove redis_url from kwargs,\n # otherwise passing it to Redis will result in an error.\n if \"redis_url\" in kwargs:\n kwargs.pop(\"redis_url\")\n client = redis.from_url(url=redis_url, **kwargs)\n # check if redis has redisearch module installed\n _check_redis_module_exist(client, REDIS_REQUIRED_MODULES)\n # ensure that the index already exists\n assert _check_index_exists(\n client, index_name\n ), f\"Index {index_name} does not exist\"\n except Exception as e:\n raise ValueError(f\"Redis failed to connect: {e}\")\n return cls(\n redis_url,\n index_name,\n embedding.embed_query,\n content_key=content_key,\n metadata_key=metadata_key,\n vector_key=vector_key,\n **kwargs,\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"}
+{"id": "63674000c00a-11", "text": "vector_key=vector_key,\n **kwargs,\n )\n[docs] def as_retriever(self, **kwargs: Any) -> RedisVectorStoreRetriever:\n return RedisVectorStoreRetriever(vectorstore=self, **kwargs)\nclass RedisVectorStoreRetriever(VectorStoreRetriever, BaseModel):\n vectorstore: Redis\n search_type: str = \"similarity\"\n k: int = 4\n score_threshold: float = 0.4\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n @root_validator()\n def validate_search_type(cls, values: Dict) -> Dict:\n \"\"\"Validate search type.\"\"\"\n if \"search_type\" in values:\n search_type = values[\"search_type\"]\n if search_type not in (\"similarity\", \"similarity_limit\"):\n raise ValueError(f\"search_type of {search_type} not allowed.\")\n return values\n def get_relevant_documents(self, query: str) -> List[Document]:\n if self.search_type == \"similarity\":\n docs = self.vectorstore.similarity_search(query, k=self.k)\n elif self.search_type == \"similarity_limit\":\n docs = self.vectorstore.similarity_search_limit_score(\n query, k=self.k, score_threshold=self.score_threshold\n )\n else:\n raise ValueError(f\"search_type of {self.search_type} not allowed.\")\n return docs\n async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError(\"RedisVectorStoreRetriever does not support async\")\n def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:\n \"\"\"Add documents to vectorstore.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"}
+{"id": "63674000c00a-12", "text": "\"\"\"Add documents to vectorstore.\"\"\"\n return self.vectorstore.add_documents(documents, **kwargs)\n async def aadd_documents(\n self, documents: List[Document], **kwargs: Any\n ) -> List[str]:\n \"\"\"Add documents to vectorstore.\"\"\"\n return await self.vectorstore.aadd_documents(documents, **kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"}
+{"id": "c421bd8e3941-0", "text": "Source code for langchain.vectorstores.clickhouse\n\"\"\"Wrapper around open source ClickHouse VectorSearch capability.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nfrom hashlib import sha1\nfrom threading import Thread\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple, Union\nfrom pydantic import BaseSettings\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger()\ndef has_mul_sub_str(s: str, *args: Any) -> bool:\n for a in args:\n if a not in s:\n return False\n return True\n[docs]class ClickhouseSettings(BaseSettings):\n \"\"\"ClickHouse Client Configuration\n Attribute:\n clickhouse_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n clickhouse_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (list): index build parameter.\n index_query_params(dict): index query parameters.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,\n supported are ('angular', 'euclidean', 'manhattan', 'hamming',\n 'dot'). Defaults to 'angular'.\n https://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
+{"id": "c421bd8e3941-1", "text": "column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be same size to number of columns. For example:\n .. code-block:: python\n {\n 'id': 'text_id',\n 'uuid': 'global_unique_id'\n 'embedding': 'text_embedding',\n 'document': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n Defaults to identity map.\n \"\"\"\n host: str = \"localhost\"\n port: int = 8123\n username: Optional[str] = None\n password: Optional[str] = None\n index_type: str = \"annoy\"\n # Annoy supports L2Distance and cosineDistance.\n index_param: Optional[Union[List, Dict]] = [100, \"'L2Distance'\"]\n index_query_params: Dict[str, str] = {}\n column_map: Dict[str, str] = {\n \"id\": \"id\",\n \"uuid\": \"uuid\",\n \"document\": \"document\",\n \"embedding\": \"embedding\",\n \"metadata\": \"metadata\",\n }\n database: str = \"default\"\n table: str = \"langchain\"\n metric: str = \"angular\"\n def __getitem__(self, item: str) -> Any:\n return getattr(self, item)\n class Config:\n env_file = \".env\"\n env_prefix = \"clickhouse_\"\n env_file_encoding = \"utf-8\"\n[docs]class Clickhouse(VectorStore):\n \"\"\"Wrapper around ClickHouse vector database\n You need a `clickhouse-connect` python package, and a valid account\n to connect to ClickHouse.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
+{"id": "c421bd8e3941-2", "text": "to connect to ClickHouse.\n ClickHouse can not only search with simple vector indexes,\n it also supports complex query with multiple conditions,\n constraints and even sub-queries.\n For more information, please visit\n [ClickHouse official site](https://clickhouse.com/clickhouse)\n \"\"\"\n def __init__(\n self,\n embedding: Embeddings,\n config: Optional[ClickhouseSettings] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"ClickHouse Wrapper to LangChain\n embedding_function (Embeddings):\n config (ClickHouseSettings): Configuration to ClickHouse Client\n Other keyword arguments will pass into\n [clickhouse-connect](https://docs.clickhouse.com/)\n \"\"\"\n try:\n from clickhouse_connect import get_client\n except ImportError:\n raise ValueError(\n \"Could not import clickhouse connect python package. \"\n \"Please install it with `pip install clickhouse-connect`.\"\n )\n try:\n from tqdm import tqdm\n self.pgbar = tqdm\n except ImportError:\n # Just in case if tqdm is not installed\n self.pgbar = lambda x, **kwargs: x\n super().__init__()\n if config is not None:\n self.config = config\n else:\n self.config = ClickhouseSettings()\n assert self.config\n assert self.config.host and self.config.port\n assert (\n self.config.column_map\n and self.config.database\n and self.config.table\n and self.config.metric\n )\n for k in [\"id\", \"embedding\", \"document\", \"metadata\", \"uuid\"]:\n assert k in self.config.column_map\n assert self.config.metric in [\n \"angular\",\n \"euclidean\",\n \"manhattan\",", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
+{"id": "c421bd8e3941-3", "text": "\"angular\",\n \"euclidean\",\n \"manhattan\",\n \"hamming\",\n \"dot\",\n ]\n # initialize the schema\n dim = len(embedding.embed_query(\"test\"))\n index_params = (\n (\n \",\".join([f\"'{k}={v}'\" for k, v in self.config.index_param.items()])\n if self.config.index_param\n else \"\"\n )\n if isinstance(self.config.index_param, Dict)\n else \",\".join([str(p) for p in self.config.index_param])\n if isinstance(self.config.index_param, List)\n else self.config.index_param\n )\n self.schema = f\"\"\"\\\nCREATE TABLE IF NOT EXISTS {self.config.database}.{self.config.table}(\n {self.config.column_map['id']} Nullable(String),\n {self.config.column_map['document']} Nullable(String),\n {self.config.column_map['embedding']} Array(Float32),\n {self.config.column_map['metadata']} JSON,\n {self.config.column_map['uuid']} UUID DEFAULT generateUUIDv4(),\n CONSTRAINT cons_vec_len CHECK length({self.config.column_map['embedding']}) = {dim},\n INDEX vec_idx {self.config.column_map['embedding']} TYPE \\\n{self.config.index_type}({index_params}) GRANULARITY 1000\n) ENGINE = MergeTree ORDER BY uuid SETTINGS index_granularity = 8192\\\n\"\"\"\n self.dim = dim\n self.BS = \"\\\\\"\n self.must_escape = (\"\\\\\", \"'\")\n self.embedding_function = embedding\n self.dist_order = \"ASC\" # Only support ConsingDistance and L2Distance\n # Create a connection to clickhouse\n self.client = get_client(\n host=self.config.host,\n port=self.config.port,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
+{"id": "c421bd8e3941-4", "text": "host=self.config.host,\n port=self.config.port,\n username=self.config.username,\n password=self.config.password,\n **kwargs,\n )\n # Enable JSON type\n self.client.command(\"SET allow_experimental_object_type=1\")\n # Enable Annoy index\n self.client.command(\"SET allow_experimental_annoy_index=1\")\n self.client.command(self.schema)\n[docs] def escape_str(self, value: str) -> str:\n return \"\".join(f\"{self.BS}{c}\" if c in self.must_escape else c for c in value)\n def _build_insert_sql(self, transac: Iterable, column_names: Iterable[str]) -> str:\n ks = \",\".join(column_names)\n _data = []\n for n in transac:\n n = \",\".join([f\"'{self.escape_str(str(_n))}'\" for _n in n])\n _data.append(f\"({n})\")\n i_str = f\"\"\"\n INSERT INTO TABLE \n {self.config.database}.{self.config.table}({ks})\n VALUES\n {','.join(_data)}\n \"\"\"\n return i_str\n def _insert(self, transac: Iterable, column_names: Iterable[str]) -> None:\n _insert_query = self._build_insert_sql(transac, column_names)\n self.client.command(_insert_query)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n batch_size: int = 32,\n ids: Optional[Iterable[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Insert more texts through the embeddings and add to the VectorStore.\n Args:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
+{"id": "c421bd8e3941-5", "text": "\"\"\"Insert more texts through the embeddings and add to the VectorStore.\n Args:\n texts: Iterable of strings to add to the VectorStore.\n ids: Optional list of ids to associate with the texts.\n batch_size: Batch size of insertion\n metadata: Optional column data to be inserted\n Returns:\n List of ids from adding the texts into the VectorStore.\n \"\"\"\n # Embed and create the documents\n ids = ids or [sha1(t.encode(\"utf-8\")).hexdigest() for t in texts]\n colmap_ = self.config.column_map\n transac = []\n column_names = {\n colmap_[\"id\"]: ids,\n colmap_[\"document\"]: texts,\n colmap_[\"embedding\"]: self.embedding_function.embed_documents(list(texts)),\n }\n metadatas = metadatas or [{} for _ in texts]\n column_names[colmap_[\"metadata\"]] = map(json.dumps, metadatas)\n assert len(set(colmap_) - set(column_names)) >= 0\n keys, values = zip(*column_names.items())\n try:\n t = None\n for v in self.pgbar(\n zip(*values), desc=\"Inserting data...\", total=len(metadatas)\n ):\n assert (\n len(v[keys.index(self.config.column_map[\"embedding\"])]) == self.dim\n )\n transac.append(v)\n if len(transac) == batch_size:\n if t:\n t.join()\n t = Thread(target=self._insert, args=[transac, keys])\n t.start()\n transac = []\n if len(transac) > 0:\n if t:\n t.join()\n self._insert(transac, keys)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
+{"id": "c421bd8e3941-6", "text": "if t:\n t.join()\n self._insert(transac, keys)\n return [i for i in ids]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[Dict[Any, Any]]] = None,\n config: Optional[ClickhouseSettings] = None,\n text_ids: Optional[Iterable[str]] = None,\n batch_size: int = 32,\n **kwargs: Any,\n ) -> Clickhouse:\n \"\"\"Create ClickHouse wrapper with existing texts\n Args:\n embedding_function (Embeddings): Function to extract text embedding\n texts (Iterable[str]): List or tuple of strings to be added\n config (ClickHouseSettings, Optional): ClickHouse configuration\n text_ids (Optional[Iterable], optional): IDs for the texts.\n Defaults to None.\n batch_size (int, optional): Batchsize when transmitting data to ClickHouse.\n Defaults to 32.\n metadata (List[dict], optional): metadata to texts. Defaults to None.\n Other keyword arguments will pass into\n [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)\n Returns:\n ClickHouse Index\n \"\"\"\n ctx = cls(embedding, config, **kwargs)\n ctx.add_texts(texts, ids=text_ids, batch_size=batch_size, metadatas=metadatas)\n return ctx\n def __repr__(self) -> str:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
+{"id": "c421bd8e3941-7", "text": "return ctx\n def __repr__(self) -> str:\n \"\"\"Text representation for ClickHouse Vector Store, prints backends, username\n and schemas. Easy to use with `str(ClickHouse())`\n Returns:\n repr: string to show connection info and data schema\n \"\"\"\n _repr = f\"\\033[92m\\033[1m{self.config.database}.{self.config.table} @ \"\n _repr += f\"{self.config.host}:{self.config.port}\\033[0m\\n\\n\"\n _repr += f\"\\033[1musername: {self.config.username}\\033[0m\\n\\nTable Schema:\\n\"\n _repr += \"-\" * 51 + \"\\n\"\n for r in self.client.query(\n f\"DESC {self.config.database}.{self.config.table}\"\n ).named_results():\n _repr += (\n f\"|\\033[94m{r['name']:24s}\\033[0m|\\033[96m{r['type']:24s}\\033[0m|\\n\"\n )\n _repr += \"-\" * 51 + \"\\n\"\n return _repr\n def _build_query_sql(\n self, q_emb: List[float], topk: int, where_str: Optional[str] = None\n ) -> str:\n q_emb_str = \",\".join(map(str, q_emb))\n if where_str:\n where_str = f\"PREWHERE {where_str}\"\n else:\n where_str = \"\"\n settings_strs = []\n if self.config.index_query_params:\n for k in self.config.index_query_params:\n settings_strs.append(f\"SETTING {k}={self.config.index_query_params[k]}\")\n q_str = f\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
+{"id": "c421bd8e3941-8", "text": "q_str = f\"\"\"\n SELECT {self.config.column_map['document']}, \n {self.config.column_map['metadata']}, dist\n FROM {self.config.database}.{self.config.table}\n {where_str}\n ORDER BY L2Distance({self.config.column_map['embedding']}, [{q_emb_str}]) \n AS dist {self.dist_order}\n LIMIT {topk} {' '.join(settings_strs)}\n \"\"\"\n return q_str\n[docs] def similarity_search(\n self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Perform a similarity search with ClickHouse\n Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end-user to fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. The default name for it is `metadata`.\n Returns:\n List[Document]: List of Documents\n \"\"\"\n return self.similarity_search_by_vector(\n self.embedding_function.embed_query(query), k, where_str, **kwargs\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n where_str: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a similarity search with ClickHouse by vectors\n Args:\n query (str): query string", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
+{"id": "c421bd8e3941-9", "text": "Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end-user to fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. The default name for it is `metadata`.\n Returns:\n List[Document]: List of (Document, similarity)\n \"\"\"\n q_str = self._build_query_sql(embedding, k, where_str)\n try:\n return [\n Document(\n page_content=r[self.config.column_map[\"document\"]],\n metadata=r[self.config.column_map[\"metadata\"]],\n )\n for r in self.client.query(q_str).named_results()\n ]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] def similarity_search_with_relevance_scores(\n self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"Perform a similarity search with ClickHouse\n Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end-user to fill this and always be aware", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
+{"id": "c421bd8e3941-10", "text": "NOTE: Please do not let end-user to fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. The default name for it is `metadata`.\n Returns:\n List[Document]: List of documents\n \"\"\"\n q_str = self._build_query_sql(\n self.embedding_function.embed_query(query), k, where_str\n )\n try:\n return [\n (\n Document(\n page_content=r[self.config.column_map[\"document\"]],\n metadata=r[self.config.column_map[\"metadata\"]],\n ),\n r[\"dist\"],\n )\n for r in self.client.query(q_str).named_results()\n ]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] def drop(self) -> None:\n \"\"\"\n Helper function: Drop data\n \"\"\"\n self.client.command(\n f\"DROP TABLE IF EXISTS {self.config.database}.{self.config.table}\"\n )\n @property\n def metadata_column(self) -> str:\n return self.config.column_map[\"metadata\"]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
+{"id": "2b0cfa121219-0", "text": "Source code for langchain.vectorstores.mongodb_atlas\nfrom __future__ import annotations\nimport logging\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Dict,\n Generator,\n Iterable,\n List,\n Optional,\n Tuple,\n TypeVar,\n Union,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nif TYPE_CHECKING:\n from pymongo.collection import Collection\nMongoDBDocumentType = TypeVar(\"MongoDBDocumentType\", bound=Dict[str, Any])\nlogger = logging.getLogger(__name__)\nDEFAULT_INSERT_BATCH_SIZE = 100\n[docs]class MongoDBAtlasVectorSearch(VectorStore):\n \"\"\"Wrapper around MongoDB Atlas Vector Search.\n To use, you should have both:\n - the ``pymongo`` python package installed\n - a connection string associated with a MongoDB Atlas Cluster having deployed an\n Atlas Search index\n Example:\n .. code-block:: python\n from langchain.vectorstores import MongoDBAtlasVectorSearch\n from langchain.embeddings.openai import OpenAIEmbeddings\n from pymongo import MongoClient\n mongo_client = MongoClient(\"\")\n collection = mongo_client[\"\"][\"\"]\n embeddings = OpenAIEmbeddings()\n vectorstore = MongoDBAtlasVectorSearch(collection, embeddings)\n \"\"\"\n def __init__(\n self,\n collection: Collection[MongoDBDocumentType],\n embedding: Embeddings,\n *,\n index_name: str = \"default\",\n text_key: str = \"text\",\n embedding_key: str = \"embedding\",\n ):\n \"\"\"\n Args:\n collection: MongoDB collection to add the texts to.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"}
+{"id": "2b0cfa121219-1", "text": "\"\"\"\n Args:\n collection: MongoDB collection to add the texts to.\n embedding: Text embedding model to use.\n text_key: MongoDB field that will contain the text for each\n document.\n embedding_key: MongoDB field that will contain the embedding for\n each document.\n \"\"\"\n self._collection = collection\n self._embedding = embedding\n self._index_name = index_name\n self._text_key = text_key\n self._embedding_key = embedding_key\n[docs] @classmethod\n def from_connection_string(\n cls,\n connection_string: str,\n namespace: str,\n embedding: Embeddings,\n **kwargs: Any,\n ) -> MongoDBAtlasVectorSearch:\n try:\n from pymongo import MongoClient\n except ImportError:\n raise ImportError(\n \"Could not import pymongo, please install it with \"\n \"`pip install pymongo`.\"\n )\n client: MongoClient = MongoClient(connection_string)\n db_name, collection_name = namespace.split(\".\")\n collection = client[db_name][collection_name]\n return cls(collection, embedding, **kwargs)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[Dict[str, Any]]] = None,\n **kwargs: Any,\n ) -> List:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n batch_size = kwargs.get(\"batch_size\", DEFAULT_INSERT_BATCH_SIZE)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"}
+{"id": "2b0cfa121219-2", "text": "\"\"\"\n batch_size = kwargs.get(\"batch_size\", DEFAULT_INSERT_BATCH_SIZE)\n _metadatas: Union[List, Generator] = metadatas or ({} for _ in texts)\n texts_batch = []\n metadatas_batch = []\n result_ids = []\n for i, (text, metadata) in enumerate(zip(texts, _metadatas)):\n texts_batch.append(text)\n metadatas_batch.append(metadata)\n if (i + 1) % batch_size == 0:\n result_ids.extend(self._insert_texts(texts_batch, metadatas_batch))\n texts_batch = []\n metadatas_batch = []\n if texts_batch:\n result_ids.extend(self._insert_texts(texts_batch, metadatas_batch))\n return result_ids\n def _insert_texts(self, texts: List[str], metadatas: List[Dict[str, Any]]) -> List:\n if not texts:\n return []\n # Embed and create the documents\n embeddings = self._embedding.embed_documents(texts)\n to_insert = [\n {self._text_key: t, self._embedding_key: embedding, **m}\n for t, m, embedding in zip(texts, metadatas, embeddings)\n ]\n # insert the documents in MongoDB Atlas\n insert_result = self._collection.insert_many(to_insert)\n return insert_result.inserted_ids\n[docs] def similarity_search_with_score(\n self,\n query: str,\n *,\n k: int = 4,\n pre_filter: Optional[dict] = None,\n post_filter_pipeline: Optional[List[Dict]] = None,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return MongoDB documents most similar to query, along with scores.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"}
+{"id": "2b0cfa121219-3", "text": "\"\"\"Return MongoDB documents most similar to query, along with scores.\n Use the knnBeta Operator available in MongoDB Atlas Search\n This feature is in early access and available only for evaluation purposes, to\n validate functionality, and to gather feedback from a small closed group of\n early access users. It is not recommended for production deployments as we\n may introduce breaking changes.\n For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta\n Args:\n query: Text to look up documents similar to.\n k: Optional Number of Documents to return. Defaults to 4.\n pre_filter: Optional Dictionary of argument(s) to prefilter on document\n fields.\n post_filter_pipeline: Optional Pipeline of MongoDB aggregation stages\n following the knnBeta search.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n knn_beta = {\n \"vector\": self._embedding.embed_query(query),\n \"path\": self._embedding_key,\n \"k\": k,\n }\n if pre_filter:\n knn_beta[\"filter\"] = pre_filter\n pipeline = [\n {\n \"$search\": {\n \"index\": self._index_name,\n \"knnBeta\": knn_beta,\n }\n },\n {\"$project\": {\"score\": {\"$meta\": \"searchScore\"}, self._embedding_key: 0}},\n ]\n if post_filter_pipeline is not None:\n pipeline.extend(post_filter_pipeline)\n cursor = self._collection.aggregate(pipeline)\n docs = []\n for res in cursor:\n text = res.pop(self._text_key)\n score = res.pop(\"score\")\n docs.append((Document(page_content=text, metadata=res), score))\n return docs", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"}
+{"id": "2b0cfa121219-4", "text": "docs.append((Document(page_content=text, metadata=res), score))\n return docs\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n pre_filter: Optional[dict] = None,\n post_filter_pipeline: Optional[List[Dict]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return MongoDB documents most similar to query.\n Use the knnBeta Operator available in MongoDB Atlas Search\n This feature is in early access and available only for evaluation purposes, to\n validate functionality, and to gather feedback from a small closed group of\n early access users. It is not recommended for production deployments as we may\n introduce breaking changes.\n For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta\n Args:\n query: Text to look up documents similar to.\n k: Optional Number of Documents to return. Defaults to 4.\n pre_filter: Optional Dictionary of argument(s) to prefilter on document\n fields.\n post_filter_pipeline: Optional Pipeline of MongoDB aggregation stages\n following the knnBeta search.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(\n query,\n k=k,\n pre_filter=pre_filter,\n post_filter_pipeline=post_filter_pipeline,\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n collection: Optional[Collection[MongoDBDocumentType]] = None,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"}
+{"id": "2b0cfa121219-5", "text": "collection: Optional[Collection[MongoDBDocumentType]] = None,\n **kwargs: Any,\n ) -> MongoDBAtlasVectorSearch:\n \"\"\"Construct MongoDBAtlasVectorSearch wrapper from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Adds the documents to a provided MongoDB Atlas Vector Search index\n (Lucene)\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from pymongo import MongoClient\n from langchain.vectorstores import MongoDBAtlasVectorSearch\n from langchain.embeddings import OpenAIEmbeddings\n client = MongoClient(\"\")\n collection = mongo_client[\"\"][\"\"]\n embeddings = OpenAIEmbeddings()\n vectorstore = MongoDBAtlasVectorSearch.from_texts(\n texts,\n embeddings,\n metadatas=metadatas,\n collection=collection\n )\n \"\"\"\n if collection is None:\n raise ValueError(\"Must provide 'collection' named parameter.\")\n vecstore = cls(collection, embedding, **kwargs)\n vecstore.add_texts(texts, metadatas=metadatas)\n return vecstore\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"}
+{"id": "4acc52497869-0", "text": "Source code for langchain.vectorstores.typesense\n\"\"\"Wrapper around Typesense vector search\"\"\"\nfrom __future__ import annotations\nimport uuid\nfrom typing import TYPE_CHECKING, Any, Iterable, List, Optional, Tuple, Union\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_env\nfrom langchain.vectorstores.base import VectorStore\nif TYPE_CHECKING:\n from typesense.client import Client\n from typesense.collection import Collection\n[docs]class Typesense(VectorStore):\n \"\"\"Wrapper around Typesense vector search.\n To use, you should have the ``typesense`` python package installed.\n Example:\n .. code-block:: python\n from langchain.embedding.openai import OpenAIEmbeddings\n from langchain.vectorstores import Typesense\n import typesense\n node = {\n \"host\": \"localhost\", # For Typesense Cloud use xxx.a1.typesense.net\n \"port\": \"8108\", # For Typesense Cloud use 443\n \"protocol\": \"http\" # For Typesense Cloud use https\n }\n typesense_client = typesense.Client(\n {\n \"nodes\": [node],\n \"api_key\": \"\",\n \"connection_timeout_seconds\": 2\n }\n )\n typesense_collection_name = \"langchain-memory\"\n embedding = OpenAIEmbeddings()\n vectorstore = Typesense(\n typesense_client,\n typesense_collection_name,\n embedding.embed_query,\n \"text\",\n )\n \"\"\"\n def __init__(\n self,\n typesense_client: Client,\n embedding: Embeddings,\n *,\n typesense_collection_name: Optional[str] = None,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"}
+{"id": "4acc52497869-1", "text": "*,\n typesense_collection_name: Optional[str] = None,\n text_key: str = \"text\",\n ):\n \"\"\"Initialize with Typesense client.\"\"\"\n try:\n from typesense import Client\n except ImportError:\n raise ValueError(\n \"Could not import typesense python package. \"\n \"Please install it with `pip install typesense`.\"\n )\n if not isinstance(typesense_client, Client):\n raise ValueError(\n f\"typesense_client should be an instance of typesense.Client, \"\n f\"got {type(typesense_client)}\"\n )\n self._typesense_client = typesense_client\n self._embedding = embedding\n self._typesense_collection_name = (\n typesense_collection_name or f\"langchain-{str(uuid.uuid4())}\"\n )\n self._text_key = text_key\n @property\n def _collection(self) -> Collection:\n return self._typesense_client.collections[self._typesense_collection_name]\n def _prep_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]],\n ids: Optional[List[str]],\n ) -> List[dict]:\n \"\"\"Embed and create the documents\"\"\"\n _ids = ids or (str(uuid.uuid4()) for _ in texts)\n _metadatas: Iterable[dict] = metadatas or ({} for _ in texts)\n embedded_texts = self._embedding.embed_documents(list(texts))\n return [\n {\"id\": _id, \"vec\": vec, f\"{self._text_key}\": text, \"metadata\": metadata}\n for _id, vec, text, metadata in zip(_ids, embedded_texts, texts, _metadatas)\n ]", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"}
+{"id": "4acc52497869-2", "text": "]\n def _create_collection(self, num_dim: int) -> None:\n fields = [\n {\"name\": \"vec\", \"type\": \"float[]\", \"num_dim\": num_dim},\n {\"name\": f\"{self._text_key}\", \"type\": \"string\"},\n {\"name\": \".*\", \"type\": \"auto\"},\n ]\n self._typesense_client.collections.create(\n {\"name\": self._typesense_collection_name, \"fields\": fields}\n )\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embedding and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of ids to associate with the texts.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n from typesense.exceptions import ObjectNotFound\n docs = self._prep_texts(texts, metadatas, ids)\n try:\n self._collection.documents.import_(docs, {\"action\": \"upsert\"})\n except ObjectNotFound:\n # Create the collection if it doesn't already exist\n self._create_collection(len(docs[0][\"vec\"]))\n self._collection.documents.import_(docs, {\"action\": \"upsert\"})\n return [doc[\"id\"] for doc in docs]\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"}
+{"id": "4acc52497869-3", "text": "self,\n query: str,\n k: int = 4,\n filter: Optional[str] = \"\",\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return typesense documents most similar to query, along with scores.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: typesense filter_by expression to filter documents on\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n embedded_query = [str(x) for x in self._embedding.embed_query(query)]\n query_obj = {\n \"q\": \"*\",\n \"vector_query\": f'vec:([{\",\".join(embedded_query)}], k:{k})',\n \"filter_by\": filter,\n \"collection\": self._typesense_collection_name,\n }\n docs = []\n response = self._typesense_client.multi_search.perform(\n {\"searches\": [query_obj]}, {}\n )\n for hit in response[\"results\"][0][\"hits\"]:\n document = hit[\"document\"]\n metadata = document[\"metadata\"]\n text = document[self._text_key]\n score = hit[\"vector_distance\"]\n docs.append((Document(page_content=text, metadata=metadata), score))\n return docs\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n filter: Optional[str] = \"\",\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return typesense documents most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"}
+{"id": "4acc52497869-4", "text": "k: Number of Documents to return. Defaults to 4.\n filter: typesense filter_by expression to filter documents on\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n docs_and_score = self.similarity_search_with_score(query, k=k, filter=filter)\n return [doc for doc, _ in docs_and_score]\n[docs] @classmethod\n def from_client_params(\n cls,\n embedding: Embeddings,\n *,\n host: str = \"localhost\",\n port: Union[str, int] = \"8108\",\n protocol: str = \"http\",\n typesense_api_key: Optional[str] = None,\n connection_timeout_seconds: int = 2,\n **kwargs: Any,\n ) -> Typesense:\n \"\"\"Initialize Typesense directly from client parameters.\n Example:\n .. code-block:: python\n from langchain.embedding.openai import OpenAIEmbeddings\n from langchain.vectorstores import Typesense\n # Pass in typesense_api_key as kwarg or set env var \"TYPESENSE_API_KEY\".\n vectorstore = Typesense(\n OpenAIEmbeddings(),\n host=\"localhost\",\n port=\"8108\",\n protocol=\"http\",\n typesense_collection_name=\"langchain-memory\",\n )\n \"\"\"\n try:\n from typesense import Client\n except ImportError:\n raise ValueError(\n \"Could not import typesense python package. \"\n \"Please install it with `pip install typesense`.\"\n )\n node = {\n \"host\": host,\n \"port\": str(port),\n \"protocol\": protocol,\n }\n typesense_api_key = typesense_api_key or get_from_env(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"}
+{"id": "4acc52497869-5", "text": "}\n typesense_api_key = typesense_api_key or get_from_env(\n \"typesense_api_key\", \"TYPESENSE_API_KEY\"\n )\n client_config = {\n \"nodes\": [node],\n \"api_key\": typesense_api_key,\n \"connection_timeout_seconds\": connection_timeout_seconds,\n }\n return cls(Client(client_config), embedding, **kwargs)\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n typesense_client: Optional[Client] = None,\n typesense_client_params: Optional[dict] = None,\n typesense_collection_name: Optional[str] = None,\n text_key: str = \"text\",\n **kwargs: Any,\n ) -> Typesense:\n \"\"\"Construct Typesense wrapper from raw text.\"\"\"\n if typesense_client:\n vectorstore = cls(typesense_client, embedding, **kwargs)\n elif typesense_client_params:\n vectorstore = cls.from_client_params(\n embedding, **typesense_client_params, **kwargs\n )\n else:\n raise ValueError(\n \"Must specify one of typesense_client or typesense_client_params.\"\n )\n vectorstore.add_texts(texts, metadatas=metadatas, ids=ids)\n return vectorstore\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"}
+{"id": "60ad5c3c5adc-0", "text": "Source code for langchain.vectorstores.singlestoredb\n\"\"\"Wrapper around SingleStore DB.\"\"\"\nfrom __future__ import annotations\nimport json\nfrom typing import (\n Any,\n ClassVar,\n Collection,\n Iterable,\n List,\n Optional,\n Tuple,\n Type,\n)\nfrom sqlalchemy.pool import QueuePool\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore, VectorStoreRetriever\n[docs]class SingleStoreDB(VectorStore):\n \"\"\"\n This class serves as a Pythonic interface to the SingleStore DB database.\n The prerequisite for using this class is the installation of the ``singlestoredb``\n Python package.\n The SingleStoreDB vectorstore can be created by providing an embedding function and\n the relevant parameters for the database connection, connection pool, and\n optionally, the names of the table and the fields to use.\n \"\"\"\n def _get_connection(self: SingleStoreDB) -> Any:\n try:\n import singlestoredb as s2\n except ImportError:\n raise ImportError(\n \"Could not import singlestoredb python package. \"\n \"Please install it with `pip install singlestoredb`.\"\n )\n return s2.connect(**self.connection_kwargs)\n def __init__(\n self,\n embedding: Embeddings,\n *,\n table_name: str = \"embeddings\",\n content_field: str = \"content\",\n metadata_field: str = \"metadata\",\n vector_field: str = \"vector\",\n pool_size: int = 5,\n max_overflow: int = 10,\n timeout: float = 30,\n **kwargs: Any,\n ):", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"}
+{"id": "60ad5c3c5adc-1", "text": "timeout: float = 30,\n **kwargs: Any,\n ):\n \"\"\"Initialize with necessary components.\n Args:\n embedding (Embeddings): A text embedding model.\n table_name (str, optional): Specifies the name of the table in use.\n Defaults to \"embeddings\".\n content_field (str, optional): Specifies the field to store the content.\n Defaults to \"content\".\n metadata_field (str, optional): Specifies the field to store metadata.\n Defaults to \"metadata\".\n vector_field (str, optional): Specifies the field to store the vector.\n Defaults to \"vector\".\n Following arguments pertain to the connection pool:\n pool_size (int, optional): Determines the number of active connections in\n the pool. Defaults to 5.\n max_overflow (int, optional): Determines the maximum number of connections\n allowed beyond the pool_size. Defaults to 10.\n timeout (float, optional): Specifies the maximum wait time in seconds for\n establishing a connection. Defaults to 30.\n Following arguments pertain to the database connection:\n host (str, optional): Specifies the hostname, IP address, or URL for the\n database connection. The default scheme is \"mysql\".\n user (str, optional): Database username.\n password (str, optional): Database password.\n port (int, optional): Database port. Defaults to 3306 for non-HTTP\n connections, 80 for HTTP connections, and 443 for HTTPS connections.\n database (str, optional): Database name.\n Additional optional arguments provide further customization over the\n database connection:\n pure_python (bool, optional): Toggles the connector mode. If True,\n operates in pure Python mode.\n local_infile (bool, optional): Allows local file uploads.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"}
+{"id": "60ad5c3c5adc-2", "text": "local_infile (bool, optional): Allows local file uploads.\n charset (str, optional): Specifies the character set for string values.\n ssl_key (str, optional): Specifies the path of the file containing the SSL\n key.\n ssl_cert (str, optional): Specifies the path of the file containing the SSL\n certificate.\n ssl_ca (str, optional): Specifies the path of the file containing the SSL\n certificate authority.\n ssl_cipher (str, optional): Sets the SSL cipher list.\n ssl_disabled (bool, optional): Disables SSL usage.\n ssl_verify_cert (bool, optional): Verifies the server's certificate.\n Automatically enabled if ``ssl_ca`` is specified.\n ssl_verify_identity (bool, optional): Verifies the server's identity.\n conv (dict[int, Callable], optional): A dictionary of data conversion\n functions.\n credential_type (str, optional): Specifies the type of authentication to\n use: auth.PASSWORD, auth.JWT, or auth.BROWSER_SSO.\n autocommit (bool, optional): Enables autocommits.\n results_type (str, optional): Determines the structure of the query results:\n tuples, namedtuples, dicts.\n results_format (str, optional): Deprecated. This option has been renamed to\n results_type.\n Examples:\n Basic Usage:\n .. code-block:: python\n from langchain.embeddings import OpenAIEmbeddings\n from langchain.vectorstores import SingleStoreDB\n vectorstore = SingleStoreDB(\n OpenAIEmbeddings(),\n host=\"https://user:password@127.0.0.1:3306/database\"\n )\n Advanced Usage:\n .. code-block:: python\n from langchain.embeddings import OpenAIEmbeddings", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"}
+{"id": "60ad5c3c5adc-3", "text": ".. code-block:: python\n from langchain.embeddings import OpenAIEmbeddings\n from langchain.vectorstores import SingleStoreDB\n vectorstore = SingleStoreDB(\n OpenAIEmbeddings(),\n host=\"127.0.0.1\",\n port=3306,\n user=\"user\",\n password=\"password\",\n database=\"db\",\n table_name=\"my_custom_table\",\n pool_size=10,\n timeout=60,\n )\n Using environment variables:\n .. code-block:: python\n from langchain.embeddings import OpenAIEmbeddings\n from langchain.vectorstores import SingleStoreDB\n os.environ['SINGLESTOREDB_URL'] = 'me:p455w0rd@s2-host.com/my_db'\n vectorstore = SingleStoreDB(OpenAIEmbeddings())\n \"\"\"\n self.embedding = embedding\n self.table_name = table_name\n self.content_field = content_field\n self.metadata_field = metadata_field\n self.vector_field = vector_field\n \"\"\"Pass the rest of the kwargs to the connection.\"\"\"\n self.connection_kwargs = kwargs\n \"\"\"Create connection pool.\"\"\"\n self.connection_pool = QueuePool(\n self._get_connection,\n max_overflow=max_overflow,\n pool_size=pool_size,\n timeout=timeout,\n )\n self._create_table()\n def _create_table(self: SingleStoreDB) -> None:\n \"\"\"Create table if it doesn't exist.\"\"\"\n conn = self.connection_pool.connect()\n try:\n cur = conn.cursor()\n try:\n cur.execute(\n \"\"\"CREATE TABLE IF NOT EXISTS {}\n ({} TEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci,\n {} BLOB, {} JSON);\"\"\".format(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"}
+{"id": "60ad5c3c5adc-4", "text": "{} BLOB, {} JSON);\"\"\".format(\n self.table_name,\n self.content_field,\n self.vector_field,\n self.metadata_field,\n ),\n )\n finally:\n cur.close()\n finally:\n conn.close()\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n embeddings: Optional[List[List[float]]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Add more texts to the vectorstore.\n Args:\n texts (Iterable[str]): Iterable of strings/text to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n Defaults to None.\n embeddings (Optional[List[List[float]]], optional): Optional pre-generated\n embeddings. Defaults to None.\n Returns:\n List[str]: empty list\n \"\"\"\n conn = self.connection_pool.connect()\n try:\n cur = conn.cursor()\n try:\n # Write data to singlestore db\n for i, text in enumerate(texts):\n # Use provided values by default or fallback\n metadata = metadatas[i] if metadatas else {}\n embedding = (\n embeddings[i]\n if embeddings\n else self.embedding.embed_documents([text])[0]\n )\n cur.execute(\n \"INSERT INTO {} VALUES (%s, JSON_ARRAY_PACK(%s), %s)\".format(\n self.table_name\n ),\n (\n text,\n \"[{}]\".format(\",\".join(map(str, embedding))),\n json.dumps(metadata),\n ),\n )\n finally:\n cur.close()\n finally:\n conn.close()", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"}
+{"id": "60ad5c3c5adc-5", "text": "finally:\n cur.close()\n finally:\n conn.close()\n return []\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Returns the most similar indexed documents to the query text.\n Uses cosine similarity.\n Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of documents to return. Default is 4.\n Returns:\n List[Document]: A list of documents that are most similar to the query text.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k=k)\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query. Uses cosine similarity.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n # Creates embedding vector from user query\n embedding = self.embedding.embed_query(query)\n conn = self.connection_pool.connect()\n result = []\n try:\n cur = conn.cursor()\n try:\n cur.execute(\n \"\"\"SELECT {}, {}, DOT_PRODUCT({}, JSON_ARRAY_PACK(%s)) as __score \n FROM {} ORDER BY __score DESC LIMIT %s\"\"\".format(\n self.content_field,\n self.metadata_field,\n self.vector_field,\n self.table_name,\n ),\n (", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"}
+{"id": "60ad5c3c5adc-6", "text": "self.vector_field,\n self.table_name,\n ),\n (\n \"[{}]\".format(\",\".join(map(str, embedding))),\n k,\n ),\n )\n for row in cur.fetchall():\n doc = Document(page_content=row[0], metadata=row[1])\n result.append((doc, float(row[2])))\n finally:\n cur.close()\n finally:\n conn.close()\n return result\n[docs] @classmethod\n def from_texts(\n cls: Type[SingleStoreDB],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n table_name: str = \"embeddings\",\n content_field: str = \"content\",\n metadata_field: str = \"metadata\",\n vector_field: str = \"vector\",\n pool_size: int = 5,\n max_overflow: int = 10,\n timeout: float = 30,\n **kwargs: Any,\n ) -> SingleStoreDB:\n \"\"\"Create a SingleStoreDB vectorstore from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Creates a new table for the embeddings in SingleStoreDB.\n 3. Adds the documents to the newly created table.\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain.vectorstores import SingleStoreDB\n from langchain.embeddings import OpenAIEmbeddings\n s2 = SingleStoreDB.from_texts(\n texts,\n OpenAIEmbeddings(),\n host=\"username:password@localhost:3306/database\"\n )\n \"\"\"\n instance = cls(\n embedding,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"}
+{"id": "60ad5c3c5adc-7", "text": ")\n \"\"\"\n instance = cls(\n embedding,\n table_name=table_name,\n content_field=content_field,\n metadata_field=metadata_field,\n vector_field=vector_field,\n pool_size=pool_size,\n max_overflow=max_overflow,\n timeout=timeout,\n **kwargs,\n )\n instance.add_texts(texts, metadatas, embedding.embed_documents(texts), **kwargs)\n return instance\n[docs] def as_retriever(self, **kwargs: Any) -> SingleStoreDBRetriever:\n return SingleStoreDBRetriever(vectorstore=self, **kwargs)\nclass SingleStoreDBRetriever(VectorStoreRetriever):\n vectorstore: SingleStoreDB\n k: int = 4\n allowed_search_types: ClassVar[Collection[str]] = (\"similarity\",)\n def get_relevant_documents(self, query: str) -> List[Document]:\n if self.search_type == \"similarity\":\n docs = self.vectorstore.similarity_search(query, k=self.k)\n else:\n raise ValueError(f\"search_type of {self.search_type} not allowed.\")\n return docs\n async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError(\n \"SingleStoreDBVectorStoreRetriever does not support async\"\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"}
+{"id": "bc839bff3176-0", "text": "Source code for langchain.vectorstores.vectara\n\"\"\"Wrapper around Vectara vector database.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nimport os\nfrom hashlib import md5\nfrom typing import Any, Iterable, List, Optional, Tuple, Type\nimport requests\nfrom pydantic import Field\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import Document\nfrom langchain.vectorstores.base import VectorStore, VectorStoreRetriever\n[docs]class Vectara(VectorStore):\n \"\"\"Implementation of Vector Store using Vectara (https://vectara.com).\n Example:\n .. code-block:: python\n from langchain.vectorstores import Vectara\n vectorstore = Vectara(\n vectara_customer_id=vectara_customer_id,\n vectara_corpus_id=vectara_corpus_id,\n vectara_api_key=vectara_api_key\n )\n \"\"\"\n def __init__(\n self,\n vectara_customer_id: Optional[str] = None,\n vectara_corpus_id: Optional[str] = None,\n vectara_api_key: Optional[str] = None,\n ):\n \"\"\"Initialize with Vectara API.\"\"\"\n self._vectara_customer_id = vectara_customer_id or os.environ.get(\n \"VECTARA_CUSTOMER_ID\"\n )\n self._vectara_corpus_id = vectara_corpus_id or os.environ.get(\n \"VECTARA_CORPUS_ID\"\n )\n self._vectara_api_key = vectara_api_key or os.environ.get(\"VECTARA_API_KEY\")\n if (\n self._vectara_customer_id is None\n or self._vectara_corpus_id is None\n or self._vectara_api_key is None\n ):\n logging.warning(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"}
+{"id": "bc839bff3176-1", "text": "or self._vectara_api_key is None\n ):\n logging.warning(\n \"Cant find Vectara credentials, customer_id or corpus_id in \"\n \"environment.\"\n )\n else:\n logging.debug(f\"Using corpus id {self._vectara_corpus_id}\")\n self._session = requests.Session() # to reuse connections\n adapter = requests.adapters.HTTPAdapter(max_retries=3)\n self._session.mount(\"http://\", adapter)\n def _get_post_headers(self) -> dict:\n \"\"\"Returns headers that should be attached to each post request.\"\"\"\n return {\n \"x-api-key\": self._vectara_api_key,\n \"customer-id\": self._vectara_customer_id,\n \"Content-Type\": \"application/json\",\n }\n def _delete_doc(self, doc_id: str) -> bool:\n \"\"\"\n Delete a document from the Vectara corpus.\n Args:\n url (str): URL of the page to delete.\n doc_id (str): ID of the document to delete.\n Returns:\n bool: True if deletion was successful, False otherwise.\n \"\"\"\n body = {\n \"customer_id\": self._vectara_customer_id,\n \"corpus_id\": self._vectara_corpus_id,\n \"document_id\": doc_id,\n }\n response = self._session.post(\n \"https://api.vectara.io/v1/delete-doc\",\n data=json.dumps(body),\n verify=True,\n headers=self._get_post_headers(),\n )\n if response.status_code != 200:\n logging.error(\n f\"Delete request failed for doc_id = {doc_id} with status code \"\n f\"{response.status_code}, reason {response.reason}, text \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"}
+{"id": "bc839bff3176-2", "text": "f\"{response.status_code}, reason {response.reason}, text \"\n f\"{response.text}\"\n )\n return False\n return True\n def _index_doc(self, doc: dict) -> bool:\n request: dict[str, Any] = {}\n request[\"customer_id\"] = self._vectara_customer_id\n request[\"corpus_id\"] = self._vectara_corpus_id\n request[\"document\"] = doc\n response = self._session.post(\n headers=self._get_post_headers(),\n url=\"https://api.vectara.io/v1/core/index\",\n data=json.dumps(request),\n timeout=30,\n verify=True,\n )\n status_code = response.status_code\n result = response.json()\n status_str = result[\"status\"][\"code\"] if \"status\" in result else None\n if status_code == 409 or (status_str and status_str == \"ALREADY_EXISTS\"):\n return False\n else:\n return True\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n doc_hash = md5()\n for t in texts:\n doc_hash.update(t.encode())\n doc_id = doc_hash.hexdigest()\n if metadatas is None:\n metadatas = [{} for _ in texts]\n doc = {", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"}
+{"id": "bc839bff3176-3", "text": "metadatas = [{} for _ in texts]\n doc = {\n \"document_id\": doc_id,\n \"metadataJson\": json.dumps({\"source\": \"langchain\"}),\n \"parts\": [\n {\"text\": text, \"metadataJson\": json.dumps(md)}\n for text, md in zip(texts, metadatas)\n ],\n }\n succeeded = self._index_doc(doc)\n if not succeeded:\n self._delete_doc(doc_id)\n self._index_doc(doc)\n return [doc_id]\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 5,\n lambda_val: float = 0.025,\n filter: Optional[str] = None,\n n_sentence_context: int = 0,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return Vectara documents most similar to query, along with scores.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 5.\n lambda_val: lexical match parameter for hybrid search.\n filter: Dictionary of argument(s) to filter on metadata. For example a\n filter can be \"doc.rating > 3.0 and part.lang = 'deu'\"} see\n https://docs.vectara.com/docs/search-apis/sql/filter-overview\n for more details.\n n_sentence_context: number of sentences before/after the matching segment\n to add\n Returns:\n List of Documents most similar to the query and score for each.\n \"\"\"\n data = json.dumps(\n {\n \"query\": [\n {\n \"query\": query,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"}
+{"id": "bc839bff3176-4", "text": "{\n \"query\": [\n {\n \"query\": query,\n \"start\": 0,\n \"num_results\": k,\n \"context_config\": {\n \"sentences_before\": n_sentence_context,\n \"sentences_after\": n_sentence_context,\n },\n \"corpus_key\": [\n {\n \"customer_id\": self._vectara_customer_id,\n \"corpus_id\": self._vectara_corpus_id,\n \"metadataFilter\": filter,\n \"lexical_interpolation_config\": {\"lambda\": lambda_val},\n }\n ],\n }\n ]\n }\n )\n response = self._session.post(\n headers=self._get_post_headers(),\n url=\"https://api.vectara.io/v1/query\",\n data=data,\n timeout=10,\n )\n if response.status_code != 200:\n logging.error(\n \"Query failed %s\",\n f\"(code {response.status_code}, reason {response.reason}, details \"\n f\"{response.text})\",\n )\n return []\n result = response.json()\n responses = result[\"responseSet\"][0][\"response\"]\n vectara_default_metadata = [\"lang\", \"len\", \"offset\"]\n docs = [\n (\n Document(\n page_content=x[\"text\"],\n metadata={\n m[\"name\"]: m[\"value\"]\n for m in x[\"metadata\"]\n if m[\"name\"] not in vectara_default_metadata\n },\n ),\n x[\"score\"],\n )\n for x in responses\n ]\n return docs\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 5,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"}
+{"id": "bc839bff3176-5", "text": "self,\n query: str,\n k: int = 5,\n lambda_val: float = 0.025,\n filter: Optional[str] = None,\n n_sentence_context: int = 0,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return Vectara documents most similar to query, along with scores.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 5.\n filter: Dictionary of argument(s) to filter on metadata. For example a\n filter can be \"doc.rating > 3.0 and part.lang = 'deu'\"} see\n https://docs.vectara.com/docs/search-apis/sql/filter-overview for more\n details.\n n_sentence_context: number of sentences before/after the matching segment\n to add\n Returns:\n List of Documents most similar to the query\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(\n query,\n k=k,\n lamnbda_val=lambda_val,\n filter=filter,\n n_sentence_context=n_sentence_context,\n **kwargs,\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] @classmethod\n def from_texts(\n cls: Type[Vectara],\n texts: List[str],\n embedding: Optional[Embeddings] = None,\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> Vectara:\n \"\"\"Construct Vectara wrapper from raw documents.\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain import Vectara", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"}
+{"id": "bc839bff3176-6", "text": "Example:\n .. code-block:: python\n from langchain import Vectara\n vectara = Vectara.from_texts(\n texts,\n vectara_customer_id=customer_id,\n vectara_corpus_id=corpus_id,\n vectara_api_key=api_key,\n )\n \"\"\"\n # Note: Vectara generates its own embeddings, so we ignore the provided\n # embeddings (required by interface)\n vectara = cls(**kwargs)\n vectara.add_texts(texts, metadatas)\n return vectara\n[docs] def as_retriever(self, **kwargs: Any) -> VectaraRetriever:\n return VectaraRetriever(vectorstore=self, **kwargs)\nclass VectaraRetriever(VectorStoreRetriever):\n vectorstore: Vectara\n search_kwargs: dict = Field(\n default_factory=lambda: {\n \"lambda_val\": 0.025,\n \"k\": 5,\n \"filter\": \"\",\n \"n_sentence_context\": \"0\",\n }\n )\n \"\"\"Search params.\n k: Number of Documents to return. Defaults to 5.\n lambda_val: lexical match parameter for hybrid search.\n filter: Dictionary of argument(s) to filter on metadata. For example a\n filter can be \"doc.rating > 3.0 and part.lang = 'deu'\"} see\n https://docs.vectara.com/docs/search-apis/sql/filter-overview\n for more details.\n n_sentence_context: number of sentences before/after the matching segment to add\n \"\"\"\n def add_texts(\n self, texts: List[str], metadatas: Optional[List[dict]] = None\n ) -> None:\n \"\"\"Add text to the Vectara vectorstore.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"}
+{"id": "bc839bff3176-7", "text": ") -> None:\n \"\"\"Add text to the Vectara vectorstore.\n Args:\n texts (List[str]): The text\n metadatas (List[dict]): Metadata dicts, must line up with existing store\n \"\"\"\n self.vectorstore.add_texts(texts, metadatas)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"}
+{"id": "e4b7ea968c42-0", "text": "Source code for langchain.vectorstores.analyticdb\n\"\"\"VectorStore wrapper around a Postgres/PGVector database.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport uuid\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple\nimport sqlalchemy\nfrom sqlalchemy import REAL, Index\nfrom sqlalchemy.dialects.postgresql import ARRAY, JSON, UUID\ntry:\n from sqlalchemy.orm import declarative_base\nexcept ImportError:\n from sqlalchemy.ext.declarative import declarative_base\nfrom sqlalchemy.orm import Session, relationship\nfrom sqlalchemy.sql.expression import func\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nfrom langchain.vectorstores.base import VectorStore\nBase = declarative_base() # type: Any\nADA_TOKEN_COUNT = 1536\n_LANGCHAIN_DEFAULT_COLLECTION_NAME = \"langchain\"\nclass BaseModel(Base):\n __abstract__ = True\n uuid = sqlalchemy.Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)\nclass CollectionStore(BaseModel):\n __tablename__ = \"langchain_pg_collection\"\n name = sqlalchemy.Column(sqlalchemy.String)\n cmetadata = sqlalchemy.Column(JSON)\n embeddings = relationship(\n \"EmbeddingStore\",\n back_populates=\"collection\",\n passive_deletes=True,\n )\n @classmethod\n def get_by_name(cls, session: Session, name: str) -> Optional[\"CollectionStore\"]:\n return session.query(cls).filter(cls.name == name).first() # type: ignore\n @classmethod\n def get_or_create(\n cls,\n session: Session,\n name: str,\n cmetadata: Optional[dict] = None,\n ) -> Tuple[\"CollectionStore\", bool]:\n \"\"\"\n Get or create a collection.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"}
+{"id": "e4b7ea968c42-1", "text": "\"\"\"\n Get or create a collection.\n Returns [Collection, bool] where the bool is True if the collection was created.\n \"\"\"\n created = False\n collection = cls.get_by_name(session, name)\n if collection:\n return collection, created\n collection = cls(name=name, cmetadata=cmetadata)\n session.add(collection)\n session.commit()\n created = True\n return collection, created\nclass EmbeddingStore(BaseModel):\n __tablename__ = \"langchain_pg_embedding\"\n collection_id = sqlalchemy.Column(\n UUID(as_uuid=True),\n sqlalchemy.ForeignKey(\n f\"{CollectionStore.__tablename__}.uuid\",\n ondelete=\"CASCADE\",\n ),\n )\n collection = relationship(CollectionStore, back_populates=\"embeddings\")\n embedding: sqlalchemy.Column = sqlalchemy.Column(ARRAY(REAL))\n document = sqlalchemy.Column(sqlalchemy.String, nullable=True)\n cmetadata = sqlalchemy.Column(JSON, nullable=True)\n # custom_id : any user defined id\n custom_id = sqlalchemy.Column(sqlalchemy.String, nullable=True)\n # The following line creates an index named 'langchain_pg_embedding_vector_idx'\n langchain_pg_embedding_vector_idx = Index(\n \"langchain_pg_embedding_vector_idx\",\n embedding,\n postgresql_using=\"ann\",\n postgresql_with={\n \"distancemeasure\": \"L2\",\n \"dim\": 1536,\n \"pq_segments\": 64,\n \"hnsw_m\": 100,\n \"pq_centers\": 2048,\n },\n )\nclass QueryResult:\n EmbeddingStore: EmbeddingStore\n distance: float\n[docs]class AnalyticDB(VectorStore):\n \"\"\"\n VectorStore implementation using AnalyticDB.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"}
+{"id": "e4b7ea968c42-2", "text": "\"\"\"\n VectorStore implementation using AnalyticDB.\n AnalyticDB is a distributed full PostgresSQL syntax cloud-native database.\n - `connection_string` is a postgres connection string.\n - `embedding_function` any embedding function implementing\n `langchain.embeddings.base.Embeddings` interface.\n - `collection_name` is the name of the collection to use. (default: langchain)\n - NOTE: This is not the name of the table, but the name of the collection.\n The tables will be created when initializing the store (if not exists)\n So, make sure the user has the right permissions to create tables.\n - `pre_delete_collection` if True, will delete the collection if it exists.\n (default: False)\n - Useful for testing.\n \"\"\"\n def __init__(\n self,\n connection_string: str,\n embedding_function: Embeddings,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n collection_metadata: Optional[dict] = None,\n pre_delete_collection: bool = False,\n logger: Optional[logging.Logger] = None,\n ) -> None:\n self.connection_string = connection_string\n self.embedding_function = embedding_function\n self.collection_name = collection_name\n self.collection_metadata = collection_metadata\n self.pre_delete_collection = pre_delete_collection\n self.logger = logger or logging.getLogger(__name__)\n self.__post_init__()\n def __post_init__(\n self,\n ) -> None:\n \"\"\"\n Initialize the store.\n \"\"\"\n self._conn = self.connect()\n self.create_tables_if_not_exists()\n self.create_collection()\n[docs] def connect(self) -> sqlalchemy.engine.Connection:\n engine = sqlalchemy.create_engine(self.connection_string)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"}
+{"id": "e4b7ea968c42-3", "text": "engine = sqlalchemy.create_engine(self.connection_string)\n conn = engine.connect()\n return conn\n[docs] def create_tables_if_not_exists(self) -> None:\n Base.metadata.create_all(self._conn)\n[docs] def drop_tables(self) -> None:\n Base.metadata.drop_all(self._conn)\n[docs] def create_collection(self) -> None:\n if self.pre_delete_collection:\n self.delete_collection()\n with Session(self._conn) as session:\n CollectionStore.get_or_create(\n session, self.collection_name, cmetadata=self.collection_metadata\n )\n[docs] def delete_collection(self) -> None:\n self.logger.debug(\"Trying to delete collection\")\n with Session(self._conn) as session:\n collection = self.get_collection(session)\n if not collection:\n self.logger.error(\"Collection not found\")\n return\n session.delete(collection)\n session.commit()\n[docs] def get_collection(self, session: Session) -> Optional[\"CollectionStore\"]:\n return CollectionStore.get_by_name(session, self.collection_name)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n kwargs: vectorstore specific parameters\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n if ids is None:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"}
+{"id": "e4b7ea968c42-4", "text": "\"\"\"\n if ids is None:\n ids = [str(uuid.uuid1()) for _ in texts]\n embeddings = self.embedding_function.embed_documents(list(texts))\n if not metadatas:\n metadatas = [{} for _ in texts]\n with Session(self._conn) as session:\n collection = self.get_collection(session)\n if not collection:\n raise ValueError(\"Collection not found\")\n for text, metadata, embedding, id in zip(texts, metadatas, embeddings, ids):\n embedding_store = EmbeddingStore(\n embedding=embedding,\n document=text,\n cmetadata=metadata,\n custom_id=id,\n )\n collection.embeddings.append(embedding_store)\n session.add(embedding_store)\n session.commit()\n return ids\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Run similarity search with AnalyticDB with distance.\n Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n embedding = self.embedding_function.embed_query(text=query)\n return self.similarity_search_by_vector(\n embedding=embedding,\n k=k,\n filter=filter,\n )\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"}
+{"id": "e4b7ea968c42-5", "text": "self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n embedding = self.embedding_function.embed_query(query)\n docs = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, filter=filter\n )\n return docs\n[docs] def similarity_search_with_score_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[dict] = None,\n ) -> List[Tuple[Document, float]]:\n with Session(self._conn) as session:\n collection = self.get_collection(session)\n if not collection:\n raise ValueError(\"Collection not found\")\n filter_by = EmbeddingStore.collection_id == collection.uuid\n if filter is not None:\n filter_clauses = []\n for key, value in filter.items():\n filter_by_metadata = EmbeddingStore.cmetadata[key].astext == str(value)\n filter_clauses.append(filter_by_metadata)\n filter_by = sqlalchemy.and_(filter_by, *filter_clauses)\n results: List[QueryResult] = (\n session.query(\n EmbeddingStore,\n func.l2_distance(EmbeddingStore.embedding, embedding).label(\"distance\"),\n )\n .filter(filter_by)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"}
+{"id": "e4b7ea968c42-6", "text": ")\n .filter(filter_by)\n .order_by(EmbeddingStore.embedding.op(\"<->\")(embedding))\n .join(\n CollectionStore,\n EmbeddingStore.collection_id == CollectionStore.uuid,\n )\n .limit(k)\n .all()\n )\n docs = [\n (\n Document(\n page_content=result.EmbeddingStore.document,\n metadata=result.EmbeddingStore.cmetadata,\n ),\n result.distance if self.embedding_function is not None else None,\n )\n for result in results\n ]\n return docs\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[dict] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List of Documents most similar to the query vector.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, filter=filter\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n ids: Optional[List[str]] = None,\n pre_delete_collection: bool = False,\n **kwargs: Any,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"}
+{"id": "e4b7ea968c42-7", "text": "pre_delete_collection: bool = False,\n **kwargs: Any,\n ) -> AnalyticDB:\n \"\"\"\n Return VectorStore initialized from texts and embeddings.\n Postgres connection string is required\n Either pass it as a parameter\n or set the PGVECTOR_CONNECTION_STRING environment variable.\n \"\"\"\n connection_string = cls.get_connection_string(kwargs)\n store = cls(\n connection_string=connection_string,\n collection_name=collection_name,\n embedding_function=embedding,\n pre_delete_collection=pre_delete_collection,\n )\n store.add_texts(texts=texts, metadatas=metadatas, ids=ids, **kwargs)\n return store\n[docs] @classmethod\n def get_connection_string(cls, kwargs: Dict[str, Any]) -> str:\n connection_string: str = get_from_dict_or_env(\n data=kwargs,\n key=\"connection_string\",\n env_key=\"PGVECTOR_CONNECTION_STRING\",\n )\n if not connection_string:\n raise ValueError(\n \"Postgres connection string is required\"\n \"Either pass it as a parameter\"\n \"or set the PGVECTOR_CONNECTION_STRING environment variable.\"\n )\n return connection_string\n[docs] @classmethod\n def from_documents(\n cls,\n documents: List[Document],\n embedding: Embeddings,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n ids: Optional[List[str]] = None,\n pre_delete_collection: bool = False,\n **kwargs: Any,\n ) -> AnalyticDB:\n \"\"\"\n Return VectorStore initialized from documents and embeddings.\n Postgres connection string is required\n Either pass it as a parameter\n or set the PGVECTOR_CONNECTION_STRING environment variable.\n \"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"}
+{"id": "e4b7ea968c42-8", "text": "or set the PGVECTOR_CONNECTION_STRING environment variable.\n \"\"\"\n texts = [d.page_content for d in documents]\n metadatas = [d.metadata for d in documents]\n connection_string = cls.get_connection_string(kwargs)\n kwargs[\"connection_string\"] = connection_string\n return cls.from_texts(\n texts=texts,\n pre_delete_collection=pre_delete_collection,\n embedding=embedding,\n metadatas=metadatas,\n ids=ids,\n collection_name=collection_name,\n **kwargs,\n )\n[docs] @classmethod\n def connection_string_from_db_params(\n cls,\n driver: str,\n host: str,\n port: int,\n database: str,\n user: str,\n password: str,\n ) -> str:\n \"\"\"Return connection string from database parameters.\"\"\"\n return f\"postgresql+{driver}://{user}:{password}@{host}:{port}/{database}\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"}
+{"id": "40fade8f9055-0", "text": "Source code for langchain.vectorstores.faiss\n\"\"\"Wrapper around FAISS vector database.\"\"\"\nfrom __future__ import annotations\nimport math\nimport os\nimport pickle\nimport uuid\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, Tuple\nimport numpy as np\nfrom langchain.docstore.base import AddableMixin, Docstore\nfrom langchain.docstore.document import Document\nfrom langchain.docstore.in_memory import InMemoryDocstore\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\ndef dependable_faiss_import(no_avx2: Optional[bool] = None) -> Any:\n \"\"\"\n Import faiss if available, otherwise raise error.\n If FAISS_NO_AVX2 environment variable is set, it will be considered\n to load FAISS with no AVX2 optimization.\n Args:\n no_avx2: Load FAISS strictly with no AVX2 optimization\n so that the vectorstore is portable and compatible with other devices.\n \"\"\"\n if no_avx2 is None and \"FAISS_NO_AVX2\" in os.environ:\n no_avx2 = bool(os.getenv(\"FAISS_NO_AVX2\"))\n try:\n if no_avx2:\n from faiss import swigfaiss as faiss\n else:\n import faiss\n except ImportError:\n raise ValueError(\n \"Could not import faiss python package. \"\n \"Please install it with `pip install faiss` \"\n \"or `pip install faiss-cpu` (depending on Python version).\"\n )\n return faiss\ndef _default_relevance_score_fn(score: float) -> float:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"}
+{"id": "40fade8f9055-1", "text": "return faiss\ndef _default_relevance_score_fn(score: float) -> float:\n \"\"\"Return a similarity score on a scale [0, 1].\"\"\"\n # The 'correct' relevance function\n # may differ depending on a few things, including:\n # - the distance / similarity metric used by the VectorStore\n # - the scale of your embeddings (OpenAI's are unit normed. Many others are not!)\n # - embedding dimensionality\n # - etc.\n # This function converts the euclidean norm of normalized embeddings\n # (0 is most similar, sqrt(2) most dissimilar)\n # to a similarity function (0 to 1)\n return 1.0 - score / math.sqrt(2)\n[docs]class FAISS(VectorStore):\n \"\"\"Wrapper around FAISS vector database.\n To use, you should have the ``faiss`` python package installed.\n Example:\n .. code-block:: python\n from langchain import FAISS\n faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)\n \"\"\"\n def __init__(\n self,\n embedding_function: Callable,\n index: Any,\n docstore: Docstore,\n index_to_docstore_id: Dict[int, str],\n relevance_score_fn: Optional[\n Callable[[float], float]\n ] = _default_relevance_score_fn,\n normalize_L2: bool = False,\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n self.embedding_function = embedding_function\n self.index = index\n self.docstore = docstore\n self.index_to_docstore_id = index_to_docstore_id\n self.relevance_score_fn = relevance_score_fn\n self._normalize_L2 = normalize_L2", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"}
+{"id": "40fade8f9055-2", "text": "self._normalize_L2 = normalize_L2\n def __add(\n self,\n texts: Iterable[str],\n embeddings: Iterable[List[float]],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n if not isinstance(self.docstore, AddableMixin):\n raise ValueError(\n \"If trying to add texts, the underlying docstore should support \"\n f\"adding items, which {self.docstore} does not\"\n )\n documents = []\n for i, text in enumerate(texts):\n metadata = metadatas[i] if metadatas else {}\n documents.append(Document(page_content=text, metadata=metadata))\n if ids is None:\n ids = [str(uuid.uuid4()) for _ in texts]\n # Add to the index, the index_to_id mapping, and the docstore.\n starting_len = len(self.index_to_docstore_id)\n faiss = dependable_faiss_import()\n vector = np.array(embeddings, dtype=np.float32)\n if self._normalize_L2:\n faiss.normalize_L2(vector)\n self.index.add(vector)\n # Get list of index, id, and docs.\n full_info = [(starting_len + i, ids[i], doc) for i, doc in enumerate(documents)]\n # Add information to docstore and index.\n self.docstore.add({_id: doc for _, _id, doc in full_info})\n index_to_id = {index: _id for index, _id, _ in full_info}\n self.index_to_docstore_id.update(index_to_id)\n return [_id for _, _id, _ in full_info]", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"}
+{"id": "40fade8f9055-3", "text": "return [_id for _, _id, _ in full_info]\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of unique IDs.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n if not isinstance(self.docstore, AddableMixin):\n raise ValueError(\n \"If trying to add texts, the underlying docstore should support \"\n f\"adding items, which {self.docstore} does not\"\n )\n # Embed and create the documents.\n embeddings = [self.embedding_function(text) for text in texts]\n return self.__add(texts, embeddings, metadatas=metadatas, ids=ids, **kwargs)\n[docs] def add_embeddings(\n self,\n text_embeddings: Iterable[Tuple[str, List[float]]],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n text_embeddings: Iterable pairs of string and embedding to\n add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of unique IDs.\n Returns:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"}
+{"id": "40fade8f9055-4", "text": "ids: Optional list of unique IDs.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n if not isinstance(self.docstore, AddableMixin):\n raise ValueError(\n \"If trying to add texts, the underlying docstore should support \"\n f\"adding items, which {self.docstore} does not\"\n )\n # Embed and create the documents.\n texts, embeddings = zip(*text_embeddings)\n return self.__add(texts, embeddings, metadatas=metadatas, ids=ids, **kwargs)\n[docs] def similarity_search_with_score_by_vector(\n self, embedding: List[float], k: int = 4\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n embedding: Embedding vector to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of documents most similar to the query text and L2 distance\n in float for each. Lower score represents more similarity.\n \"\"\"\n faiss = dependable_faiss_import()\n vector = np.array([embedding], dtype=np.float32)\n if self._normalize_L2:\n faiss.normalize_L2(vector)\n scores, indices = self.index.search(vector, k)\n docs = []\n for j, i in enumerate(indices[0]):\n if i == -1:\n # This happens when not enough docs are returned.\n continue\n _id = self.index_to_docstore_id[i]\n doc = self.docstore.search(_id)\n if not isinstance(doc, Document):\n raise ValueError(f\"Could not find document for id {_id}, got {doc}\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"}
+{"id": "40fade8f9055-5", "text": "raise ValueError(f\"Could not find document for id {_id}, got {doc}\")\n docs.append((doc, scores[0][j]))\n return docs\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of documents most similar to the query text with\n L2 distance in float. Lower score represents more similarity.\n \"\"\"\n embedding = self.embedding_function(query)\n docs = self.similarity_search_with_score_by_vector(embedding, k)\n return docs\n[docs] def similarity_search_by_vector(\n self, embedding: List[float], k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the embedding.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score_by_vector(embedding, k)\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"}
+{"id": "40fade8f9055-6", "text": "k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k)\n return [doc for doc, _ in docs_and_scores]\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n _, indices = self.index.search(np.array([embedding], dtype=np.float32), fetch_k)\n # -1 happens when not enough docs are returned.\n embeddings = [self.index.reconstruct(int(i)) for i in indices[0] if i != -1]\n mmr_selected = maximal_marginal_relevance(\n np.array([embedding], dtype=np.float32),\n embeddings,\n k=k,\n lambda_mult=lambda_mult,\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"}
+{"id": "40fade8f9055-7", "text": "embeddings,\n k=k,\n lambda_mult=lambda_mult,\n )\n selected_indices = [indices[0][i] for i in mmr_selected]\n docs = []\n for i in selected_indices:\n if i == -1:\n # This happens when not enough docs are returned.\n continue\n _id = self.index_to_docstore_id[i]\n doc = self.docstore.search(_id)\n if not isinstance(doc, Document):\n raise ValueError(f\"Could not find document for id {_id}, got {doc}\")\n docs.append(doc)\n return docs\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n embedding = self.embedding_function(query)\n docs = self.max_marginal_relevance_search_by_vector(\n embedding, k, fetch_k, lambda_mult=lambda_mult\n )\n return docs", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"}
+{"id": "40fade8f9055-8", "text": "embedding, k, fetch_k, lambda_mult=lambda_mult\n )\n return docs\n[docs] def merge_from(self, target: FAISS) -> None:\n \"\"\"Merge another FAISS object with the current one.\n Add the target FAISS to the current one.\n Args:\n target: FAISS object you wish to merge into the current one\n Returns:\n None.\n \"\"\"\n if not isinstance(self.docstore, AddableMixin):\n raise ValueError(\"Cannot merge with this type of docstore\")\n # Numerical index for target docs are incremental on existing ones\n starting_len = len(self.index_to_docstore_id)\n # Merge two IndexFlatL2\n self.index.merge_from(target.index)\n # Get id and docs from target FAISS object\n full_info = []\n for i, target_id in target.index_to_docstore_id.items():\n doc = target.docstore.search(target_id)\n if not isinstance(doc, Document):\n raise ValueError(\"Document should be returned\")\n full_info.append((starting_len + i, target_id, doc))\n # Add information to docstore and index_to_docstore_id.\n self.docstore.add({_id: doc for _, _id, doc in full_info})\n index_to_id = {index: _id for index, _id, _ in full_info}\n self.index_to_docstore_id.update(index_to_id)\n @classmethod\n def __from(\n cls,\n texts: List[str],\n embeddings: List[List[float]],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n normalize_L2: bool = False,\n **kwargs: Any,\n ) -> FAISS:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"}
+{"id": "40fade8f9055-9", "text": "**kwargs: Any,\n ) -> FAISS:\n faiss = dependable_faiss_import()\n index = faiss.IndexFlatL2(len(embeddings[0]))\n vector = np.array(embeddings, dtype=np.float32)\n if normalize_L2:\n faiss.normalize_L2(vector)\n index.add(vector)\n documents = []\n if ids is None:\n ids = [str(uuid.uuid4()) for _ in texts]\n for i, text in enumerate(texts):\n metadata = metadatas[i] if metadatas else {}\n documents.append(Document(page_content=text, metadata=metadata))\n index_to_id = dict(enumerate(ids))\n docstore = InMemoryDocstore(dict(zip(index_to_id.values(), documents)))\n return cls(\n embedding.embed_query,\n index,\n docstore,\n index_to_id,\n normalize_L2=normalize_L2,\n **kwargs,\n )\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> FAISS:\n \"\"\"Construct FAISS wrapper from raw documents.\n This is a user friendly interface that:\n 1. Embeds documents.\n 2. Creates an in memory docstore\n 3. Initializes the FAISS database\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain import FAISS\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"}
+{"id": "40fade8f9055-10", "text": "embeddings = OpenAIEmbeddings()\n faiss = FAISS.from_texts(texts, embeddings)\n \"\"\"\n embeddings = embedding.embed_documents(texts)\n return cls.__from(\n texts,\n embeddings,\n embedding,\n metadatas=metadatas,\n ids=ids,\n **kwargs,\n )\n[docs] @classmethod\n def from_embeddings(\n cls,\n text_embeddings: List[Tuple[str, List[float]]],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> FAISS:\n \"\"\"Construct FAISS wrapper from raw documents.\n This is a user friendly interface that:\n 1. Embeds documents.\n 2. Creates an in memory docstore\n 3. Initializes the FAISS database\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain import FAISS\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n text_embeddings = embeddings.embed_documents(texts)\n text_embedding_pairs = list(zip(texts, text_embeddings))\n faiss = FAISS.from_embeddings(text_embedding_pairs, embeddings)\n \"\"\"\n texts = [t[0] for t in text_embeddings]\n embeddings = [t[1] for t in text_embeddings]\n return cls.__from(\n texts,\n embeddings,\n embedding,\n metadatas=metadatas,\n ids=ids,\n **kwargs,\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"}
+{"id": "40fade8f9055-11", "text": "ids=ids,\n **kwargs,\n )\n[docs] def save_local(self, folder_path: str, index_name: str = \"index\") -> None:\n \"\"\"Save FAISS index, docstore, and index_to_docstore_id to disk.\n Args:\n folder_path: folder path to save index, docstore,\n and index_to_docstore_id to.\n index_name: for saving with a specific index file name\n \"\"\"\n path = Path(folder_path)\n path.mkdir(exist_ok=True, parents=True)\n # save index separately since it is not picklable\n faiss = dependable_faiss_import()\n faiss.write_index(\n self.index, str(path / \"{index_name}.faiss\".format(index_name=index_name))\n )\n # save docstore and index_to_docstore_id\n with open(path / \"{index_name}.pkl\".format(index_name=index_name), \"wb\") as f:\n pickle.dump((self.docstore, self.index_to_docstore_id), f)\n[docs] @classmethod\n def load_local(\n cls, folder_path: str, embeddings: Embeddings, index_name: str = \"index\"\n ) -> FAISS:\n \"\"\"Load FAISS index, docstore, and index_to_docstore_id from disk.\n Args:\n folder_path: folder path to load index, docstore,\n and index_to_docstore_id from.\n embeddings: Embeddings to use when generating queries\n index_name: for saving with a specific index file name\n \"\"\"\n path = Path(folder_path)\n # load index separately since it is not picklable\n faiss = dependable_faiss_import()\n index = faiss.read_index(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"}
+{"id": "40fade8f9055-12", "text": "faiss = dependable_faiss_import()\n index = faiss.read_index(\n str(path / \"{index_name}.faiss\".format(index_name=index_name))\n )\n # load docstore and index_to_docstore_id\n with open(path / \"{index_name}.pkl\".format(index_name=index_name), \"rb\") as f:\n docstore, index_to_docstore_id = pickle.load(f)\n return cls(embeddings.embed_query, index, docstore, index_to_docstore_id)\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and their similarity scores on a scale from 0 to 1.\"\"\"\n if self.relevance_score_fn is None:\n raise ValueError(\n \"normalize_score_fn must be provided to\"\n \" FAISS constructor to normalize scores\"\n )\n docs_and_scores = self.similarity_search_with_score(query, k=k)\n return [(doc, self.relevance_score_fn(score)) for doc, score in docs_and_scores]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"}
+{"id": "a52838bcc4b3-0", "text": "Source code for langchain.vectorstores.supabase\nfrom __future__ import annotations\nfrom itertools import repeat\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Iterable,\n List,\n Optional,\n Tuple,\n Type,\n Union,\n)\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nif TYPE_CHECKING:\n import supabase\n[docs]class SupabaseVectorStore(VectorStore):\n \"\"\"VectorStore for a Supabase postgres database. Assumes you have the `pgvector`\n extension installed and a `match_documents` (or similar) function. For more details:\n https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase\n You can implement your own `match_documents` function in order to limit the search\n space to a subset of documents based on your own authorization or business logic.\n Note that the Supabase Python client does not yet support async operations.\n If you'd like to use `max_marginal_relevance_search`, please review the instructions\n below on modifying the `match_documents` function to return matched embeddings.\n \"\"\"\n _client: supabase.client.Client\n # This is the embedding function. Don't confuse with the embedding vectors.\n # We should perhaps rename the underlying Embedding base class to EmbeddingFunction\n # or something\n _embedding: Embeddings\n table_name: str\n query_name: str\n def __init__(\n self,\n client: supabase.client.Client,\n embedding: Embeddings,\n table_name: str,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"}
+{"id": "a52838bcc4b3-1", "text": "embedding: Embeddings,\n table_name: str,\n query_name: Union[str, None] = None,\n ) -> None:\n \"\"\"Initialize with supabase client.\"\"\"\n try:\n import supabase # noqa: F401\n except ImportError:\n raise ValueError(\n \"Could not import supabase python package. \"\n \"Please install it with `pip install supabase`.\"\n )\n self._client = client\n self._embedding: Embeddings = embedding\n self.table_name = table_name or \"documents\"\n self.query_name = query_name or \"match_documents\"\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict[Any, Any]]] = None,\n **kwargs: Any,\n ) -> List[str]:\n docs = self._texts_to_documents(texts, metadatas)\n vectors = self._embedding.embed_documents(list(texts))\n return self.add_vectors(vectors, docs)\n[docs] @classmethod\n def from_texts(\n cls: Type[\"SupabaseVectorStore\"],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n client: Optional[supabase.client.Client] = None,\n table_name: Optional[str] = \"documents\",\n query_name: Union[str, None] = \"match_documents\",\n **kwargs: Any,\n ) -> \"SupabaseVectorStore\":\n \"\"\"Return VectorStore initialized from texts and embeddings.\"\"\"\n if not client:\n raise ValueError(\"Supabase client is required.\")\n if not table_name:\n raise ValueError(\"Supabase document table_name is required.\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"}
+{"id": "a52838bcc4b3-2", "text": "raise ValueError(\"Supabase document table_name is required.\")\n embeddings = embedding.embed_documents(texts)\n docs = cls._texts_to_documents(texts, metadatas)\n _ids = cls._add_vectors(client, table_name, embeddings, docs)\n return cls(\n client=client,\n embedding=embedding,\n table_name=table_name,\n query_name=query_name,\n )\n[docs] def add_vectors(\n self, vectors: List[List[float]], documents: List[Document]\n ) -> List[str]:\n return self._add_vectors(self._client, self.table_name, vectors, documents)\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n vectors = self._embedding.embed_documents([query])\n return self.similarity_search_by_vector(vectors[0], k)\n[docs] def similarity_search_by_vector(\n self, embedding: List[float], k: int = 4, **kwargs: Any\n ) -> List[Document]:\n result = self.similarity_search_by_vector_with_relevance_scores(embedding, k)\n documents = [doc for doc, _ in result]\n return documents\n[docs] def similarity_search_with_relevance_scores(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n vectors = self._embedding.embed_documents([query])\n return self.similarity_search_by_vector_with_relevance_scores(vectors[0], k)\n[docs] def similarity_search_by_vector_with_relevance_scores(\n self, query: List[float], k: int\n ) -> List[Tuple[Document, float]]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"}
+{"id": "a52838bcc4b3-3", "text": ") -> List[Tuple[Document, float]]:\n match_documents_params = dict(query_embedding=query, match_count=k)\n res = self._client.rpc(self.query_name, match_documents_params).execute()\n match_result = [\n (\n Document(\n metadata=search.get(\"metadata\", {}), # type: ignore\n page_content=search.get(\"content\", \"\"),\n ),\n search.get(\"similarity\", 0.0),\n )\n for search in res.data\n if search.get(\"content\")\n ]\n return match_result\n[docs] def similarity_search_by_vector_returning_embeddings(\n self, query: List[float], k: int\n ) -> List[Tuple[Document, float, np.ndarray[np.float32, Any]]]:\n match_documents_params = dict(query_embedding=query, match_count=k)\n res = self._client.rpc(self.query_name, match_documents_params).execute()\n match_result = [\n (\n Document(\n metadata=search.get(\"metadata\", {}), # type: ignore\n page_content=search.get(\"content\", \"\"),\n ),\n search.get(\"similarity\", 0.0),\n # Supabase returns a vector type as its string represation (!).\n # This is a hack to convert the string to numpy array.\n np.fromstring(\n search.get(\"embedding\", \"\").strip(\"[]\"), np.float32, sep=\",\"\n ),\n )\n for search in res.data\n if search.get(\"content\")\n ]\n return match_result\n @staticmethod\n def _texts_to_documents(\n texts: Iterable[str],\n metadatas: Optional[Iterable[dict[Any, Any]]] = None,\n ) -> List[Document]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"}
+{"id": "a52838bcc4b3-4", "text": ") -> List[Document]:\n \"\"\"Return list of Documents from list of texts and metadatas.\"\"\"\n if metadatas is None:\n metadatas = repeat({})\n docs = [\n Document(page_content=text, metadata=metadata)\n for text, metadata in zip(texts, metadatas)\n ]\n return docs\n @staticmethod\n def _add_vectors(\n client: supabase.client.Client,\n table_name: str,\n vectors: List[List[float]],\n documents: List[Document],\n ) -> List[str]:\n \"\"\"Add vectors to Supabase table.\"\"\"\n rows: List[dict[str, Any]] = [\n {\n \"content\": documents[idx].page_content,\n \"embedding\": embedding,\n \"metadata\": documents[idx].metadata, # type: ignore\n }\n for idx, embedding in enumerate(vectors)\n ]\n # According to the SupabaseVectorStore JS implementation, the best chunk size\n # is 500\n chunk_size = 500\n id_list: List[str] = []\n for i in range(0, len(rows), chunk_size):\n chunk = rows[i : i + chunk_size]\n result = client.from_(table_name).insert(chunk).execute() # type: ignore\n if len(result.data) == 0:\n raise Exception(\"Error inserting: No rows added\")\n # VectorStore.add_vectors returns ids as strings\n ids = [str(i.get(\"id\")) for i in result.data if i.get(\"id\")]\n id_list.extend(ids)\n return id_list\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"}
+{"id": "a52838bcc4b3-5", "text": "self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n result = self.similarity_search_by_vector_returning_embeddings(\n embedding, fetch_k\n )\n matched_documents = [doc_tuple[0] for doc_tuple in result]\n matched_embeddings = [doc_tuple[2] for doc_tuple in result]\n mmr_selected = maximal_marginal_relevance(\n np.array([embedding], dtype=np.float32),\n matched_embeddings,\n k=k,\n lambda_mult=lambda_mult,\n )\n filtered_documents = [matched_documents[i] for i in mmr_selected]\n return filtered_documents\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"}
+{"id": "a52838bcc4b3-6", "text": "**kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n `max_marginal_relevance_search` requires that `query_name` returns matched\n embeddings alongside the match documents. The following function\n demonstrates how to do this:\n ```sql\n CREATE FUNCTION match_documents_embeddings(query_embedding vector(1536),\n match_count int)\n RETURNS TABLE(\n id bigint,\n content text,\n metadata jsonb,\n embedding vector(1536),\n similarity float)\n LANGUAGE plpgsql\n AS $$\n # variable_conflict use_column\n BEGIN\n RETURN query\n SELECT\n id,\n content,\n metadata,\n embedding,\n 1 -(docstore.embedding <=> query_embedding) AS similarity\n FROM\n docstore\n ORDER BY\n docstore.embedding <=> query_embedding\n LIMIT match_count;\n END;\n $$;\n ```\n \"\"\"\n embedding = self._embedding.embed_documents([query])\n docs = self.max_marginal_relevance_search_by_vector(\n embedding[0], k, fetch_k, lambda_mult=lambda_mult\n )\n return docs", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"}
+{"id": "a52838bcc4b3-7", "text": ")\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"}
+{"id": "6199e161a542-0", "text": "Source code for langchain.vectorstores.lancedb\n\"\"\"Wrapper around LanceDB vector database\"\"\"\nfrom __future__ import annotations\nimport uuid\nfrom typing import Any, Iterable, List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\n[docs]class LanceDB(VectorStore):\n \"\"\"Wrapper around LanceDB vector database.\n To use, you should have ``lancedb`` python package installed.\n Example:\n .. code-block:: python\n db = lancedb.connect('./lancedb')\n table = db.open_table('my_table')\n vectorstore = LanceDB(table, embedding_function)\n vectorstore.add_texts(['text1', 'text2'])\n result = vectorstore.similarity_search('text1')\n \"\"\"\n def __init__(\n self,\n connection: Any,\n embedding: Embeddings,\n vector_key: Optional[str] = \"vector\",\n id_key: Optional[str] = \"id\",\n text_key: Optional[str] = \"text\",\n ):\n \"\"\"Initialize with Lance DB connection\"\"\"\n try:\n import lancedb\n except ImportError:\n raise ValueError(\n \"Could not import lancedb python package. \"\n \"Please install it with `pip install lancedb`.\"\n )\n if not isinstance(connection, lancedb.db.LanceTable):\n raise ValueError(\n \"connection should be an instance of lancedb.db.LanceTable, \",\n f\"got {type(connection)}\",\n )\n self._connection = connection\n self._embedding = embedding\n self._vector_key = vector_key\n self._id_key = id_key\n self._text_key = text_key", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/lancedb.html"}
+{"id": "6199e161a542-1", "text": "self._id_key = id_key\n self._text_key = text_key\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Turn texts into embedding and add it to the database\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of ids to associate with the texts.\n Returns:\n List of ids of the added texts.\n \"\"\"\n # Embed texts and create documents\n docs = []\n ids = ids or [str(uuid.uuid4()) for _ in texts]\n embeddings = self._embedding.embed_documents(list(texts))\n for idx, text in enumerate(texts):\n embedding = embeddings[idx]\n metadata = metadatas[idx] if metadatas else {}\n docs.append(\n {\n self._vector_key: embedding,\n self._id_key: ids[idx],\n self._text_key: text,\n **metadata,\n }\n )\n self._connection.add(docs)\n return ids\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return documents most similar to the query\n Args:\n query: String to query the vectorstore with.\n k: Number of documents to return.\n Returns:\n List of documents most similar to the query.\n \"\"\"\n embedding = self._embedding.embed_query(query)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/lancedb.html"}
+{"id": "6199e161a542-2", "text": "\"\"\"\n embedding = self._embedding.embed_query(query)\n docs = self._connection.search(embedding).limit(k).to_df()\n return [\n Document(\n page_content=row[self._text_key],\n metadata=row[docs.columns != self._text_key],\n )\n for _, row in docs.iterrows()\n ]\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n connection: Any = None,\n vector_key: Optional[str] = \"vector\",\n id_key: Optional[str] = \"id\",\n text_key: Optional[str] = \"text\",\n **kwargs: Any,\n ) -> LanceDB:\n instance = LanceDB(\n connection,\n embedding,\n vector_key,\n id_key,\n text_key,\n )\n instance.add_texts(texts, metadatas=metadatas, **kwargs)\n return instance\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/lancedb.html"}
+{"id": "ff2ee3f76089-0", "text": "Source code for langchain.vectorstores.qdrant\n\"\"\"Wrapper around Qdrant vector database.\"\"\"\nfrom __future__ import annotations\nimport uuid\nimport warnings\nfrom itertools import islice\nfrom operator import itemgetter\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n Iterable,\n List,\n Optional,\n Sequence,\n Tuple,\n Type,\n Union,\n)\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nif TYPE_CHECKING:\n from qdrant_client.conversions import common_types\n from qdrant_client.http import models as rest\n DictFilter = Dict[str, Union[str, int, bool, dict, list]]\n MetadataFilter = Union[DictFilter, common_types.Filter]\n[docs]class Qdrant(VectorStore):\n \"\"\"Wrapper around Qdrant vector database.\n To use you should have the ``qdrant-client`` package installed.\n Example:\n .. code-block:: python\n from qdrant_client import QdrantClient\n from langchain import Qdrant\n client = QdrantClient()\n collection_name = \"MyCollection\"\n qdrant = Qdrant(client, collection_name, embedding_function)\n \"\"\"\n CONTENT_KEY = \"page_content\"\n METADATA_KEY = \"metadata\"\n def __init__(\n self,\n client: Any,\n collection_name: str,\n embeddings: Optional[Embeddings] = None,\n content_payload_key: str = CONTENT_KEY,\n metadata_payload_key: str = METADATA_KEY,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"}
+{"id": "ff2ee3f76089-1", "text": "metadata_payload_key: str = METADATA_KEY,\n embedding_function: Optional[Callable] = None, # deprecated\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n try:\n import qdrant_client\n except ImportError:\n raise ValueError(\n \"Could not import qdrant-client python package. \"\n \"Please install it with `pip install qdrant-client`.\"\n )\n if not isinstance(client, qdrant_client.QdrantClient):\n raise ValueError(\n f\"client should be an instance of qdrant_client.QdrantClient, \"\n f\"got {type(client)}\"\n )\n if embeddings is None and embedding_function is None:\n raise ValueError(\n \"`embeddings` value can't be None. Pass `Embeddings` instance.\"\n )\n if embeddings is not None and embedding_function is not None:\n raise ValueError(\n \"Both `embeddings` and `embedding_function` are passed. \"\n \"Use `embeddings` only.\"\n )\n self.embeddings = embeddings\n self._embeddings_function = embedding_function\n self.client: qdrant_client.QdrantClient = client\n self.collection_name = collection_name\n self.content_payload_key = content_payload_key or self.CONTENT_KEY\n self.metadata_payload_key = metadata_payload_key or self.METADATA_KEY\n if embedding_function is not None:\n warnings.warn(\n \"Using `embedding_function` is deprecated. \"\n \"Pass `Embeddings` instance to `embeddings` instead.\"\n )\n if not isinstance(embeddings, Embeddings):\n warnings.warn(\n \"`embeddings` should be an instance of `Embeddings`.\"\n \"Using `embeddings` as `embedding_function` which is deprecated\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"}
+{"id": "ff2ee3f76089-2", "text": "\"Using `embeddings` as `embedding_function` which is deprecated\"\n )\n self._embeddings_function = embeddings\n self.embeddings = None\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[Sequence[str]] = None,\n batch_size: int = 64,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids:\n Optional list of ids to associate with the texts. Ids have to be\n uuid-like strings.\n batch_size:\n How many vectors upload per-request.\n Default: 64\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n from qdrant_client.http import models as rest\n added_ids = []\n texts_iterator = iter(texts)\n metadatas_iterator = iter(metadatas or [])\n ids_iterator = iter(ids or [uuid.uuid4().hex for _ in iter(texts)])\n while batch_texts := list(islice(texts_iterator, batch_size)):\n # Take the corresponding metadata and id for each text in a batch\n batch_metadatas = list(islice(metadatas_iterator, batch_size)) or None\n batch_ids = list(islice(ids_iterator, batch_size))\n self.client.upsert(\n collection_name=self.collection_name,\n points=rest.Batch.construct(\n ids=batch_ids,\n vectors=self._embed_texts(batch_texts),", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"}
+{"id": "ff2ee3f76089-3", "text": "ids=batch_ids,\n vectors=self._embed_texts(batch_texts),\n payloads=self._build_payloads(\n batch_texts,\n batch_metadatas,\n self.content_payload_key,\n self.metadata_payload_key,\n ),\n ),\n )\n added_ids.extend(batch_ids)\n return added_ids\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n filter: Optional[MetadataFilter] = None,\n search_params: Optional[common_types.SearchParams] = None,\n offset: int = 0,\n score_threshold: Optional[float] = None,\n consistency: Optional[common_types.ReadConsistency] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: Filter by metadata. Defaults to None.\n search_params: Additional search params\n offset:\n Offset of the first result to return.\n May be used to paginate results.\n Note: large offset values may cause performance issues.\n score_threshold:\n Define a minimal score threshold for the result.\n If defined, less similar results will not be returned.\n Score of the returned result might be higher or smaller than the\n threshold depending on the Distance function used.\n E.g. for cosine similarity only higher scores will be returned.\n consistency:\n Read consistency of the search. Defines how many replicas should be\n queried before returning the result.\n Values:\n - int - number of replicas to query, values should present in all\n queried replicas", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"}
+{"id": "ff2ee3f76089-4", "text": "- int - number of replicas to query, values should present in all\n queried replicas\n - 'majority' - query all replicas, but return values present in the\n majority of replicas\n - 'quorum' - query the majority of replicas, return values present in\n all of them\n - 'all' - query all replicas, and return values present in all replicas\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n results = self.similarity_search_with_score(\n query,\n k,\n filter=filter,\n search_params=search_params,\n offset=offset,\n score_threshold=score_threshold,\n consistency=consistency,\n **kwargs,\n )\n return list(map(itemgetter(0), results))\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n filter: Optional[MetadataFilter] = None,\n search_params: Optional[common_types.SearchParams] = None,\n offset: int = 0,\n score_threshold: Optional[float] = None,\n consistency: Optional[common_types.ReadConsistency] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: Filter by metadata. Defaults to None.\n search_params: Additional search params\n offset:\n Offset of the first result to return.\n May be used to paginate results.\n Note: large offset values may cause performance issues.\n score_threshold:\n Define a minimal score threshold for the result.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"}
+{"id": "ff2ee3f76089-5", "text": "score_threshold:\n Define a minimal score threshold for the result.\n If defined, less similar results will not be returned.\n Score of the returned result might be higher or smaller than the\n threshold depending on the Distance function used.\n E.g. for cosine similarity only higher scores will be returned.\n consistency:\n Read consistency of the search. Defines how many replicas should be\n queried before returning the result.\n Values:\n - int - number of replicas to query, values should present in all\n queried replicas\n - 'majority' - query all replicas, but return values present in the\n majority of replicas\n - 'quorum' - query the majority of replicas, return values present in\n all of them\n - 'all' - query all replicas, and return values present in all replicas\n Returns:\n List of documents most similar to the query text and cosine\n distance in float for each.\n Lower score represents more similarity.\n \"\"\"\n if filter is not None and isinstance(filter, dict):\n warnings.warn(\n \"Using dict as a `filter` is deprecated. Please use qdrant-client \"\n \"filters directly: \"\n \"https://qdrant.tech/documentation/concepts/filtering/\",\n DeprecationWarning,\n )\n qdrant_filter = self._qdrant_filter_from_dict(filter)\n else:\n qdrant_filter = filter\n results = self.client.search(\n collection_name=self.collection_name,\n query_vector=self._embed_query(query),\n query_filter=qdrant_filter,\n search_params=search_params,\n limit=k,\n offset=offset,\n with_payload=True,\n with_vectors=False, # Langchain does not expect vectors to be returned", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"}
+{"id": "ff2ee3f76089-6", "text": "with_vectors=False, # Langchain does not expect vectors to be returned\n score_threshold=score_threshold,\n consistency=consistency,\n **kwargs,\n )\n return [\n (\n self._document_from_scored_point(\n result, self.content_payload_key, self.metadata_payload_key\n ),\n result.score,\n )\n for result in results\n ]\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores in the range [0, 1].\n 0 is dissimilar, 1 is most similar.\n Args:\n query: input text\n k: Number of Documents to return. Defaults to 4.\n **kwargs: kwargs to be passed to similarity search. Should include:\n score_threshold: Optional, a floating point value between 0 to 1 to\n filter the resulting set of retrieved docs\n Returns:\n List of Tuples of (doc, similarity_score)\n \"\"\"\n return self.similarity_search_with_score(query, k, **kwargs)\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"}
+{"id": "ff2ee3f76089-7", "text": "Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n Defaults to 20.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n embedding = self._embed_query(query)\n results = self.client.search(\n collection_name=self.collection_name,\n query_vector=embedding,\n with_payload=True,\n with_vectors=True,\n limit=fetch_k,\n )\n embeddings = [result.vector for result in results]\n mmr_selected = maximal_marginal_relevance(\n np.array(embedding), embeddings, k=k, lambda_mult=lambda_mult\n )\n return [\n self._document_from_scored_point(\n results[i], self.content_payload_key, self.metadata_payload_key\n )\n for i in mmr_selected\n ]\n[docs] @classmethod\n def from_texts(\n cls: Type[Qdrant],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[Sequence[str]] = None,\n location: Optional[str] = None,\n url: Optional[str] = None,\n port: Optional[int] = 6333,\n grpc_port: int = 6334,\n prefer_grpc: bool = False,\n https: Optional[bool] = None,\n api_key: Optional[str] = None,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"}
+{"id": "ff2ee3f76089-8", "text": "api_key: Optional[str] = None,\n prefix: Optional[str] = None,\n timeout: Optional[float] = None,\n host: Optional[str] = None,\n path: Optional[str] = None,\n collection_name: Optional[str] = None,\n distance_func: str = \"Cosine\",\n content_payload_key: str = CONTENT_KEY,\n metadata_payload_key: str = METADATA_KEY,\n batch_size: int = 64,\n shard_number: Optional[int] = None,\n replication_factor: Optional[int] = None,\n write_consistency_factor: Optional[int] = None,\n on_disk_payload: Optional[bool] = None,\n hnsw_config: Optional[common_types.HnswConfigDiff] = None,\n optimizers_config: Optional[common_types.OptimizersConfigDiff] = None,\n wal_config: Optional[common_types.WalConfigDiff] = None,\n quantization_config: Optional[common_types.QuantizationConfig] = None,\n init_from: Optional[common_types.InitFrom] = None,\n **kwargs: Any,\n ) -> Qdrant:\n \"\"\"Construct Qdrant wrapper from a list of texts.\n Args:\n texts: A list of texts to be indexed in Qdrant.\n embedding: A subclass of `Embeddings`, responsible for text vectorization.\n metadatas:\n An optional list of metadata. If provided it has to be of the same\n length as a list of texts.\n ids:\n Optional list of ids to associate with the texts. Ids have to be\n uuid-like strings.\n location:\n If `:memory:` - use in-memory Qdrant instance.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"}
+{"id": "ff2ee3f76089-9", "text": "location:\n If `:memory:` - use in-memory Qdrant instance.\n If `str` - use it as a `url` parameter.\n If `None` - fallback to relying on `host` and `port` parameters.\n url: either host or str of \"Optional[scheme], host, Optional[port],\n Optional[prefix]\". Default: `None`\n port: Port of the REST API interface. Default: 6333\n grpc_port: Port of the gRPC interface. Default: 6334\n prefer_grpc:\n If true - use gPRC interface whenever possible in custom methods.\n Default: False\n https: If true - use HTTPS(SSL) protocol. Default: None\n api_key: API key for authentication in Qdrant Cloud. Default: None\n prefix:\n If not None - add prefix to the REST URL path.\n Example: service/v1 will result in\n http://localhost:6333/service/v1/{qdrant-endpoint} for REST API.\n Default: None\n timeout:\n Timeout for REST and gRPC API requests.\n Default: 5.0 seconds for REST and unlimited for gRPC\n host:\n Host name of Qdrant service. If url and host are None, set to\n 'localhost'. Default: None\n path:\n Path in which the vectors will be stored while using local mode.\n Default: None\n collection_name:\n Name of the Qdrant collection to be used. If not provided,\n it will be created randomly. Default: None\n distance_func:\n Distance function. One of: \"Cosine\" / \"Euclid\" / \"Dot\".\n Default: \"Cosine\"\n content_payload_key:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"}
+{"id": "ff2ee3f76089-10", "text": "Default: \"Cosine\"\n content_payload_key:\n A payload key used to store the content of the document.\n Default: \"page_content\"\n metadata_payload_key:\n A payload key used to store the metadata of the document.\n Default: \"metadata\"\n batch_size:\n How many vectors upload per-request.\n Default: 64\n shard_number: Number of shards in collection. Default is 1, minimum is 1.\n replication_factor:\n Replication factor for collection. Default is 1, minimum is 1.\n Defines how many copies of each shard will be created.\n Have effect only in distributed mode.\n write_consistency_factor:\n Write consistency factor for collection. Default is 1, minimum is 1.\n Defines how many replicas should apply the operation for us to consider\n it successful. Increasing this number will make the collection more\n resilient to inconsistencies, but will also make it fail if not enough\n replicas are available.\n Does not have any performance impact.\n Have effect only in distributed mode.\n on_disk_payload:\n If true - point`s payload will not be stored in memory.\n It will be read from the disk every time it is requested.\n This setting saves RAM by (slightly) increasing the response time.\n Note: those payload values that are involved in filtering and are\n indexed - remain in RAM.\n hnsw_config: Params for HNSW index\n optimizers_config: Params for optimizer\n wal_config: Params for Write-Ahead-Log\n quantization_config:\n Params for quantization, if None - quantization will be disabled\n init_from:\n Use data stored in another collection to initialize this collection\n **kwargs:\n Additional arguments passed directly into REST client initialization", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"}
+{"id": "ff2ee3f76089-11", "text": "**kwargs:\n Additional arguments passed directly into REST client initialization\n This is a user-friendly interface that:\n 1. Creates embeddings, one for each text\n 2. Initializes the Qdrant database as an in-memory docstore by default\n (and overridable to a remote docstore)\n 3. Adds the text embeddings to the Qdrant database\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain import Qdrant\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n qdrant = Qdrant.from_texts(texts, embeddings, \"localhost\")\n \"\"\"\n try:\n import qdrant_client\n except ImportError:\n raise ValueError(\n \"Could not import qdrant-client python package. \"\n \"Please install it with `pip install qdrant-client`.\"\n )\n from qdrant_client.http import models as rest\n # Just do a single quick embedding to get vector size\n partial_embeddings = embedding.embed_documents(texts[:1])\n vector_size = len(partial_embeddings[0])\n collection_name = collection_name or uuid.uuid4().hex\n distance_func = distance_func.upper()\n client = qdrant_client.QdrantClient(\n location=location,\n url=url,\n port=port,\n grpc_port=grpc_port,\n prefer_grpc=prefer_grpc,\n https=https,\n api_key=api_key,\n prefix=prefix,\n timeout=timeout,\n host=host,\n path=path,\n **kwargs,\n )\n client.recreate_collection(\n collection_name=collection_name,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"}
+{"id": "ff2ee3f76089-12", "text": ")\n client.recreate_collection(\n collection_name=collection_name,\n vectors_config=rest.VectorParams(\n size=vector_size,\n distance=rest.Distance[distance_func],\n ),\n shard_number=shard_number,\n replication_factor=replication_factor,\n write_consistency_factor=write_consistency_factor,\n on_disk_payload=on_disk_payload,\n hnsw_config=hnsw_config,\n optimizers_config=optimizers_config,\n wal_config=wal_config,\n quantization_config=quantization_config,\n init_from=init_from,\n timeout=timeout, # type: ignore[arg-type]\n )\n texts_iterator = iter(texts)\n metadatas_iterator = iter(metadatas or [])\n ids_iterator = iter(ids or [uuid.uuid4().hex for _ in iter(texts)])\n while batch_texts := list(islice(texts_iterator, batch_size)):\n # Take the corresponding metadata and id for each text in a batch\n batch_metadatas = list(islice(metadatas_iterator, batch_size)) or None\n batch_ids = list(islice(ids_iterator, batch_size))\n # Generate the embeddings for all the texts in a batch\n batch_embeddings = embedding.embed_documents(batch_texts)\n client.upsert(\n collection_name=collection_name,\n points=rest.Batch.construct(\n ids=batch_ids,\n vectors=batch_embeddings,\n payloads=cls._build_payloads(\n batch_texts,\n batch_metadatas,\n content_payload_key,\n metadata_payload_key,\n ),\n ),\n )\n return cls(\n client=client,\n collection_name=collection_name,\n embeddings=embedding,\n content_payload_key=content_payload_key,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"}
+{"id": "ff2ee3f76089-13", "text": "embeddings=embedding,\n content_payload_key=content_payload_key,\n metadata_payload_key=metadata_payload_key,\n )\n @classmethod\n def _build_payloads(\n cls,\n texts: Iterable[str],\n metadatas: Optional[List[dict]],\n content_payload_key: str,\n metadata_payload_key: str,\n ) -> List[dict]:\n payloads = []\n for i, text in enumerate(texts):\n if text is None:\n raise ValueError(\n \"At least one of the texts is None. Please remove it before \"\n \"calling .from_texts or .add_texts on Qdrant instance.\"\n )\n metadata = metadatas[i] if metadatas is not None else None\n payloads.append(\n {\n content_payload_key: text,\n metadata_payload_key: metadata,\n }\n )\n return payloads\n @classmethod\n def _document_from_scored_point(\n cls,\n scored_point: Any,\n content_payload_key: str,\n metadata_payload_key: str,\n ) -> Document:\n return Document(\n page_content=scored_point.payload.get(content_payload_key),\n metadata=scored_point.payload.get(metadata_payload_key) or {},\n )\n def _build_condition(self, key: str, value: Any) -> List[rest.FieldCondition]:\n from qdrant_client.http import models as rest\n out = []\n if isinstance(value, dict):\n for _key, value in value.items():\n out.extend(self._build_condition(f\"{key}.{_key}\", value))\n elif isinstance(value, list):\n for _value in value:\n if isinstance(_value, dict):", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"}
+{"id": "ff2ee3f76089-14", "text": "for _value in value:\n if isinstance(_value, dict):\n out.extend(self._build_condition(f\"{key}[]\", _value))\n else:\n out.extend(self._build_condition(f\"{key}\", _value))\n else:\n out.append(\n rest.FieldCondition(\n key=f\"{self.metadata_payload_key}.{key}\",\n match=rest.MatchValue(value=value),\n )\n )\n return out\n def _qdrant_filter_from_dict(\n self, filter: Optional[DictFilter]\n ) -> Optional[rest.Filter]:\n from qdrant_client.http import models as rest\n if not filter:\n return None\n return rest.Filter(\n must=[\n condition\n for key, value in filter.items()\n for condition in self._build_condition(key, value)\n ]\n )\n def _embed_query(self, query: str) -> List[float]:\n \"\"\"Embed query text.\n Used to provide backward compatibility with `embedding_function` argument.\n Args:\n query: Query text.\n Returns:\n List of floats representing the query embedding.\n \"\"\"\n if self.embeddings is not None:\n embedding = self.embeddings.embed_query(query)\n else:\n if self._embeddings_function is not None:\n embedding = self._embeddings_function(query)\n else:\n raise ValueError(\"Neither of embeddings or embedding_function is set\")\n return embedding.tolist() if hasattr(embedding, \"tolist\") else embedding\n def _embed_texts(self, texts: Iterable[str]) -> List[List[float]]:\n \"\"\"Embed search texts.\n Used to provide backward compatibility with `embedding_function` argument.\n Args:\n texts: Iterable of texts to embed.\n Returns:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"}
+{"id": "ff2ee3f76089-15", "text": "Args:\n texts: Iterable of texts to embed.\n Returns:\n List of floats representing the texts embedding.\n \"\"\"\n if self.embeddings is not None:\n embeddings = self.embeddings.embed_documents(list(texts))\n if hasattr(embeddings, \"tolist\"):\n embeddings = embeddings.tolist()\n elif self._embeddings_function is not None:\n embeddings = []\n for text in texts:\n embedding = self._embeddings_function(text)\n if hasattr(embeddings, \"tolist\"):\n embedding = embedding.tolist()\n embeddings.append(embedding)\n else:\n raise ValueError(\"Neither of embeddings or embedding_function is set\")\n return embeddings\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"}
+{"id": "388733eb6075-0", "text": "Source code for langchain.vectorstores.base\n\"\"\"Interface for vector stores.\"\"\"\nfrom __future__ import annotations\nimport asyncio\nimport warnings\nfrom abc import ABC, abstractmethod\nfrom functools import partial\nfrom typing import (\n Any,\n ClassVar,\n Collection,\n Dict,\n Iterable,\n List,\n Optional,\n Tuple,\n Type,\n TypeVar,\n)\nfrom pydantic import BaseModel, Field, root_validator\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever\nVST = TypeVar(\"VST\", bound=\"VectorStore\")\n[docs]class VectorStore(ABC):\n \"\"\"Interface for vector stores.\"\"\"\n[docs] @abstractmethod\n def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n kwargs: vectorstore specific parameters\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n[docs] async def aadd_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\"\"\"\n raise NotImplementedError\n[docs] def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"}
+{"id": "388733eb6075-1", "text": "\"\"\"Run more documents through the embeddings and add to the vectorstore.\n Args:\n documents (List[Document]: Documents to add to the vectorstore.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n # TODO: Handle the case where the user doesn't provide ids on the Collection\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n return self.add_texts(texts, metadatas, **kwargs)\n[docs] async def aadd_documents(\n self, documents: List[Document], **kwargs: Any\n ) -> List[str]:\n \"\"\"Run more documents through the embeddings and add to the vectorstore.\n Args:\n documents (List[Document]: Documents to add to the vectorstore.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n return await self.aadd_texts(texts, metadatas, **kwargs)\n[docs] def search(self, query: str, search_type: str, **kwargs: Any) -> List[Document]:\n \"\"\"Return docs most similar to query using specified search type.\"\"\"\n if search_type == \"similarity\":\n return self.similarity_search(query, **kwargs)\n elif search_type == \"mmr\":\n return self.max_marginal_relevance_search(query, **kwargs)\n else:\n raise ValueError(\n f\"search_type of {search_type} not allowed. Expected \"\n \"search_type to be 'similarity' or 'mmr'.\"\n )\n[docs] async def asearch(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"}
+{"id": "388733eb6075-2", "text": ")\n[docs] async def asearch(\n self, query: str, search_type: str, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query using specified search type.\"\"\"\n if search_type == \"similarity\":\n return await self.asimilarity_search(query, **kwargs)\n elif search_type == \"mmr\":\n return await self.amax_marginal_relevance_search(query, **kwargs)\n else:\n raise ValueError(\n f\"search_type of {search_type} not allowed. Expected \"\n \"search_type to be 'similarity' or 'mmr'.\"\n )\n[docs] @abstractmethod\n def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\"\"\"\n[docs] def similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores in the range [0, 1].\n 0 is dissimilar, 1 is most similar.\n Args:\n query: input text\n k: Number of Documents to return. Defaults to 4.\n **kwargs: kwargs to be passed to similarity search. Should include:\n score_threshold: Optional, a floating point value between 0 to 1 to\n filter the resulting set of retrieved docs\n Returns:\n List of Tuples of (doc, similarity_score)\n \"\"\"\n docs_and_similarities = self._similarity_search_with_relevance_scores(\n query, k=k, **kwargs\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"}
+{"id": "388733eb6075-3", "text": "query, k=k, **kwargs\n )\n if any(\n similarity < 0.0 or similarity > 1.0\n for _, similarity in docs_and_similarities\n ):\n warnings.warn(\n \"Relevance scores must be between\"\n f\" 0 and 1, got {docs_and_similarities}\"\n )\n score_threshold = kwargs.get(\"score_threshold\")\n if score_threshold is not None:\n docs_and_similarities = [\n (doc, similarity)\n for doc, similarity in docs_and_similarities\n if similarity >= score_threshold\n ]\n if len(docs_and_similarities) == 0:\n warnings.warn(\n f\"No relevant docs were retrieved using the relevance score\\\n threshold {score_threshold}\"\n )\n return docs_and_similarities\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores, normalized on a scale from 0 to 1.\n 0 is dissimilar, 1 is most similar.\n \"\"\"\n raise NotImplementedError\n[docs] async def asimilarity_search_with_relevance_scores(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\"\"\"\n # This is a temporary workaround to make the similarity search\n # asynchronous. The proper solution is to make the similarity search\n # asynchronous in the vector store implementations.\n func = partial(self.similarity_search_with_relevance_scores, query, k, **kwargs)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"}
+{"id": "388733eb6075-4", "text": "return await asyncio.get_event_loop().run_in_executor(None, func)\n[docs] async def asimilarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\"\"\"\n # This is a temporary workaround to make the similarity search\n # asynchronous. The proper solution is to make the similarity search\n # asynchronous in the vector store implementations.\n func = partial(self.similarity_search, query, k, **kwargs)\n return await asyncio.get_event_loop().run_in_executor(None, func)\n[docs] def similarity_search_by_vector(\n self, embedding: List[float], k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query vector.\n \"\"\"\n raise NotImplementedError\n[docs] async def asimilarity_search_by_vector(\n self, embedding: List[float], k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\"\"\"\n # This is a temporary workaround to make the similarity search\n # asynchronous. The proper solution is to make the similarity search\n # asynchronous in the vector store implementations.\n func = partial(self.similarity_search_by_vector, embedding, k, **kwargs)\n return await asyncio.get_event_loop().run_in_executor(None, func)\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"}
+{"id": "388733eb6075-5", "text": "self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n raise NotImplementedError\n[docs] async def amax_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\"\"\"\n # This is a temporary workaround to make the similarity search\n # asynchronous. The proper solution is to make the similarity search\n # asynchronous in the vector store implementations.\n func = partial(\n self.max_marginal_relevance_search, query, k, fetch_k, lambda_mult, **kwargs\n )\n return await asyncio.get_event_loop().run_in_executor(None, func)\n[docs] def max_marginal_relevance_search_by_vector(\n self,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"}
+{"id": "388733eb6075-6", "text": "[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n raise NotImplementedError\n[docs] async def amax_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\"\"\"\n raise NotImplementedError\n[docs] @classmethod\n def from_documents(\n cls: Type[VST],\n documents: List[Document],\n embedding: Embeddings,\n **kwargs: Any,\n ) -> VST:\n \"\"\"Return VectorStore initialized from documents and embeddings.\"\"\"\n texts = [d.page_content for d in documents]", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"}
+{"id": "388733eb6075-7", "text": "texts = [d.page_content for d in documents]\n metadatas = [d.metadata for d in documents]\n return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)\n[docs] @classmethod\n async def afrom_documents(\n cls: Type[VST],\n documents: List[Document],\n embedding: Embeddings,\n **kwargs: Any,\n ) -> VST:\n \"\"\"Return VectorStore initialized from documents and embeddings.\"\"\"\n texts = [d.page_content for d in documents]\n metadatas = [d.metadata for d in documents]\n return await cls.afrom_texts(texts, embedding, metadatas=metadatas, **kwargs)\n[docs] @classmethod\n @abstractmethod\n def from_texts(\n cls: Type[VST],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> VST:\n \"\"\"Return VectorStore initialized from texts and embeddings.\"\"\"\n[docs] @classmethod\n async def afrom_texts(\n cls: Type[VST],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> VST:\n \"\"\"Return VectorStore initialized from texts and embeddings.\"\"\"\n raise NotImplementedError\n[docs] def as_retriever(self, **kwargs: Any) -> VectorStoreRetriever:\n return VectorStoreRetriever(vectorstore=self, **kwargs)\nclass VectorStoreRetriever(BaseRetriever, BaseModel):\n vectorstore: VectorStore\n search_type: str = \"similarity\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"}
+{"id": "388733eb6075-8", "text": "vectorstore: VectorStore\n search_type: str = \"similarity\"\n search_kwargs: dict = Field(default_factory=dict)\n allowed_search_types: ClassVar[Collection[str]] = (\n \"similarity\",\n \"similarity_score_threshold\",\n \"mmr\",\n )\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n @root_validator()\n def validate_search_type(cls, values: Dict) -> Dict:\n \"\"\"Validate search type.\"\"\"\n search_type = values[\"search_type\"]\n if search_type not in cls.allowed_search_types:\n raise ValueError(\n f\"search_type of {search_type} not allowed. Valid values are: \"\n f\"{cls.allowed_search_types}\"\n )\n if search_type == \"similarity_score_threshold\":\n score_threshold = values[\"search_kwargs\"].get(\"score_threshold\")\n if (score_threshold is None) or (not isinstance(score_threshold, float)):\n raise ValueError(\n \"`score_threshold` is not specified with a float value(0~1) \"\n \"in `search_kwargs`.\"\n )\n return values\n def get_relevant_documents(self, query: str) -> List[Document]:\n if self.search_type == \"similarity\":\n docs = self.vectorstore.similarity_search(query, **self.search_kwargs)\n elif self.search_type == \"similarity_score_threshold\":\n docs_and_similarities = (\n self.vectorstore.similarity_search_with_relevance_scores(\n query, **self.search_kwargs\n )\n )\n docs = [doc for doc, _ in docs_and_similarities]\n elif self.search_type == \"mmr\":\n docs = self.vectorstore.max_marginal_relevance_search(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"}
+{"id": "388733eb6075-9", "text": "docs = self.vectorstore.max_marginal_relevance_search(\n query, **self.search_kwargs\n )\n else:\n raise ValueError(f\"search_type of {self.search_type} not allowed.\")\n return docs\n async def aget_relevant_documents(self, query: str) -> List[Document]:\n if self.search_type == \"similarity\":\n docs = await self.vectorstore.asimilarity_search(\n query, **self.search_kwargs\n )\n elif self.search_type == \"similarity_score_threshold\":\n docs_and_similarities = (\n await self.vectorstore.asimilarity_search_with_relevance_scores(\n query, **self.search_kwargs\n )\n )\n docs = [doc for doc, _ in docs_and_similarities]\n elif self.search_type == \"mmr\":\n docs = await self.vectorstore.amax_marginal_relevance_search(\n query, **self.search_kwargs\n )\n else:\n raise ValueError(f\"search_type of {self.search_type} not allowed.\")\n return docs\n def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:\n \"\"\"Add documents to vectorstore.\"\"\"\n return self.vectorstore.add_documents(documents, **kwargs)\n async def aadd_documents(\n self, documents: List[Document], **kwargs: Any\n ) -> List[str]:\n \"\"\"Add documents to vectorstore.\"\"\"\n return await self.vectorstore.aadd_documents(documents, **kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"}
+{"id": "e8414c2133c8-0", "text": "Source code for langchain.vectorstores.atlas\n\"\"\"Wrapper around Atlas by Nomic.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport uuid\nfrom typing import Any, Iterable, List, Optional, Type\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger(__name__)\n[docs]class AtlasDB(VectorStore):\n \"\"\"Wrapper around Atlas: Nomic's neural database and rhizomatic instrument.\n To use, you should have the ``nomic`` python package installed.\n Example:\n .. code-block:: python\n from langchain.vectorstores import AtlasDB\n from langchain.embeddings.openai import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n vectorstore = AtlasDB(\"my_project\", embeddings.embed_query)\n \"\"\"\n _ATLAS_DEFAULT_ID_FIELD = \"atlas_id\"\n def __init__(\n self,\n name: str,\n embedding_function: Optional[Embeddings] = None,\n api_key: Optional[str] = None,\n description: str = \"A description for your project\",\n is_public: bool = True,\n reset_project_if_exists: bool = False,\n ) -> None:\n \"\"\"\n Initialize the Atlas Client\n Args:\n name (str): The name of your project. If the project already exists,\n it will be loaded.\n embedding_function (Optional[Callable]): An optional function used for\n embedding your data. If None, data will be embedded with\n Nomic's embed model.\n api_key (str): Your nomic API key\n description (str): A description for your project.\n is_public (bool): Whether your project is publicly accessible.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"}
+{"id": "e8414c2133c8-1", "text": "is_public (bool): Whether your project is publicly accessible.\n True by default.\n reset_project_if_exists (bool): Whether to reset this project if it\n already exists. Default False.\n Generally userful during development and testing.\n \"\"\"\n try:\n import nomic\n from nomic import AtlasProject\n except ImportError:\n raise ValueError(\n \"Could not import nomic python package. \"\n \"Please install it with `pip install nomic`.\"\n )\n if api_key is None:\n raise ValueError(\"No API key provided. Sign up at atlas.nomic.ai!\")\n nomic.login(api_key)\n self._embedding_function = embedding_function\n modality = \"text\"\n if self._embedding_function is not None:\n modality = \"embedding\"\n # Check if the project exists, create it if not\n self.project = AtlasProject(\n name=name,\n description=description,\n modality=modality,\n is_public=is_public,\n reset_project_if_exists=reset_project_if_exists,\n unique_id_field=AtlasDB._ATLAS_DEFAULT_ID_FIELD,\n )\n self.project._latest_project_state()\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n refresh: bool = True,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts (Iterable[str]): Texts to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"}
+{"id": "e8414c2133c8-2", "text": "metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n ids (Optional[List[str]]): An optional list of ids.\n refresh(bool): Whether or not to refresh indices with the updated data.\n Default True.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n if (\n metadatas is not None\n and len(metadatas) > 0\n and \"text\" in metadatas[0].keys()\n ):\n raise ValueError(\"Cannot accept key text in metadata!\")\n texts = list(texts)\n if ids is None:\n ids = [str(uuid.uuid1()) for _ in texts]\n # Embedding upload case\n if self._embedding_function is not None:\n _embeddings = self._embedding_function.embed_documents(texts)\n embeddings = np.stack(_embeddings)\n if metadatas is None:\n data = [\n {AtlasDB._ATLAS_DEFAULT_ID_FIELD: ids[i], \"text\": texts[i]}\n for i, _ in enumerate(texts)\n ]\n else:\n for i in range(len(metadatas)):\n metadatas[i][AtlasDB._ATLAS_DEFAULT_ID_FIELD] = ids[i]\n metadatas[i][\"text\"] = texts[i]\n data = metadatas\n self.project._validate_map_data_inputs(\n [], id_field=AtlasDB._ATLAS_DEFAULT_ID_FIELD, data=data\n )\n with self.project.wait_for_project_lock():\n self.project.add_embeddings(embeddings=embeddings, data=data)\n # Text upload case\n else:\n if metadatas is None:\n data = [", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"}
+{"id": "e8414c2133c8-3", "text": "else:\n if metadatas is None:\n data = [\n {\"text\": text, AtlasDB._ATLAS_DEFAULT_ID_FIELD: ids[i]}\n for i, text in enumerate(texts)\n ]\n else:\n for i, text in enumerate(texts):\n metadatas[i][\"text\"] = texts\n metadatas[i][AtlasDB._ATLAS_DEFAULT_ID_FIELD] = ids[i]\n data = metadatas\n self.project._validate_map_data_inputs(\n [], id_field=AtlasDB._ATLAS_DEFAULT_ID_FIELD, data=data\n )\n with self.project.wait_for_project_lock():\n self.project.add_text(data)\n if refresh:\n if len(self.project.indices) > 0:\n with self.project.wait_for_project_lock():\n self.project.rebuild_maps()\n return ids\n[docs] def create_index(self, **kwargs: Any) -> Any:\n \"\"\"Creates an index in your project.\n See\n https://docs.nomic.ai/atlas_api.html#nomic.project.AtlasProject.create_index\n for full detail.\n \"\"\"\n with self.project.wait_for_project_lock():\n return self.project.create_index(**kwargs)\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Run similarity search with AtlasDB\n Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.\n Returns:\n List[Document]: List of documents most similar to the query text.\n \"\"\"\n if self._embedding_function is None:\n raise NotImplementedError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"}
+{"id": "e8414c2133c8-4", "text": "\"\"\"\n if self._embedding_function is None:\n raise NotImplementedError(\n \"AtlasDB requires an embedding_function for text similarity search!\"\n )\n _embedding = self._embedding_function.embed_documents([query])[0]\n embedding = np.array(_embedding).reshape(1, -1)\n with self.project.wait_for_project_lock():\n neighbors, _ = self.project.projections[0].vector_search(\n queries=embedding, k=k\n )\n datas = self.project.get_data(ids=neighbors[0])\n docs = [\n Document(page_content=datas[i][\"text\"], metadata=datas[i])\n for i, neighbor in enumerate(neighbors)\n ]\n return docs\n[docs] @classmethod\n def from_texts(\n cls: Type[AtlasDB],\n texts: List[str],\n embedding: Optional[Embeddings] = None,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n name: Optional[str] = None,\n api_key: Optional[str] = None,\n description: str = \"A description for your project\",\n is_public: bool = True,\n reset_project_if_exists: bool = False,\n index_kwargs: Optional[dict] = None,\n **kwargs: Any,\n ) -> AtlasDB:\n \"\"\"Create an AtlasDB vectorstore from a raw documents.\n Args:\n texts (List[str]): The list of texts to ingest.\n name (str): Name of the project to create.\n api_key (str): Your nomic API key,\n embedding (Optional[Embeddings]): Embedding function. Defaults to None.\n metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"}
+{"id": "e8414c2133c8-5", "text": "ids (Optional[List[str]]): Optional list of document IDs. If None,\n ids will be auto created\n description (str): A description for your project.\n is_public (bool): Whether your project is publicly accessible.\n True by default.\n reset_project_if_exists (bool): Whether to reset this project if it\n already exists. Default False.\n Generally userful during development and testing.\n index_kwargs (Optional[dict]): Dict of kwargs for index creation.\n See https://docs.nomic.ai/atlas_api.html\n Returns:\n AtlasDB: Nomic's neural database and finest rhizomatic instrument\n \"\"\"\n if name is None or api_key is None:\n raise ValueError(\"`name` and `api_key` cannot be None.\")\n # Inject relevant kwargs\n all_index_kwargs = {\"name\": name + \"_index\", \"indexed_field\": \"text\"}\n if index_kwargs is not None:\n for k, v in index_kwargs.items():\n all_index_kwargs[k] = v\n # Build project\n atlasDB = cls(\n name,\n embedding_function=embedding,\n api_key=api_key,\n description=\"A description for your project\",\n is_public=is_public,\n reset_project_if_exists=reset_project_if_exists,\n )\n with atlasDB.project.wait_for_project_lock():\n atlasDB.add_texts(texts=texts, metadatas=metadatas, ids=ids)\n atlasDB.create_index(**all_index_kwargs)\n return atlasDB\n[docs] @classmethod\n def from_documents(\n cls: Type[AtlasDB],\n documents: List[Document],\n embedding: Optional[Embeddings] = None,\n ids: Optional[List[str]] = None,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"}
+{"id": "e8414c2133c8-6", "text": "ids: Optional[List[str]] = None,\n name: Optional[str] = None,\n api_key: Optional[str] = None,\n persist_directory: Optional[str] = None,\n description: str = \"A description for your project\",\n is_public: bool = True,\n reset_project_if_exists: bool = False,\n index_kwargs: Optional[dict] = None,\n **kwargs: Any,\n ) -> AtlasDB:\n \"\"\"Create an AtlasDB vectorstore from a list of documents.\n Args:\n name (str): Name of the collection to create.\n api_key (str): Your nomic API key,\n documents (List[Document]): List of documents to add to the vectorstore.\n embedding (Optional[Embeddings]): Embedding function. Defaults to None.\n ids (Optional[List[str]]): Optional list of document IDs. If None,\n ids will be auto created\n description (str): A description for your project.\n is_public (bool): Whether your project is publicly accessible.\n True by default.\n reset_project_if_exists (bool): Whether to reset this project if\n it already exists. Default False.\n Generally userful during development and testing.\n index_kwargs (Optional[dict]): Dict of kwargs for index creation.\n See https://docs.nomic.ai/atlas_api.html\n Returns:\n AtlasDB: Nomic's neural database and finest rhizomatic instrument\n \"\"\"\n if name is None or api_key is None:\n raise ValueError(\"`name` and `api_key` cannot be None.\")\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n return cls.from_texts(\n name=name,\n api_key=api_key,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"}
+{"id": "e8414c2133c8-7", "text": "return cls.from_texts(\n name=name,\n api_key=api_key,\n texts=texts,\n embedding=embedding,\n metadatas=metadatas,\n ids=ids,\n description=description,\n is_public=is_public,\n reset_project_if_exists=reset_project_if_exists,\n index_kwargs=index_kwargs,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"}
+{"id": "cf1e7617c4b5-0", "text": "Source code for langchain.vectorstores.myscale\n\"\"\"Wrapper around MyScale vector database.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nfrom hashlib import sha1\nfrom threading import Thread\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple\nfrom pydantic import BaseSettings\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger()\ndef has_mul_sub_str(s: str, *args: Any) -> bool:\n for a in args:\n if a not in s:\n return False\n return True\n[docs]class MyScaleSettings(BaseSettings):\n \"\"\"MyScale Client Configuration\n Attribute:\n myscale_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (dict): index build parameter.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,\n supported are ('l2', 'cosine', 'ip'). Defaults to 'cosine'.\n column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be same size to number of columns. For example:\n .. code-block:: python\n {", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"}
+{"id": "cf1e7617c4b5-1", "text": ".. code-block:: python\n {\n 'id': 'text_id',\n 'vector': 'text_embedding',\n 'text': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n Defaults to identity map.\n \"\"\"\n host: str = \"localhost\"\n port: int = 8443\n username: Optional[str] = None\n password: Optional[str] = None\n index_type: str = \"IVFFLAT\"\n index_param: Optional[Dict[str, str]] = None\n column_map: Dict[str, str] = {\n \"id\": \"id\",\n \"text\": \"text\",\n \"vector\": \"vector\",\n \"metadata\": \"metadata\",\n }\n database: str = \"default\"\n table: str = \"langchain\"\n metric: str = \"cosine\"\n def __getitem__(self, item: str) -> Any:\n return getattr(self, item)\n class Config:\n env_file = \".env\"\n env_prefix = \"myscale_\"\n env_file_encoding = \"utf-8\"\n[docs]class MyScale(VectorStore):\n \"\"\"Wrapper around MyScale vector database\n You need a `clickhouse-connect` python package, and a valid account\n to connect to MyScale.\n MyScale can not only search with simple vector indexes,\n it also supports complex query with multiple conditions,\n constraints and even sub-queries.\n For more information, please visit\n [myscale official site](https://docs.myscale.com/en/overview/)\n \"\"\"\n def __init__(\n self,\n embedding: Embeddings,\n config: Optional[MyScaleSettings] = None,\n **kwargs: Any,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"}
+{"id": "cf1e7617c4b5-2", "text": "config: Optional[MyScaleSettings] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"MyScale Wrapper to LangChain\n embedding_function (Embeddings):\n config (MyScaleSettings): Configuration to MyScale Client\n Other keyword arguments will pass into\n [clickhouse-connect](https://docs.myscale.com/)\n \"\"\"\n try:\n from clickhouse_connect import get_client\n except ImportError:\n raise ValueError(\n \"Could not import clickhouse connect python package. \"\n \"Please install it with `pip install clickhouse-connect`.\"\n )\n try:\n from tqdm import tqdm\n self.pgbar = tqdm\n except ImportError:\n # Just in case if tqdm is not installed\n self.pgbar = lambda x: x\n super().__init__()\n if config is not None:\n self.config = config\n else:\n self.config = MyScaleSettings()\n assert self.config\n assert self.config.host and self.config.port\n assert (\n self.config.column_map\n and self.config.database\n and self.config.table\n and self.config.metric\n )\n for k in [\"id\", \"vector\", \"text\", \"metadata\"]:\n assert k in self.config.column_map\n assert self.config.metric in [\"ip\", \"cosine\", \"l2\"]\n # initialize the schema\n dim = len(embedding.embed_query(\"try this out\"))\n index_params = (\n \", \" + \",\".join([f\"'{k}={v}'\" for k, v in self.config.index_param.items()])\n if self.config.index_param\n else \"\"\n )\n schema_ = f\"\"\"\n CREATE TABLE IF NOT EXISTS {self.config.database}.{self.config.table}(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"}
+{"id": "cf1e7617c4b5-3", "text": "CREATE TABLE IF NOT EXISTS {self.config.database}.{self.config.table}(\n {self.config.column_map['id']} String,\n {self.config.column_map['text']} String,\n {self.config.column_map['vector']} Array(Float32),\n {self.config.column_map['metadata']} JSON,\n CONSTRAINT cons_vec_len CHECK length(\\\n {self.config.column_map['vector']}) = {dim},\n VECTOR INDEX vidx {self.config.column_map['vector']} \\\n TYPE {self.config.index_type}(\\\n 'metric_type={self.config.metric}'{index_params})\n ) ENGINE = MergeTree ORDER BY {self.config.column_map['id']}\n \"\"\"\n self.dim = dim\n self.BS = \"\\\\\"\n self.must_escape = (\"\\\\\", \"'\")\n self.embedding_function = embedding.embed_query\n self.dist_order = \"ASC\" if self.config.metric in [\"cosine\", \"l2\"] else \"DESC\"\n # Create a connection to myscale\n self.client = get_client(\n host=self.config.host,\n port=self.config.port,\n username=self.config.username,\n password=self.config.password,\n **kwargs,\n )\n self.client.command(\"SET allow_experimental_object_type=1\")\n self.client.command(schema_)\n[docs] def escape_str(self, value: str) -> str:\n return \"\".join(f\"{self.BS}{c}\" if c in self.must_escape else c for c in value)\n def _build_istr(self, transac: Iterable, column_names: Iterable[str]) -> str:\n ks = \",\".join(column_names)\n _data = []\n for n in transac:\n n = \",\".join([f\"'{self.escape_str(str(_n))}'\" for _n in n])", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"}
+{"id": "cf1e7617c4b5-4", "text": "_data.append(f\"({n})\")\n i_str = f\"\"\"\n INSERT INTO TABLE \n {self.config.database}.{self.config.table}({ks})\n VALUES\n {','.join(_data)}\n \"\"\"\n return i_str\n def _insert(self, transac: Iterable, column_names: Iterable[str]) -> None:\n _i_str = self._build_istr(transac, column_names)\n self.client.command(_i_str)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n batch_size: int = 32,\n ids: Optional[Iterable[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n ids: Optional list of ids to associate with the texts.\n batch_size: Batch size of insertion\n metadata: Optional column data to be inserted\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n # Embed and create the documents\n ids = ids or [sha1(t.encode(\"utf-8\")).hexdigest() for t in texts]\n colmap_ = self.config.column_map\n transac = []\n column_names = {\n colmap_[\"id\"]: ids,\n colmap_[\"text\"]: texts,\n colmap_[\"vector\"]: map(self.embedding_function, texts),\n }\n metadatas = metadatas or [{} for _ in texts]\n column_names[colmap_[\"metadata\"]] = map(json.dumps, metadatas)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"}
+{"id": "cf1e7617c4b5-5", "text": "column_names[colmap_[\"metadata\"]] = map(json.dumps, metadatas)\n assert len(set(colmap_) - set(column_names)) >= 0\n keys, values = zip(*column_names.items())\n try:\n t = None\n for v in self.pgbar(\n zip(*values), desc=\"Inserting data...\", total=len(metadatas)\n ):\n assert len(v[keys.index(self.config.column_map[\"vector\"])]) == self.dim\n transac.append(v)\n if len(transac) == batch_size:\n if t:\n t.join()\n t = Thread(target=self._insert, args=[transac, keys])\n t.start()\n transac = []\n if len(transac) > 0:\n if t:\n t.join()\n self._insert(transac, keys)\n return [i for i in ids]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[Dict[Any, Any]]] = None,\n config: Optional[MyScaleSettings] = None,\n text_ids: Optional[Iterable[str]] = None,\n batch_size: int = 32,\n **kwargs: Any,\n ) -> MyScale:\n \"\"\"Create Myscale wrapper with existing texts\n Args:\n embedding_function (Embeddings): Function to extract text embedding\n texts (Iterable[str]): List or tuple of strings to be added", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"}
+{"id": "cf1e7617c4b5-6", "text": "texts (Iterable[str]): List or tuple of strings to be added\n config (MyScaleSettings, Optional): Myscale configuration\n text_ids (Optional[Iterable], optional): IDs for the texts.\n Defaults to None.\n batch_size (int, optional): Batchsize when transmitting data to MyScale.\n Defaults to 32.\n metadata (List[dict], optional): metadata to texts. Defaults to None.\n Other keyword arguments will pass into\n [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)\n Returns:\n MyScale Index\n \"\"\"\n ctx = cls(embedding, config, **kwargs)\n ctx.add_texts(texts, ids=text_ids, batch_size=batch_size, metadatas=metadatas)\n return ctx\n def __repr__(self) -> str:\n \"\"\"Text representation for myscale, prints backends, username and schemas.\n Easy to use with `str(Myscale())`\n Returns:\n repr: string to show connection info and data schema\n \"\"\"\n _repr = f\"\\033[92m\\033[1m{self.config.database}.{self.config.table} @ \"\n _repr += f\"{self.config.host}:{self.config.port}\\033[0m\\n\\n\"\n _repr += f\"\\033[1musername: {self.config.username}\\033[0m\\n\\nTable Schema:\\n\"\n _repr += \"-\" * 51 + \"\\n\"\n for r in self.client.query(\n f\"DESC {self.config.database}.{self.config.table}\"\n ).named_results():\n _repr += (", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"}
+{"id": "cf1e7617c4b5-7", "text": ").named_results():\n _repr += (\n f\"|\\033[94m{r['name']:24s}\\033[0m|\\033[96m{r['type']:24s}\\033[0m|\\n\"\n )\n _repr += \"-\" * 51 + \"\\n\"\n return _repr\n def _build_qstr(\n self, q_emb: List[float], topk: int, where_str: Optional[str] = None\n ) -> str:\n q_emb_str = \",\".join(map(str, q_emb))\n if where_str:\n where_str = f\"PREWHERE {where_str}\"\n else:\n where_str = \"\"\n q_str = f\"\"\"\n SELECT {self.config.column_map['text']}, \n {self.config.column_map['metadata']}, dist\n FROM {self.config.database}.{self.config.table}\n {where_str}\n ORDER BY distance({self.config.column_map['vector']}, [{q_emb_str}]) \n AS dist {self.dist_order}\n LIMIT {topk}\n \"\"\"\n return q_str\n[docs] def similarity_search(\n self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Perform a similarity search with MyScale\n Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end-user to fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"}
+{"id": "cf1e7617c4b5-8", "text": "of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. The default name for it is `metadata`.\n Returns:\n List[Document]: List of Documents\n \"\"\"\n return self.similarity_search_by_vector(\n self.embedding_function(query), k, where_str, **kwargs\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n where_str: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a similarity search with MyScale by vectors\n Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end-user to fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. The default name for it is `metadata`.\n Returns:\n List[Document]: List of (Document, similarity)\n \"\"\"\n q_str = self._build_qstr(embedding, k, where_str)\n try:\n return [\n Document(\n page_content=r[self.config.column_map[\"text\"]],\n metadata=r[self.config.column_map[\"metadata\"]],\n )\n for r in self.client.query(q_str).named_results()\n ]\n except Exception as e:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"}
+{"id": "cf1e7617c4b5-9", "text": "]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] def similarity_search_with_relevance_scores(\n self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"Perform a similarity search with MyScale\n Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end-user to fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. The default name for it is `metadata`.\n Returns:\n List[Document]: List of documents most similar to the query text\n and cosine distance in float for each.\n Lower score represents more similarity.\n \"\"\"\n q_str = self._build_qstr(self.embedding_function(query), k, where_str)\n try:\n return [\n (\n Document(\n page_content=r[self.config.column_map[\"text\"]],\n metadata=r[self.config.column_map[\"metadata\"]],\n ),\n r[\"dist\"],\n )\n for r in self.client.query(q_str).named_results()\n ]\n except Exception as e:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"}
+{"id": "cf1e7617c4b5-10", "text": "]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] def drop(self) -> None:\n \"\"\"\n Helper function: Drop data\n \"\"\"\n self.client.command(\n f\"DROP TABLE IF EXISTS {self.config.database}.{self.config.table}\"\n )\n @property\n def metadata_column(self) -> str:\n return self.config.column_map[\"metadata\"]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"}
+{"id": "02fc74fe0b96-0", "text": "Source code for langchain.vectorstores.opensearch_vector_search\n\"\"\"Wrapper around OpenSearch vector database.\"\"\"\nfrom __future__ import annotations\nimport uuid\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nfrom langchain.vectorstores.base import VectorStore\nIMPORT_OPENSEARCH_PY_ERROR = (\n \"Could not import OpenSearch. Please install it with `pip install opensearch-py`.\"\n)\nSCRIPT_SCORING_SEARCH = \"script_scoring\"\nPAINLESS_SCRIPTING_SEARCH = \"painless_scripting\"\nMATCH_ALL_QUERY = {\"match_all\": {}} # type: Dict\ndef _import_opensearch() -> Any:\n \"\"\"Import OpenSearch if available, otherwise raise error.\"\"\"\n try:\n from opensearchpy import OpenSearch\n except ImportError:\n raise ValueError(IMPORT_OPENSEARCH_PY_ERROR)\n return OpenSearch\ndef _import_bulk() -> Any:\n \"\"\"Import bulk if available, otherwise raise error.\"\"\"\n try:\n from opensearchpy.helpers import bulk\n except ImportError:\n raise ValueError(IMPORT_OPENSEARCH_PY_ERROR)\n return bulk\ndef _import_not_found_error() -> Any:\n \"\"\"Import not found error if available, otherwise raise error.\"\"\"\n try:\n from opensearchpy.exceptions import NotFoundError\n except ImportError:\n raise ValueError(IMPORT_OPENSEARCH_PY_ERROR)\n return NotFoundError\ndef _get_opensearch_client(opensearch_url: str, **kwargs: Any) -> Any:\n \"\"\"Get OpenSearch client from the opensearch_url, otherwise raise error.\"\"\"\n try:\n opensearch = _import_opensearch()", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"}
+{"id": "02fc74fe0b96-1", "text": "try:\n opensearch = _import_opensearch()\n client = opensearch(opensearch_url, **kwargs)\n except ValueError as e:\n raise ValueError(\n f\"OpenSearch client string provided is not in proper format. \"\n f\"Got error: {e} \"\n )\n return client\ndef _validate_embeddings_and_bulk_size(embeddings_length: int, bulk_size: int) -> None:\n \"\"\"Validate Embeddings Length and Bulk Size.\"\"\"\n if embeddings_length == 0:\n raise RuntimeError(\"Embeddings size is zero\")\n if bulk_size < embeddings_length:\n raise RuntimeError(\n f\"The embeddings count, {embeddings_length} is more than the \"\n f\"[bulk_size], {bulk_size}. Increase the value of [bulk_size].\"\n )\ndef _bulk_ingest_embeddings(\n client: Any,\n index_name: str,\n embeddings: List[List[float]],\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n vector_field: str = \"vector_field\",\n text_field: str = \"text\",\n mapping: Dict = {},\n) -> List[str]:\n \"\"\"Bulk Ingest Embeddings into given index.\"\"\"\n bulk = _import_bulk()\n not_found_error = _import_not_found_error()\n requests = []\n ids = []\n mapping = mapping\n try:\n client.indices.get(index=index_name)\n except not_found_error:\n client.indices.create(index=index_name, body=mapping)\n for i, text in enumerate(texts):\n metadata = metadatas[i] if metadatas else {}\n _id = str(uuid.uuid4())\n request = {\n \"_op_type\": \"index\",", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"}
+{"id": "02fc74fe0b96-2", "text": "request = {\n \"_op_type\": \"index\",\n \"_index\": index_name,\n vector_field: embeddings[i],\n text_field: text,\n \"metadata\": metadata,\n \"_id\": _id,\n }\n requests.append(request)\n ids.append(_id)\n bulk(client, requests)\n client.indices.refresh(index=index_name)\n return ids\ndef _default_scripting_text_mapping(\n dim: int,\n vector_field: str = \"vector_field\",\n) -> Dict:\n \"\"\"For Painless Scripting or Script Scoring,the default mapping to create index.\"\"\"\n return {\n \"mappings\": {\n \"properties\": {\n vector_field: {\"type\": \"knn_vector\", \"dimension\": dim},\n }\n }\n }\ndef _default_text_mapping(\n dim: int,\n engine: str = \"nmslib\",\n space_type: str = \"l2\",\n ef_search: int = 512,\n ef_construction: int = 512,\n m: int = 16,\n vector_field: str = \"vector_field\",\n) -> Dict:\n \"\"\"For Approximate k-NN Search, this is the default mapping to create index.\"\"\"\n return {\n \"settings\": {\"index\": {\"knn\": True, \"knn.algo_param.ef_search\": ef_search}},\n \"mappings\": {\n \"properties\": {\n vector_field: {\n \"type\": \"knn_vector\",\n \"dimension\": dim,\n \"method\": {\n \"name\": \"hnsw\",\n \"space_type\": space_type,\n \"engine\": engine,\n \"parameters\": {\"ef_construction\": ef_construction, \"m\": m},", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"}
+{"id": "02fc74fe0b96-3", "text": "\"parameters\": {\"ef_construction\": ef_construction, \"m\": m},\n },\n }\n }\n },\n }\ndef _default_approximate_search_query(\n query_vector: List[float],\n k: int = 4,\n vector_field: str = \"vector_field\",\n) -> Dict:\n \"\"\"For Approximate k-NN Search, this is the default query.\"\"\"\n return {\n \"size\": k,\n \"query\": {\"knn\": {vector_field: {\"vector\": query_vector, \"k\": k}}},\n }\ndef _approximate_search_query_with_boolean_filter(\n query_vector: List[float],\n boolean_filter: Dict,\n k: int = 4,\n vector_field: str = \"vector_field\",\n subquery_clause: str = \"must\",\n) -> Dict:\n \"\"\"For Approximate k-NN Search, with Boolean Filter.\"\"\"\n return {\n \"size\": k,\n \"query\": {\n \"bool\": {\n \"filter\": boolean_filter,\n subquery_clause: [\n {\"knn\": {vector_field: {\"vector\": query_vector, \"k\": k}}}\n ],\n }\n },\n }\ndef _approximate_search_query_with_lucene_filter(\n query_vector: List[float],\n lucene_filter: Dict,\n k: int = 4,\n vector_field: str = \"vector_field\",\n) -> Dict:\n \"\"\"For Approximate k-NN Search, with Lucene Filter.\"\"\"\n search_query = _default_approximate_search_query(\n query_vector, k=k, vector_field=vector_field\n )\n search_query[\"query\"][\"knn\"][vector_field][\"filter\"] = lucene_filter", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"}
+{"id": "02fc74fe0b96-4", "text": "search_query[\"query\"][\"knn\"][vector_field][\"filter\"] = lucene_filter\n return search_query\ndef _default_script_query(\n query_vector: List[float],\n space_type: str = \"l2\",\n pre_filter: Dict = MATCH_ALL_QUERY,\n vector_field: str = \"vector_field\",\n) -> Dict:\n \"\"\"For Script Scoring Search, this is the default query.\"\"\"\n return {\n \"query\": {\n \"script_score\": {\n \"query\": pre_filter,\n \"script\": {\n \"source\": \"knn_score\",\n \"lang\": \"knn\",\n \"params\": {\n \"field\": vector_field,\n \"query_value\": query_vector,\n \"space_type\": space_type,\n },\n },\n }\n }\n }\ndef __get_painless_scripting_source(\n space_type: str, query_vector: List[float], vector_field: str = \"vector_field\"\n) -> str:\n \"\"\"For Painless Scripting, it returns the script source based on space type.\"\"\"\n source_value = (\n \"(1.0 + \"\n + space_type\n + \"(\"\n + str(query_vector)\n + \", doc['\"\n + vector_field\n + \"']))\"\n )\n if space_type == \"cosineSimilarity\":\n return source_value\n else:\n return \"1/\" + source_value\ndef _default_painless_scripting_query(\n query_vector: List[float],\n space_type: str = \"l2Squared\",\n pre_filter: Dict = MATCH_ALL_QUERY,\n vector_field: str = \"vector_field\",\n) -> Dict:\n \"\"\"For Painless Scripting Search, this is the default query.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"}
+{"id": "02fc74fe0b96-5", "text": "\"\"\"For Painless Scripting Search, this is the default query.\"\"\"\n source = __get_painless_scripting_source(space_type, query_vector)\n return {\n \"query\": {\n \"script_score\": {\n \"query\": pre_filter,\n \"script\": {\n \"source\": source,\n \"params\": {\n \"field\": vector_field,\n \"query_value\": query_vector,\n },\n },\n }\n }\n }\ndef _get_kwargs_value(kwargs: Any, key: str, default_value: Any) -> Any:\n \"\"\"Get the value of the key if present. Else get the default_value.\"\"\"\n if key in kwargs:\n return kwargs.get(key)\n return default_value\n[docs]class OpenSearchVectorSearch(VectorStore):\n \"\"\"Wrapper around OpenSearch as a vector database.\n Example:\n .. code-block:: python\n from langchain import OpenSearchVectorSearch\n opensearch_vector_search = OpenSearchVectorSearch(\n \"http://localhost:9200\",\n \"embeddings\",\n embedding_function\n )\n \"\"\"\n def __init__(\n self,\n opensearch_url: str,\n index_name: str,\n embedding_function: Embeddings,\n **kwargs: Any,\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n self.embedding_function = embedding_function\n self.index_name = index_name\n self.client = _get_opensearch_client(opensearch_url, **kwargs)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n bulk_size: int = 500,\n **kwargs: Any,\n ) -> List[str]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"}
+{"id": "02fc74fe0b96-6", "text": "**kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n bulk_size: Bulk API request count; Default: 500\n Returns:\n List of ids from adding the texts into the vectorstore.\n Optional Args:\n vector_field: Document field embeddings are stored in. Defaults to\n \"vector_field\".\n text_field: Document field the text of the document is stored in. Defaults\n to \"text\".\n \"\"\"\n embeddings = self.embedding_function.embed_documents(list(texts))\n _validate_embeddings_and_bulk_size(len(embeddings), bulk_size)\n text_field = _get_kwargs_value(kwargs, \"text_field\", \"text\")\n dim = len(embeddings[0])\n engine = _get_kwargs_value(kwargs, \"engine\", \"nmslib\")\n space_type = _get_kwargs_value(kwargs, \"space_type\", \"l2\")\n ef_search = _get_kwargs_value(kwargs, \"ef_search\", 512)\n ef_construction = _get_kwargs_value(kwargs, \"ef_construction\", 512)\n m = _get_kwargs_value(kwargs, \"m\", 16)\n vector_field = _get_kwargs_value(kwargs, \"vector_field\", \"vector_field\")\n mapping = _default_text_mapping(\n dim, engine, space_type, ef_search, ef_construction, m, vector_field\n )\n return _bulk_ingest_embeddings(\n self.client,\n self.index_name,\n embeddings,\n texts,\n metadatas,\n vector_field,\n text_field,\n mapping,\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"}
+{"id": "02fc74fe0b96-7", "text": "vector_field,\n text_field,\n mapping,\n )\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n By default supports Approximate Search.\n Also supports Script Scoring and Painless Scripting.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query.\n Optional Args:\n vector_field: Document field embeddings are stored in. Defaults to\n \"vector_field\".\n text_field: Document field the text of the document is stored in. Defaults\n to \"text\".\n metadata_field: Document field that metadata is stored in. Defaults to\n \"metadata\".\n Can be set to a special value \"*\" to include the entire document.\n Optional Args for Approximate Search:\n search_type: \"approximate_search\"; default: \"approximate_search\"\n boolean_filter: A Boolean filter consists of a Boolean query that\n contains a k-NN query and a filter.\n subquery_clause: Query clause on the knn vector field; default: \"must\"\n lucene_filter: the Lucene algorithm decides whether to perform an exact\n k-NN search with pre-filtering or an approximate search with modified\n post-filtering.\n Optional Args for Script Scoring Search:\n search_type: \"script_scoring\"; default: \"approximate_search\"\n space_type: \"l2\", \"l1\", \"linf\", \"cosinesimil\", \"innerproduct\",\n \"hammingbit\"; default: \"l2\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"}
+{"id": "02fc74fe0b96-8", "text": "\"hammingbit\"; default: \"l2\"\n pre_filter: script_score query to pre-filter documents before identifying\n nearest neighbors; default: {\"match_all\": {}}\n Optional Args for Painless Scripting Search:\n search_type: \"painless_scripting\"; default: \"approximate_search\"\n space_type: \"l2Squared\", \"l1Norm\", \"cosineSimilarity\"; default: \"l2Squared\"\n pre_filter: script_score query to pre-filter documents before identifying\n nearest neighbors; default: {\"match_all\": {}}\n \"\"\"\n docs_with_scores = self.similarity_search_with_score(query, k, **kwargs)\n return [doc[0] for doc in docs_with_scores]\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and it's scores most similar to query.\n By default supports Approximate Search.\n Also supports Script Scoring and Painless Scripting.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents along with its scores most similar to the query.\n Optional Args:\n same as `similarity_search`\n \"\"\"\n embedding = self.embedding_function.embed_query(query)\n search_type = _get_kwargs_value(kwargs, \"search_type\", \"approximate_search\")\n text_field = _get_kwargs_value(kwargs, \"text_field\", \"text\")\n metadata_field = _get_kwargs_value(kwargs, \"metadata_field\", \"metadata\")\n vector_field = _get_kwargs_value(kwargs, \"vector_field\", \"vector_field\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"}
+{"id": "02fc74fe0b96-9", "text": "vector_field = _get_kwargs_value(kwargs, \"vector_field\", \"vector_field\")\n if search_type == \"approximate_search\":\n boolean_filter = _get_kwargs_value(kwargs, \"boolean_filter\", {})\n subquery_clause = _get_kwargs_value(kwargs, \"subquery_clause\", \"must\")\n lucene_filter = _get_kwargs_value(kwargs, \"lucene_filter\", {})\n if boolean_filter != {} and lucene_filter != {}:\n raise ValueError(\n \"Both `boolean_filter` and `lucene_filter` are provided which \"\n \"is invalid\"\n )\n if boolean_filter != {}:\n search_query = _approximate_search_query_with_boolean_filter(\n embedding,\n boolean_filter,\n k=k,\n vector_field=vector_field,\n subquery_clause=subquery_clause,\n )\n elif lucene_filter != {}:\n search_query = _approximate_search_query_with_lucene_filter(\n embedding, lucene_filter, k=k, vector_field=vector_field\n )\n else:\n search_query = _default_approximate_search_query(\n embedding, k=k, vector_field=vector_field\n )\n elif search_type == SCRIPT_SCORING_SEARCH:\n space_type = _get_kwargs_value(kwargs, \"space_type\", \"l2\")\n pre_filter = _get_kwargs_value(kwargs, \"pre_filter\", MATCH_ALL_QUERY)\n search_query = _default_script_query(\n embedding, space_type, pre_filter, vector_field\n )\n elif search_type == PAINLESS_SCRIPTING_SEARCH:\n space_type = _get_kwargs_value(kwargs, \"space_type\", \"l2Squared\")\n pre_filter = _get_kwargs_value(kwargs, \"pre_filter\", MATCH_ALL_QUERY)\n search_query = _default_painless_scripting_query(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"}
+{"id": "02fc74fe0b96-10", "text": "search_query = _default_painless_scripting_query(\n embedding, space_type, pre_filter, vector_field\n )\n else:\n raise ValueError(\"Invalid `search_type` provided as an argument\")\n response = self.client.search(index=self.index_name, body=search_query)\n hits = [hit for hit in response[\"hits\"][\"hits\"][:k]]\n documents_with_scores = [\n (\n Document(\n page_content=hit[\"_source\"][text_field],\n metadata=hit[\"_source\"]\n if metadata_field == \"*\" or metadata_field not in hit[\"_source\"]\n else hit[\"_source\"][metadata_field],\n ),\n hit[\"_score\"],\n )\n for hit in hits\n ]\n return documents_with_scores\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n bulk_size: int = 500,\n **kwargs: Any,\n ) -> OpenSearchVectorSearch:\n \"\"\"Construct OpenSearchVectorSearch wrapper from raw documents.\n Example:\n .. code-block:: python\n from langchain import OpenSearchVectorSearch\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n opensearch_vector_search = OpenSearchVectorSearch.from_texts(\n texts,\n embeddings,\n opensearch_url=\"http://localhost:9200\"\n )\n OpenSearch by default supports Approximate Search powered by nmslib, faiss\n and lucene engines recommended for large datasets. Also supports brute force\n search through Script Scoring and Painless Scripting.\n Optional Args:\n vector_field: Document field embeddings are stored in. Defaults to", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"}
+{"id": "02fc74fe0b96-11", "text": "Optional Args:\n vector_field: Document field embeddings are stored in. Defaults to\n \"vector_field\".\n text_field: Document field the text of the document is stored in. Defaults\n to \"text\".\n Optional Keyword Args for Approximate Search:\n engine: \"nmslib\", \"faiss\", \"lucene\"; default: \"nmslib\"\n space_type: \"l2\", \"l1\", \"cosinesimil\", \"linf\", \"innerproduct\"; default: \"l2\"\n ef_search: Size of the dynamic list used during k-NN searches. Higher values\n lead to more accurate but slower searches; default: 512\n ef_construction: Size of the dynamic list used during k-NN graph creation.\n Higher values lead to more accurate graph but slower indexing speed;\n default: 512\n m: Number of bidirectional links created for each new element. Large impact\n on memory consumption. Between 2 and 100; default: 16\n Keyword Args for Script Scoring or Painless Scripting:\n is_appx_search: False\n \"\"\"\n opensearch_url = get_from_dict_or_env(\n kwargs, \"opensearch_url\", \"OPENSEARCH_URL\"\n )\n # List of arguments that needs to be removed from kwargs\n # before passing kwargs to get opensearch client\n keys_list = [\n \"opensearch_url\",\n \"index_name\",\n \"is_appx_search\",\n \"vector_field\",\n \"text_field\",\n \"engine\",\n \"space_type\",\n \"ef_search\",\n \"ef_construction\",\n \"m\",\n ]\n embeddings = embedding.embed_documents(texts)\n _validate_embeddings_and_bulk_size(len(embeddings), bulk_size)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"}
+{"id": "02fc74fe0b96-12", "text": "_validate_embeddings_and_bulk_size(len(embeddings), bulk_size)\n dim = len(embeddings[0])\n # Get the index name from either from kwargs or ENV Variable\n # before falling back to random generation\n index_name = get_from_dict_or_env(\n kwargs, \"index_name\", \"OPENSEARCH_INDEX_NAME\", default=uuid.uuid4().hex\n )\n is_appx_search = _get_kwargs_value(kwargs, \"is_appx_search\", True)\n vector_field = _get_kwargs_value(kwargs, \"vector_field\", \"vector_field\")\n text_field = _get_kwargs_value(kwargs, \"text_field\", \"text\")\n if is_appx_search:\n engine = _get_kwargs_value(kwargs, \"engine\", \"nmslib\")\n space_type = _get_kwargs_value(kwargs, \"space_type\", \"l2\")\n ef_search = _get_kwargs_value(kwargs, \"ef_search\", 512)\n ef_construction = _get_kwargs_value(kwargs, \"ef_construction\", 512)\n m = _get_kwargs_value(kwargs, \"m\", 16)\n mapping = _default_text_mapping(\n dim, engine, space_type, ef_search, ef_construction, m, vector_field\n )\n else:\n mapping = _default_scripting_text_mapping(dim)\n [kwargs.pop(key, None) for key in keys_list]\n client = _get_opensearch_client(opensearch_url, **kwargs)\n _bulk_ingest_embeddings(\n client,\n index_name,\n embeddings,\n texts,\n metadatas,\n vector_field,\n text_field,\n mapping,\n )\n return cls(opensearch_url, index_name, embedding, **kwargs)\nBy Harrison Chase", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"}
+{"id": "02fc74fe0b96-13", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"}
+{"id": "5fa5ef04557d-0", "text": "Source code for langchain.vectorstores.milvus\n\"\"\"Wrapper around the Milvus vector database.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Iterable, List, Optional, Tuple, Union\nfrom uuid import uuid4\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nlogger = logging.getLogger(__name__)\nDEFAULT_MILVUS_CONNECTION = {\n \"host\": \"localhost\",\n \"port\": \"19530\",\n \"user\": \"\",\n \"password\": \"\",\n \"secure\": False,\n}\n[docs]class Milvus(VectorStore):\n \"\"\"Wrapper around the Milvus vector database.\"\"\"\n def __init__(\n self,\n embedding_function: Embeddings,\n collection_name: str = \"LangChainCollection\",\n connection_args: Optional[dict[str, Any]] = None,\n consistency_level: str = \"Session\",\n index_params: Optional[dict] = None,\n search_params: Optional[dict] = None,\n drop_old: Optional[bool] = False,\n ):\n \"\"\"Initialize wrapper around the milvus vector database.\n In order to use this you need to have `pymilvus` installed and a\n running Milvus/Zilliz Cloud instance.\n See the following documentation for how to run a Milvus instance:\n https://milvus.io/docs/install_standalone-docker.md\n If looking for a hosted Milvus, take a looka this documentation:\n https://zilliz.com/cloud\n IF USING L2/IP metric IT IS HIGHLY SUGGESTED TO NORMALIZE YOUR DATA.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-1", "text": "The connection args used for this class comes in the form of a dict,\n here are a few of the options:\n address (str): The actual address of Milvus\n instance. Example address: \"localhost:19530\"\n uri (str): The uri of Milvus instance. Example uri:\n \"http://randomwebsite:19530\",\n \"tcp:foobarsite:19530\",\n \"https://ok.s3.south.com:19530\".\n host (str): The host of Milvus instance. Default at \"localhost\",\n PyMilvus will fill in the default host if only port is provided.\n port (str/int): The port of Milvus instance. Default at 19530, PyMilvus\n will fill in the default port if only host is provided.\n user (str): Use which user to connect to Milvus instance. If user and\n password are provided, we will add related header in every RPC call.\n password (str): Required when user is provided. The password\n corresponding to the user.\n secure (bool): Default is false. If set to true, tls will be enabled.\n client_key_path (str): If use tls two-way authentication, need to\n write the client.key path.\n client_pem_path (str): If use tls two-way authentication, need to\n write the client.pem path.\n ca_pem_path (str): If use tls two-way authentication, need to write\n the ca.pem path.\n server_pem_path (str): If use tls one-way authentication, need to\n write the server.pem path.\n server_name (str): If use tls, need to write the common name.\n Args:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-2", "text": "Args:\n embedding_function (Embeddings): Function used to embed the text.\n collection_name (str): Which Milvus collection to use. Defaults to\n \"LangChainCollection\".\n connection_args (Optional[dict[str, any]]): The arguments for connection to\n Milvus/Zilliz instance. Defaults to DEFAULT_MILVUS_CONNECTION.\n consistency_level (str): The consistency level to use for a collection.\n Defaults to \"Session\".\n index_params (Optional[dict]): Which index params to use. Defaults to\n HNSW/AUTOINDEX depending on service.\n search_params (Optional[dict]): Which search params to use. Defaults to\n default of index.\n drop_old (Optional[bool]): Whether to drop the current collection. Defaults\n to False.\n \"\"\"\n try:\n from pymilvus import Collection, utility\n except ImportError:\n raise ValueError(\n \"Could not import pymilvus python package. \"\n \"Please install it with `pip install pymilvus`.\"\n )\n # Default search params when one is not provided.\n self.default_search_params = {\n \"IVF_FLAT\": {\"metric_type\": \"L2\", \"params\": {\"nprobe\": 10}},\n \"IVF_SQ8\": {\"metric_type\": \"L2\", \"params\": {\"nprobe\": 10}},\n \"IVF_PQ\": {\"metric_type\": \"L2\", \"params\": {\"nprobe\": 10}},\n \"HNSW\": {\"metric_type\": \"L2\", \"params\": {\"ef\": 10}},\n \"RHNSW_FLAT\": {\"metric_type\": \"L2\", \"params\": {\"ef\": 10}},", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-3", "text": "\"RHNSW_SQ\": {\"metric_type\": \"L2\", \"params\": {\"ef\": 10}},\n \"RHNSW_PQ\": {\"metric_type\": \"L2\", \"params\": {\"ef\": 10}},\n \"IVF_HNSW\": {\"metric_type\": \"L2\", \"params\": {\"nprobe\": 10, \"ef\": 10}},\n \"ANNOY\": {\"metric_type\": \"L2\", \"params\": {\"search_k\": 10}},\n \"AUTOINDEX\": {\"metric_type\": \"L2\", \"params\": {}},\n }\n self.embedding_func = embedding_function\n self.collection_name = collection_name\n self.index_params = index_params\n self.search_params = search_params\n self.consistency_level = consistency_level\n # In order for a collection to be compatible, pk needs to be auto'id and int\n self._primary_field = \"pk\"\n # In order for compatiblility, the text field will need to be called \"text\"\n self._text_field = \"text\"\n # In order for compatbility, the vector field needs to be called \"vector\"\n self._vector_field = \"vector\"\n self.fields: list[str] = []\n # Create the connection to the server\n if connection_args is None:\n connection_args = DEFAULT_MILVUS_CONNECTION\n self.alias = self._create_connection_alias(connection_args)\n self.col: Optional[Collection] = None\n # Grab the existing colection if it exists\n if utility.has_collection(self.collection_name, using=self.alias):\n self.col = Collection(\n self.collection_name,\n using=self.alias,\n )\n # If need to drop old, drop it\n if drop_old and isinstance(self.col, Collection):", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-4", "text": "if drop_old and isinstance(self.col, Collection):\n self.col.drop()\n self.col = None\n # Initialize the vector store\n self._init()\n def _create_connection_alias(self, connection_args: dict) -> str:\n \"\"\"Create the connection to the Milvus server.\"\"\"\n from pymilvus import MilvusException, connections\n # Grab the connection arguments that are used for checking existing connection\n host: str = connection_args.get(\"host\", None)\n port: Union[str, int] = connection_args.get(\"port\", None)\n address: str = connection_args.get(\"address\", None)\n uri: str = connection_args.get(\"uri\", None)\n user = connection_args.get(\"user\", None)\n # Order of use is host/port, uri, address\n if host is not None and port is not None:\n given_address = str(host) + \":\" + str(port)\n elif uri is not None:\n given_address = uri.split(\"https://\")[1]\n elif address is not None:\n given_address = address\n else:\n given_address = None\n logger.debug(\"Missing standard address type for reuse atttempt\")\n # User defaults to empty string when getting connection info\n if user is not None:\n tmp_user = user\n else:\n tmp_user = \"\"\n # If a valid address was given, then check if a connection exists\n if given_address is not None:\n for con in connections.list_connections():\n addr = connections.get_connection_addr(con[0])\n if (\n con[1]\n and (\"address\" in addr)\n and (addr[\"address\"] == given_address)\n and (\"user\" in addr)\n and (addr[\"user\"] == tmp_user)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-5", "text": "and (addr[\"user\"] == tmp_user)\n ):\n logger.debug(\"Using previous connection: %s\", con[0])\n return con[0]\n # Generate a new connection if one doesnt exist\n alias = uuid4().hex\n try:\n connections.connect(alias=alias, **connection_args)\n logger.debug(\"Created new connection using: %s\", alias)\n return alias\n except MilvusException as e:\n logger.error(\"Failed to create new connection using: %s\", alias)\n raise e\n def _init(\n self, embeddings: Optional[list] = None, metadatas: Optional[list[dict]] = None\n ) -> None:\n if embeddings is not None:\n self._create_collection(embeddings, metadatas)\n self._extract_fields()\n self._create_index()\n self._create_search_params()\n self._load()\n def _create_collection(\n self, embeddings: list, metadatas: Optional[list[dict]] = None\n ) -> None:\n from pymilvus import (\n Collection,\n CollectionSchema,\n DataType,\n FieldSchema,\n MilvusException,\n )\n from pymilvus.orm.types import infer_dtype_bydata\n # Determine embedding dim\n dim = len(embeddings[0])\n fields = []\n # Determine metadata schema\n if metadatas:\n # Create FieldSchema for each entry in metadata.\n for key, value in metadatas[0].items():\n # Infer the corresponding datatype of the metadata\n dtype = infer_dtype_bydata(value)\n # Datatype isnt compatible\n if dtype == DataType.UNKNOWN or dtype == DataType.NONE:\n logger.error(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-6", "text": "if dtype == DataType.UNKNOWN or dtype == DataType.NONE:\n logger.error(\n \"Failure to create collection, unrecognized dtype for key: %s\",\n key,\n )\n raise ValueError(f\"Unrecognized datatype for {key}.\")\n # Dataype is a string/varchar equivalent\n elif dtype == DataType.VARCHAR:\n fields.append(FieldSchema(key, DataType.VARCHAR, max_length=65_535))\n else:\n fields.append(FieldSchema(key, dtype))\n # Create the text field\n fields.append(\n FieldSchema(self._text_field, DataType.VARCHAR, max_length=65_535)\n )\n # Create the primary key field\n fields.append(\n FieldSchema(\n self._primary_field, DataType.INT64, is_primary=True, auto_id=True\n )\n )\n # Create the vector field, supports binary or float vectors\n fields.append(\n FieldSchema(self._vector_field, infer_dtype_bydata(embeddings[0]), dim=dim)\n )\n # Create the schema for the collection\n schema = CollectionSchema(fields)\n # Create the collection\n try:\n self.col = Collection(\n name=self.collection_name,\n schema=schema,\n consistency_level=self.consistency_level,\n using=self.alias,\n )\n except MilvusException as e:\n logger.error(\n \"Failed to create collection: %s error: %s\", self.collection_name, e\n )\n raise e\n def _extract_fields(self) -> None:\n \"\"\"Grab the existing fields from the Collection\"\"\"\n from pymilvus import Collection\n if isinstance(self.col, Collection):\n schema = self.col.schema\n for x in schema.fields:\n self.fields.append(x.name)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-7", "text": "for x in schema.fields:\n self.fields.append(x.name)\n # Since primary field is auto-id, no need to track it\n self.fields.remove(self._primary_field)\n def _get_index(self) -> Optional[dict[str, Any]]:\n \"\"\"Return the vector index information if it exists\"\"\"\n from pymilvus import Collection\n if isinstance(self.col, Collection):\n for x in self.col.indexes:\n if x.field_name == self._vector_field:\n return x.to_dict()\n return None\n def _create_index(self) -> None:\n \"\"\"Create a index on the collection\"\"\"\n from pymilvus import Collection, MilvusException\n if isinstance(self.col, Collection) and self._get_index() is None:\n try:\n # If no index params, use a default HNSW based one\n if self.index_params is None:\n self.index_params = {\n \"metric_type\": \"L2\",\n \"index_type\": \"HNSW\",\n \"params\": {\"M\": 8, \"efConstruction\": 64},\n }\n try:\n self.col.create_index(\n self._vector_field,\n index_params=self.index_params,\n using=self.alias,\n )\n # If default did not work, most likely on Zilliz Cloud\n except MilvusException:\n # Use AUTOINDEX based index\n self.index_params = {\n \"metric_type\": \"L2\",\n \"index_type\": \"AUTOINDEX\",\n \"params\": {},\n }\n self.col.create_index(\n self._vector_field,\n index_params=self.index_params,\n using=self.alias,\n )\n logger.debug(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-8", "text": "using=self.alias,\n )\n logger.debug(\n \"Successfully created an index on collection: %s\",\n self.collection_name,\n )\n except MilvusException as e:\n logger.error(\n \"Failed to create an index on collection: %s\", self.collection_name\n )\n raise e\n def _create_search_params(self) -> None:\n \"\"\"Generate search params based on the current index type\"\"\"\n from pymilvus import Collection\n if isinstance(self.col, Collection) and self.search_params is None:\n index = self._get_index()\n if index is not None:\n index_type: str = index[\"index_param\"][\"index_type\"]\n metric_type: str = index[\"index_param\"][\"metric_type\"]\n self.search_params = self.default_search_params[index_type]\n self.search_params[\"metric_type\"] = metric_type\n def _load(self) -> None:\n \"\"\"Load the collection if available.\"\"\"\n from pymilvus import Collection\n if isinstance(self.col, Collection) and self._get_index() is not None:\n self.col.load()\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n timeout: Optional[int] = None,\n batch_size: int = 1000,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Insert text data into Milvus.\n Inserting data when the collection has not be made yet will result\n in creating a new Collection. The data of the first entity decides\n the schema of the new collection, the dim is extracted from the first\n embedding and the columns are decided by the first metadata dict.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-9", "text": "embedding and the columns are decided by the first metadata dict.\n Metada keys will need to be present for all inserted values. At\n the moment there is no None equivalent in Milvus.\n Args:\n texts (Iterable[str]): The texts to embed, it is assumed\n that they all fit in memory.\n metadatas (Optional[List[dict]]): Metadata dicts attached to each of\n the texts. Defaults to None.\n timeout (Optional[int]): Timeout for each batch insert. Defaults\n to None.\n batch_size (int, optional): Batch size to use for insertion.\n Defaults to 1000.\n Raises:\n MilvusException: Failure to add texts\n Returns:\n List[str]: The resulting keys for each inserted element.\n \"\"\"\n from pymilvus import Collection, MilvusException\n texts = list(texts)\n try:\n embeddings = self.embedding_func.embed_documents(texts)\n except NotImplementedError:\n embeddings = [self.embedding_func.embed_query(x) for x in texts]\n if len(embeddings) == 0:\n logger.debug(\"Nothing to insert, skipping.\")\n return []\n # If the collection hasnt been initialized yet, perform all steps to do so\n if not isinstance(self.col, Collection):\n self._init(embeddings, metadatas)\n # Dict to hold all insert columns\n insert_dict: dict[str, list] = {\n self._text_field: texts,\n self._vector_field: embeddings,\n }\n # Collect the metadata into the insert dict.\n if metadatas is not None:\n for d in metadatas:\n for key, value in d.items():\n if key in self.fields:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-10", "text": "for key, value in d.items():\n if key in self.fields:\n insert_dict.setdefault(key, []).append(value)\n # Total insert count\n vectors: list = insert_dict[self._vector_field]\n total_count = len(vectors)\n pks: list[str] = []\n assert isinstance(self.col, Collection)\n for i in range(0, total_count, batch_size):\n # Grab end index\n end = min(i + batch_size, total_count)\n # Convert dict to list of lists batch for insertion\n insert_list = [insert_dict[x][i:end] for x in self.fields]\n # Insert into the collection.\n try:\n res: Collection\n res = self.col.insert(insert_list, timeout=timeout, **kwargs)\n pks.extend(res.primary_keys)\n except MilvusException as e:\n logger.error(\n \"Failed to insert batch starting at entity: %s/%s\", i, total_count\n )\n raise e\n return pks\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n param: Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a similarity search against the query string.\n Args:\n query (str): The text to search.\n k (int, optional): How many results to return. Defaults to 4.\n param (dict, optional): The search params for the index type.\n Defaults to None.\n expr (str, optional): Filtering expression. Defaults to None.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-11", "text": "expr (str, optional): Filtering expression. Defaults to None.\n timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[Document]: Document results for search.\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n res = self.similarity_search_with_score(\n query=query, k=k, param=param, expr=expr, timeout=timeout, **kwargs\n )\n return [doc for doc, _ in res]\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n param: Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a similarity search against the query string.\n Args:\n embedding (List[float]): The embedding vector to search.\n k (int, optional): How many results to return. Defaults to 4.\n param (dict, optional): The search params for the index type.\n Defaults to None.\n expr (str, optional): Filtering expression. Defaults to None.\n timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[Document]: Document results for search.\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n res = self.similarity_search_with_score_by_vector(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-12", "text": "return []\n res = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, param=param, expr=expr, timeout=timeout, **kwargs\n )\n return [doc for doc, _ in res]\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n param: Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Perform a search on a query string and return results with score.\n For more information about the search parameters, take a look at the pymilvus\n documentation found here:\n https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md\n Args:\n query (str): The text being searched.\n k (int, optional): The amount of results ot return. Defaults to 4.\n param (dict): The search params for the specified index.\n Defaults to None.\n expr (str, optional): Filtering expression. Defaults to None.\n timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[float], List[Tuple[Document, any, any]]:\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n # Embed the query text.\n embedding = self.embedding_func.embed_query(query)\n res = self.similarity_search_with_score_by_vector(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-13", "text": "res = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, param=param, expr=expr, timeout=timeout, **kwargs\n )\n return res\n[docs] def similarity_search_with_score_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n param: Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Perform a search on a query string and return results with score.\n For more information about the search parameters, take a look at the pymilvus\n documentation found here:\n https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md\n Args:\n embedding (List[float]): The embedding vector being searched.\n k (int, optional): The amount of results ot return. Defaults to 4.\n param (dict): The search params for the specified index.\n Defaults to None.\n expr (str, optional): Filtering expression. Defaults to None.\n timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[Tuple[Document, float]]: Result doc and score.\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n if param is None:\n param = self.search_params\n # Determine result metadata fields.\n output_fields = self.fields[:]\n output_fields.remove(self._vector_field)\n # Perform the search.\n res = self.col.search(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-14", "text": "# Perform the search.\n res = self.col.search(\n data=[embedding],\n anns_field=self._vector_field,\n param=param,\n limit=k,\n expr=expr,\n output_fields=output_fields,\n timeout=timeout,\n **kwargs,\n )\n # Organize results.\n ret = []\n for result in res[0]:\n meta = {x: result.entity.get(x) for x in output_fields}\n doc = Document(page_content=meta.pop(self._text_field), metadata=meta)\n pair = (doc, result.score)\n ret.append(pair)\n return ret\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n param: Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a search and return results that are reordered by MMR.\n Args:\n query (str): The text being searched.\n k (int, optional): How many results to give. Defaults to 4.\n fetch_k (int, optional): Total results to select k from.\n Defaults to 20.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5\n param (dict, optional): The search params for the specified index.\n Defaults to None.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-15", "text": "Defaults to None.\n expr (str, optional): Filtering expression. Defaults to None.\n timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[Document]: Document results for search.\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n embedding = self.embedding_func.embed_query(query)\n return self.max_marginal_relevance_search_by_vector(\n embedding=embedding,\n k=k,\n fetch_k=fetch_k,\n lambda_mult=lambda_mult,\n param=param,\n expr=expr,\n timeout=timeout,\n **kwargs,\n )\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: list[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n param: Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a search and return results that are reordered by MMR.\n Args:\n embedding (str): The embedding vector being searched.\n k (int, optional): How many results to give. Defaults to 4.\n fetch_k (int, optional): Total results to select k from.\n Defaults to 20.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-16", "text": "to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5\n param (dict, optional): The search params for the specified index.\n Defaults to None.\n expr (str, optional): Filtering expression. Defaults to None.\n timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[Document]: Document results for search.\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n if param is None:\n param = self.search_params\n # Determine result metadata fields.\n output_fields = self.fields[:]\n output_fields.remove(self._vector_field)\n # Perform the search.\n res = self.col.search(\n data=[embedding],\n anns_field=self._vector_field,\n param=param,\n limit=fetch_k,\n expr=expr,\n output_fields=output_fields,\n timeout=timeout,\n **kwargs,\n )\n # Organize results.\n ids = []\n documents = []\n scores = []\n for result in res[0]:\n meta = {x: result.entity.get(x) for x in output_fields}\n doc = Document(page_content=meta.pop(self._text_field), metadata=meta)\n documents.append(doc)\n scores.append(result.score)\n ids.append(result.id)\n vectors = self.col.query(\n expr=f\"{self._primary_field} in {ids}\",\n output_fields=[self._primary_field, self._vector_field],\n timeout=timeout,\n )\n # Reorganize the results from query to match search order.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-17", "text": ")\n # Reorganize the results from query to match search order.\n vectors = {x[self._primary_field]: x[self._vector_field] for x in vectors}\n ordered_result_embeddings = [vectors[x] for x in ids]\n # Get the new order of results.\n new_ordering = maximal_marginal_relevance(\n np.array(embedding), ordered_result_embeddings, k=k, lambda_mult=lambda_mult\n )\n # Reorder the values and return.\n ret = []\n for x in new_ordering:\n # Function can return -1 index\n if x == -1:\n break\n else:\n ret.append(documents[x])\n return ret\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n collection_name: str = \"LangChainCollection\",\n connection_args: dict[str, Any] = DEFAULT_MILVUS_CONNECTION,\n consistency_level: str = \"Session\",\n index_params: Optional[dict] = None,\n search_params: Optional[dict] = None,\n drop_old: bool = False,\n **kwargs: Any,\n ) -> Milvus:\n \"\"\"Create a Milvus collection, indexes it with HNSW, and insert data.\n Args:\n texts (List[str]): Text data.\n embedding (Embeddings): Embedding function.\n metadatas (Optional[List[dict]]): Metadata for each text if it exists.\n Defaults to None.\n collection_name (str, optional): Collection name to use. Defaults to\n \"LangChainCollection\".", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "5fa5ef04557d-18", "text": "\"LangChainCollection\".\n connection_args (dict[str, Any], optional): Connection args to use. Defaults\n to DEFAULT_MILVUS_CONNECTION.\n consistency_level (str, optional): Which consistency level to use. Defaults\n to \"Session\".\n index_params (Optional[dict], optional): Which index_params to use. Defaults\n to None.\n search_params (Optional[dict], optional): Which search params to use.\n Defaults to None.\n drop_old (Optional[bool], optional): Whether to drop the collection with\n that name if it exists. Defaults to False.\n Returns:\n Milvus: Milvus Vector Store\n \"\"\"\n vector_db = cls(\n embedding_function=embedding,\n collection_name=collection_name,\n connection_args=connection_args,\n consistency_level=consistency_level,\n index_params=index_params,\n search_params=search_params,\n drop_old=drop_old,\n **kwargs,\n )\n vector_db.add_texts(texts=texts, metadatas=metadatas)\n return vector_db\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"}
+{"id": "77f234948b38-0", "text": "Source code for langchain.vectorstores.weaviate\n\"\"\"Wrapper around weaviate vector database.\"\"\"\nfrom __future__ import annotations\nimport datetime\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type\nfrom uuid import uuid4\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\ndef _default_schema(index_name: str) -> Dict:\n return {\n \"class\": index_name,\n \"properties\": [\n {\n \"name\": \"text\",\n \"dataType\": [\"text\"],\n }\n ],\n }\ndef _create_weaviate_client(**kwargs: Any) -> Any:\n client = kwargs.get(\"client\")\n if client is not None:\n return client\n weaviate_url = get_from_dict_or_env(kwargs, \"weaviate_url\", \"WEAVIATE_URL\")\n try:\n # the weaviate api key param should not be mandatory\n weaviate_api_key = get_from_dict_or_env(\n kwargs, \"weaviate_api_key\", \"WEAVIATE_API_KEY\", None\n )\n except ValueError:\n weaviate_api_key = None\n try:\n import weaviate\n except ImportError:\n raise ValueError(\n \"Could not import weaviate python package. \"\n \"Please install it with `pip instal weaviate-client`\"\n )\n auth = (\n weaviate.auth.AuthApiKey(api_key=weaviate_api_key)\n if weaviate_api_key is not None\n else None\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"}
+{"id": "77f234948b38-1", "text": "if weaviate_api_key is not None\n else None\n )\n client = weaviate.Client(weaviate_url, auth_client_secret=auth)\n return client\ndef _default_score_normalizer(val: float) -> float:\n return 1 - 1 / (1 + np.exp(val))\ndef _json_serializable(value: Any) -> Any:\n if isinstance(value, datetime.datetime):\n return value.isoformat()\n return value\n[docs]class Weaviate(VectorStore):\n \"\"\"Wrapper around Weaviate vector database.\n To use, you should have the ``weaviate-client`` python package installed.\n Example:\n .. code-block:: python\n import weaviate\n from langchain.vectorstores import Weaviate\n client = weaviate.Client(url=os.environ[\"WEAVIATE_URL\"], ...)\n weaviate = Weaviate(client, index_name, text_key)\n \"\"\"\n def __init__(\n self,\n client: Any,\n index_name: str,\n text_key: str,\n embedding: Optional[Embeddings] = None,\n attributes: Optional[List[str]] = None,\n relevance_score_fn: Optional[\n Callable[[float], float]\n ] = _default_score_normalizer,\n by_text: bool = True,\n ):\n \"\"\"Initialize with Weaviate client.\"\"\"\n try:\n import weaviate\n except ImportError:\n raise ValueError(\n \"Could not import weaviate python package. \"\n \"Please install it with `pip install weaviate-client`.\"\n )\n if not isinstance(client, weaviate.Client):\n raise ValueError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"}
+{"id": "77f234948b38-2", "text": ")\n if not isinstance(client, weaviate.Client):\n raise ValueError(\n f\"client should be an instance of weaviate.Client, got {type(client)}\"\n )\n self._client = client\n self._index_name = index_name\n self._embedding = embedding\n self._text_key = text_key\n self._query_attrs = [self._text_key]\n self._relevance_score_fn = relevance_score_fn\n self._by_text = by_text\n if attributes is not None:\n self._query_attrs.extend(attributes)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Upload texts with metadata (properties) to Weaviate.\"\"\"\n from weaviate.util import get_valid_uuid\n ids = []\n with self._client.batch as batch:\n for i, text in enumerate(texts):\n data_properties = {self._text_key: text}\n if metadatas is not None:\n for key, val in metadatas[i].items():\n data_properties[key] = _json_serializable(val)\n # If the UUID of one of the objects already exists\n # then the existing object will be replaced by the new object.\n _id = (\n kwargs[\"uuids\"][i] if \"uuids\" in kwargs else get_valid_uuid(uuid4())\n )\n if self._embedding is not None:\n vector = self._embedding.embed_documents([text])[0]\n else:\n vector = None\n batch.add_data_object(\n data_object=data_properties,\n class_name=self._index_name,\n uuid=_id,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"}
+{"id": "77f234948b38-3", "text": "class_name=self._index_name,\n uuid=_id,\n vector=vector,\n )\n ids.append(_id)\n return ids\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n if self._by_text:\n return self.similarity_search_by_text(query, k, **kwargs)\n else:\n if self._embedding is None:\n raise ValueError(\n \"_embedding cannot be None for similarity_search when \"\n \"_by_text=False\"\n )\n embedding = self._embedding.embed_query(query)\n return self.similarity_search_by_vector(embedding, k, **kwargs)\n[docs] def similarity_search_by_text(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n content: Dict[str, Any] = {\"concepts\": [query]}\n if kwargs.get(\"search_distance\"):\n content[\"certainty\"] = kwargs.get(\"search_distance\")\n query_obj = self._client.query.get(self._index_name, self._query_attrs)\n if kwargs.get(\"where_filter\"):", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"}
+{"id": "77f234948b38-4", "text": "if kwargs.get(\"where_filter\"):\n query_obj = query_obj.with_where(kwargs.get(\"where_filter\"))\n if kwargs.get(\"additional\"):\n query_obj = query_obj.with_additional(kwargs.get(\"additional\"))\n result = query_obj.with_near_text(content).with_limit(k).do()\n if \"errors\" in result:\n raise ValueError(f\"Error during query: {result['errors']}\")\n docs = []\n for res in result[\"data\"][\"Get\"][self._index_name]:\n text = res.pop(self._text_key)\n docs.append(Document(page_content=text, metadata=res))\n return docs\n[docs] def similarity_search_by_vector(\n self, embedding: List[float], k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Look up similar documents by embedding vector in Weaviate.\"\"\"\n vector = {\"vector\": embedding}\n query_obj = self._client.query.get(self._index_name, self._query_attrs)\n if kwargs.get(\"where_filter\"):\n query_obj = query_obj.with_where(kwargs.get(\"where_filter\"))\n if kwargs.get(\"additional\"):\n query_obj = query_obj.with_additional(kwargs.get(\"additional\"))\n result = query_obj.with_near_vector(vector).with_limit(k).do()\n if \"errors\" in result:\n raise ValueError(f\"Error during query: {result['errors']}\")\n docs = []\n for res in result[\"data\"][\"Get\"][self._index_name]:\n text = res.pop(self._text_key)\n docs.append(Document(page_content=text, metadata=res))\n return docs\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"}
+{"id": "77f234948b38-5", "text": "k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n if self._embedding is not None:\n embedding = self._embedding.embed_query(query)\n else:\n raise ValueError(\n \"max_marginal_relevance_search requires a suitable Embeddings object\"\n )\n return self.max_marginal_relevance_search_by_vector(\n embedding, k=k, fetch_k=fetch_k, lambda_mult=lambda_mult, **kwargs\n )\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"}
+{"id": "77f234948b38-6", "text": "among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n vector = {\"vector\": embedding}\n query_obj = self._client.query.get(self._index_name, self._query_attrs)\n if kwargs.get(\"where_filter\"):\n query_obj = query_obj.with_where(kwargs.get(\"where_filter\"))\n results = (\n query_obj.with_additional(\"vector\")\n .with_near_vector(vector)\n .with_limit(fetch_k)\n .do()\n )\n payload = results[\"data\"][\"Get\"][self._index_name]\n embeddings = [result[\"_additional\"][\"vector\"] for result in payload]\n mmr_selected = maximal_marginal_relevance(\n np.array(embedding), embeddings, k=k, lambda_mult=lambda_mult\n )\n docs = []\n for idx in mmr_selected:\n text = payload[idx].pop(self._text_key)\n payload[idx].pop(\"_additional\")\n meta = payload[idx]\n docs.append(Document(page_content=text, metadata=meta))\n return docs\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"\n Return list of documents most similar to the query", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"}
+{"id": "77f234948b38-7", "text": "\"\"\"\n Return list of documents most similar to the query\n text and cosine distance in float for each.\n Lower score represents more similarity.\n \"\"\"\n if self._embedding is None:\n raise ValueError(\n \"_embedding cannot be None for similarity_search_with_score\"\n )\n content: Dict[str, Any] = {\"concepts\": [query]}\n if kwargs.get(\"search_distance\"):\n content[\"certainty\"] = kwargs.get(\"search_distance\")\n query_obj = self._client.query.get(self._index_name, self._query_attrs)\n if not self._by_text:\n embedding = self._embedding.embed_query(query)\n vector = {\"vector\": embedding}\n result = (\n query_obj.with_near_vector(vector)\n .with_limit(k)\n .with_additional(\"vector\")\n .do()\n )\n else:\n result = (\n query_obj.with_near_text(content)\n .with_limit(k)\n .with_additional(\"vector\")\n .do()\n )\n if \"errors\" in result:\n raise ValueError(f\"Error during query: {result['errors']}\")\n docs_and_scores = []\n for res in result[\"data\"][\"Get\"][self._index_name]:\n text = res.pop(self._text_key)\n score = np.dot(\n res[\"_additional\"][\"vector\"], self._embedding.embed_query(query)\n )\n docs_and_scores.append((Document(page_content=text, metadata=res), score))\n return docs_and_scores\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"}
+{"id": "77f234948b38-8", "text": "**kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores, normalized on a scale from 0 to 1.\n 0 is dissimilar, 1 is most similar.\n \"\"\"\n if self._relevance_score_fn is None:\n raise ValueError(\n \"relevance_score_fn must be provided to\"\n \" Weaviate constructor to normalize scores\"\n )\n docs_and_scores = self.similarity_search_with_score(query, k=k, **kwargs)\n return [\n (doc, self._relevance_score_fn(score)) for doc, score in docs_and_scores\n ]\n[docs] @classmethod\n def from_texts(\n cls: Type[Weaviate],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> Weaviate:\n \"\"\"Construct Weaviate wrapper from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Creates a new index for the embeddings in the Weaviate instance.\n 3. Adds the documents to the newly created Weaviate index.\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain.vectorstores.weaviate import Weaviate\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n weaviate = Weaviate.from_texts(\n texts,\n embeddings,\n weaviate_url=\"http://localhost:8080\"\n )\n \"\"\"\n client = _create_weaviate_client(**kwargs)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"}
+{"id": "77f234948b38-9", "text": ")\n \"\"\"\n client = _create_weaviate_client(**kwargs)\n from weaviate.util import get_valid_uuid\n index_name = kwargs.get(\"index_name\", f\"LangChain_{uuid4().hex}\")\n embeddings = embedding.embed_documents(texts) if embedding else None\n text_key = \"text\"\n schema = _default_schema(index_name)\n attributes = list(metadatas[0].keys()) if metadatas else None\n # check whether the index already exists\n if not client.schema.contains(schema):\n client.schema.create_class(schema)\n with client.batch as batch:\n for i, text in enumerate(texts):\n data_properties = {\n text_key: text,\n }\n if metadatas is not None:\n for key in metadatas[i].keys():\n data_properties[key] = metadatas[i][key]\n # If the UUID of one of the objects already exists\n # then the existing objectwill be replaced by the new object.\n if \"uuids\" in kwargs:\n _id = kwargs[\"uuids\"][i]\n else:\n _id = get_valid_uuid(uuid4())\n # if an embedding strategy is not provided, we let\n # weaviate create the embedding. Note that this will only\n # work if weaviate has been installed with a vectorizer module\n # like text2vec-contextionary for example\n params = {\n \"uuid\": _id,\n \"data_object\": data_properties,\n \"class_name\": index_name,\n }\n if embeddings is not None:\n params[\"vector\"] = embeddings[i]\n batch.add_data_object(**params)\n batch.flush()", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"}
+{"id": "77f234948b38-10", "text": "batch.add_data_object(**params)\n batch.flush()\n relevance_score_fn = kwargs.get(\"relevance_score_fn\")\n by_text: bool = kwargs.get(\"by_text\", False)\n return cls(\n client,\n index_name,\n text_key,\n embedding=embedding,\n attributes=attributes,\n relevance_score_fn=relevance_score_fn,\n by_text=by_text,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"}
+{"id": "7dedb9f9abb6-0", "text": "Source code for langchain.vectorstores.zilliz\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, List, Optional\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.milvus import Milvus\nlogger = logging.getLogger(__name__)\n[docs]class Zilliz(Milvus):\n def _create_index(self) -> None:\n \"\"\"Create a index on the collection\"\"\"\n from pymilvus import Collection, MilvusException\n if isinstance(self.col, Collection) and self._get_index() is None:\n try:\n # If no index params, use a default AutoIndex based one\n if self.index_params is None:\n self.index_params = {\n \"metric_type\": \"L2\",\n \"index_type\": \"AUTOINDEX\",\n \"params\": {},\n }\n try:\n self.col.create_index(\n self._vector_field,\n index_params=self.index_params,\n using=self.alias,\n )\n # If default did not work, most likely Milvus self-hosted\n except MilvusException:\n # Use HNSW based index\n self.index_params = {\n \"metric_type\": \"L2\",\n \"index_type\": \"HNSW\",\n \"params\": {\"M\": 8, \"efConstruction\": 64},\n }\n self.col.create_index(\n self._vector_field,\n index_params=self.index_params,\n using=self.alias,\n )\n logger.debug(\n \"Successfully created an index on collection: %s\",\n self.collection_name,\n )\n except MilvusException as e:\n logger.error(\n \"Failed to create an index on collection: %s\", self.collection_name", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/zilliz.html"}
+{"id": "7dedb9f9abb6-1", "text": "\"Failed to create an index on collection: %s\", self.collection_name\n )\n raise e\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n collection_name: str = \"LangChainCollection\",\n connection_args: dict[str, Any] = {},\n consistency_level: str = \"Session\",\n index_params: Optional[dict] = None,\n search_params: Optional[dict] = None,\n drop_old: bool = False,\n **kwargs: Any,\n ) -> Zilliz:\n \"\"\"Create a Zilliz collection, indexes it with HNSW, and insert data.\n Args:\n texts (List[str]): Text data.\n embedding (Embeddings): Embedding function.\n metadatas (Optional[List[dict]]): Metadata for each text if it exists.\n Defaults to None.\n collection_name (str, optional): Collection name to use. Defaults to\n \"LangChainCollection\".\n connection_args (dict[str, Any], optional): Connection args to use. Defaults\n to DEFAULT_MILVUS_CONNECTION.\n consistency_level (str, optional): Which consistency level to use. Defaults\n to \"Session\".\n index_params (Optional[dict], optional): Which index_params to use.\n Defaults to None.\n search_params (Optional[dict], optional): Which search params to use.\n Defaults to None.\n drop_old (Optional[bool], optional): Whether to drop the collection with\n that name if it exists. Defaults to False.\n Returns:\n Zilliz: Zilliz Vector Store\n \"\"\"\n vector_db = cls(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/zilliz.html"}
+{"id": "7dedb9f9abb6-2", "text": "\"\"\"\n vector_db = cls(\n embedding_function=embedding,\n collection_name=collection_name,\n connection_args=connection_args,\n consistency_level=consistency_level,\n index_params=index_params,\n search_params=search_params,\n drop_old=drop_old,\n **kwargs,\n )\n vector_db.add_texts(texts=texts, metadatas=metadatas)\n return vector_db\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/zilliz.html"}
+{"id": "9ca766fcc14e-0", "text": "Source code for langchain.vectorstores.tigris\nfrom __future__ import annotations\nimport itertools\nfrom typing import TYPE_CHECKING, Any, Iterable, List, Optional, Tuple\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import Document\nfrom langchain.vectorstores import VectorStore\nif TYPE_CHECKING:\n from tigrisdb import TigrisClient\n from tigrisdb import VectorStore as TigrisVectorStore\n from tigrisdb.types.filters import Filter as TigrisFilter\n from tigrisdb.types.vector import Document as TigrisDocument\n[docs]class Tigris(VectorStore):\n def __init__(self, client: TigrisClient, embeddings: Embeddings, index_name: str):\n \"\"\"Initialize Tigris vector store\"\"\"\n try:\n import tigrisdb # noqa: F401\n except ImportError:\n raise ValueError(\n \"Could not import tigrisdb python package. \"\n \"Please install it with `pip install tigrisdb`\"\n )\n self._embed_fn = embeddings\n self._vector_store = TigrisVectorStore(client.get_search(), index_name)\n @property\n def search_index(self) -> TigrisVectorStore:\n return self._vector_store\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/tigris.html"}
+{"id": "9ca766fcc14e-1", "text": "metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of ids for documents.\n Ids will be autogenerated if not provided.\n kwargs: vectorstore specific parameters\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n docs = self._prep_docs(texts, metadatas, ids)\n result = self.search_index.add_documents(docs)\n return [r.id for r in result]\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n filter: Optional[TigrisFilter] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\"\"\"\n docs_with_scores = self.similarity_search_with_score(query, k, filter)\n return [doc for doc, _ in docs_with_scores]\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n filter: Optional[TigrisFilter] = None,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Run similarity search with Chroma with distance.\n Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.\n filter (Optional[TigrisFilter]): Filter by metadata. Defaults to None.\n Returns:\n List[Tuple[Document, float]]: List of documents most similar to the query\n text with distance in float.\n \"\"\"\n vector = self._embed_fn.embed_query(query)\n result = self.search_index.similarity_search(\n vector=vector, k=k, filter_by=filter\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/tigris.html"}
+{"id": "9ca766fcc14e-2", "text": "vector=vector, k=k, filter_by=filter\n )\n docs: List[Tuple[Document, float]] = []\n for r in result:\n docs.append(\n (\n Document(\n page_content=r.doc[\"text\"], metadata=r.doc.get(\"metadata\")\n ),\n r.score,\n )\n )\n return docs\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n client: Optional[TigrisClient] = None,\n index_name: Optional[str] = None,\n **kwargs: Any,\n ) -> Tigris:\n \"\"\"Return VectorStore initialized from texts and embeddings.\"\"\"\n if not index_name:\n raise ValueError(\"`index_name` is required\")\n if not client:\n client = TigrisClient()\n store = cls(client, embedding, index_name)\n store.add_texts(texts=texts, metadatas=metadatas, ids=ids)\n return store\n def _prep_docs(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]],\n ids: Optional[List[str]],\n ) -> List[TigrisDocument]:\n embeddings: List[List[float]] = self._embed_fn.embed_documents(list(texts))\n docs: List[TigrisDocument] = []\n for t, m, e, _id in itertools.zip_longest(\n texts, metadatas or [], embeddings or [], ids or []\n ):\n doc: TigrisDocument = {\n \"text\": t,\n \"embeddings\": e or [],", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/tigris.html"}
+{"id": "9ca766fcc14e-3", "text": "\"text\": t,\n \"embeddings\": e or [],\n \"metadata\": m or {},\n }\n if _id:\n doc[\"id\"] = _id\n docs.append(doc)\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/tigris.html"}
+{"id": "5ec5f29c6d32-0", "text": "Source code for langchain.vectorstores.chroma\n\"\"\"Wrapper around ChromaDB embeddings platform.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport uuid\nfrom typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Tuple, Type\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import xor_args\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nif TYPE_CHECKING:\n import chromadb\n import chromadb.config\nlogger = logging.getLogger()\nDEFAULT_K = 4 # Number of Documents to return.\ndef _results_to_docs(results: Any) -> List[Document]:\n return [doc for doc, _ in _results_to_docs_and_scores(results)]\ndef _results_to_docs_and_scores(results: Any) -> List[Tuple[Document, float]]:\n return [\n # TODO: Chroma can do batch querying,\n # we shouldn't hard code to the 1st result\n (Document(page_content=result[0], metadata=result[1] or {}), result[2])\n for result in zip(\n results[\"documents\"][0],\n results[\"metadatas\"][0],\n results[\"distances\"][0],\n )\n ]\n[docs]class Chroma(VectorStore):\n \"\"\"Wrapper around ChromaDB embeddings platform.\n To use, you should have the ``chromadb`` python package installed.\n Example:\n .. code-block:: python\n from langchain.vectorstores import Chroma\n from langchain.embeddings.openai import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n vectorstore = Chroma(\"langchain_store\", embeddings)\n \"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"}
+{"id": "5ec5f29c6d32-1", "text": "vectorstore = Chroma(\"langchain_store\", embeddings)\n \"\"\"\n _LANGCHAIN_DEFAULT_COLLECTION_NAME = \"langchain\"\n def __init__(\n self,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n embedding_function: Optional[Embeddings] = None,\n persist_directory: Optional[str] = None,\n client_settings: Optional[chromadb.config.Settings] = None,\n collection_metadata: Optional[Dict] = None,\n client: Optional[chromadb.Client] = None,\n ) -> None:\n \"\"\"Initialize with Chroma client.\"\"\"\n try:\n import chromadb\n import chromadb.config\n except ImportError:\n raise ValueError(\n \"Could not import chromadb python package. \"\n \"Please install it with `pip install chromadb`.\"\n )\n if client is not None:\n self._client = client\n else:\n if client_settings:\n self._client_settings = client_settings\n else:\n self._client_settings = chromadb.config.Settings()\n if persist_directory is not None:\n self._client_settings = chromadb.config.Settings(\n chroma_db_impl=\"duckdb+parquet\",\n persist_directory=persist_directory,\n )\n self._client = chromadb.Client(self._client_settings)\n self._embedding_function = embedding_function\n self._persist_directory = persist_directory\n self._collection = self._client.get_or_create_collection(\n name=collection_name,\n embedding_function=self._embedding_function.embed_documents\n if self._embedding_function is not None\n else None,\n metadata=collection_metadata,\n )\n @xor_args((\"query_texts\", \"query_embeddings\"))\n def __query_collection(\n self,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"}
+{"id": "5ec5f29c6d32-2", "text": "def __query_collection(\n self,\n query_texts: Optional[List[str]] = None,\n query_embeddings: Optional[List[List[float]]] = None,\n n_results: int = 4,\n where: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Query the chroma collection.\"\"\"\n try:\n import chromadb\n except ImportError:\n raise ValueError(\n \"Could not import chromadb python package. \"\n \"Please install it with `pip install chromadb`.\"\n )\n for i in range(n_results, 0, -1):\n try:\n return self._collection.query(\n query_texts=query_texts,\n query_embeddings=query_embeddings,\n n_results=i,\n where=where,\n **kwargs,\n )\n except chromadb.errors.NotEnoughElementsException:\n logger.error(\n f\"Chroma collection {self._collection.name} \"\n f\"contains fewer than {i} elements.\"\n )\n raise chromadb.errors.NotEnoughElementsException(\n f\"No documents found for Chroma collection {self._collection.name}\"\n )\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts (Iterable[str]): Texts to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n ids (Optional[List[str]], optional): Optional list of IDs.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"}
+{"id": "5ec5f29c6d32-3", "text": "ids (Optional[List[str]], optional): Optional list of IDs.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n # TODO: Handle the case where the user doesn't provide ids on the Collection\n if ids is None:\n ids = [str(uuid.uuid1()) for _ in texts]\n embeddings = None\n if self._embedding_function is not None:\n embeddings = self._embedding_function.embed_documents(list(texts))\n self._collection.add(\n metadatas=metadatas, embeddings=embeddings, documents=texts, ids=ids\n )\n return ids\n[docs] def similarity_search(\n self,\n query: str,\n k: int = DEFAULT_K,\n filter: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Run similarity search with Chroma.\n Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List[Document]: List of documents most similar to the query text.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = DEFAULT_K,\n filter: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"}
+{"id": "5ec5f29c6d32-4", "text": "\"\"\"Return docs most similar to embedding vector.\n Args:\n embedding (str): Embedding to look up documents similar to.\n k (int): Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List of Documents most similar to the query vector.\n \"\"\"\n results = self.__query_collection(\n query_embeddings=embedding, n_results=k, where=filter\n )\n return _results_to_docs(results)\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = DEFAULT_K,\n filter: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Run similarity search with Chroma with distance.\n Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List[Tuple[Document, float]]: List of documents most similar to\n the query text and cosine distance in float for each.\n Lower score represents more similarity.\n \"\"\"\n if self._embedding_function is None:\n results = self.__query_collection(\n query_texts=[query], n_results=k, where=filter\n )\n else:\n query_embedding = self._embedding_function.embed_query(query)\n results = self.__query_collection(\n query_embeddings=[query_embedding], n_results=k, where=filter\n )\n return _results_to_docs_and_scores(results)\n def _similarity_search_with_relevance_scores(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"}
+{"id": "5ec5f29c6d32-5", "text": "def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n return self.similarity_search_with_score(query, k)\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = DEFAULT_K,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n filter: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n results = self.__query_collection(\n query_embeddings=embedding,\n n_results=fetch_k,\n where=filter,\n include=[\"metadatas\", \"documents\", \"distances\", \"embeddings\"],\n )\n mmr_selected = maximal_marginal_relevance(\n np.array(embedding, dtype=np.float32),", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"}
+{"id": "5ec5f29c6d32-6", "text": "np.array(embedding, dtype=np.float32),\n results[\"embeddings\"][0],\n k=k,\n lambda_mult=lambda_mult,\n )\n candidates = _results_to_docs(results)\n selected_results = [r for i, r in enumerate(candidates) if i in mmr_selected]\n return selected_results\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = DEFAULT_K,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n filter: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n if self._embedding_function is None:\n raise ValueError(\n \"For MMR search, you must specify an embedding function on\" \"creation.\"\n )\n embedding = self._embedding_function.embed_query(query)\n docs = self.max_marginal_relevance_search_by_vector(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"}
+{"id": "5ec5f29c6d32-7", "text": "docs = self.max_marginal_relevance_search_by_vector(\n embedding, k, fetch_k, lambda_mul=lambda_mult, filter=filter\n )\n return docs\n[docs] def delete_collection(self) -> None:\n \"\"\"Delete the collection.\"\"\"\n self._client.delete_collection(self._collection.name)\n[docs] def get(self, include: Optional[List[str]] = None) -> Dict[str, Any]:\n \"\"\"Gets the collection.\n Args:\n include (Optional[List[str]]): List of fields to include from db.\n Defaults to None.\n \"\"\"\n if include is not None:\n return self._collection.get(include=include)\n else:\n return self._collection.get()\n[docs] def persist(self) -> None:\n \"\"\"Persist the collection.\n This can be used to explicitly persist the data to disk.\n It will also be called automatically when the object is destroyed.\n \"\"\"\n if self._persist_directory is None:\n raise ValueError(\n \"You must specify a persist_directory on\"\n \"creation to persist the collection.\"\n )\n self._client.persist()\n[docs] def update_document(self, document_id: str, document: Document) -> None:\n \"\"\"Update a document in the collection.\n Args:\n document_id (str): ID of the document to update.\n document (Document): Document to update.\n \"\"\"\n text = document.page_content\n metadata = document.metadata\n if self._embedding_function is None:\n raise ValueError(\n \"For update, you must specify an embedding function on creation.\"\n )\n embeddings = self._embedding_function.embed_documents([text])\n self._collection.update(\n ids=[document_id],", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"}
+{"id": "5ec5f29c6d32-8", "text": "self._collection.update(\n ids=[document_id],\n embeddings=embeddings,\n documents=[text],\n metadatas=[metadata],\n )\n[docs] @classmethod\n def from_texts(\n cls: Type[Chroma],\n texts: List[str],\n embedding: Optional[Embeddings] = None,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n persist_directory: Optional[str] = None,\n client_settings: Optional[chromadb.config.Settings] = None,\n client: Optional[chromadb.Client] = None,\n **kwargs: Any,\n ) -> Chroma:\n \"\"\"Create a Chroma vectorstore from a raw documents.\n If a persist_directory is specified, the collection will be persisted there.\n Otherwise, the data will be ephemeral in-memory.\n Args:\n texts (List[str]): List of texts to add to the collection.\n collection_name (str): Name of the collection to create.\n persist_directory (Optional[str]): Directory to persist the collection.\n embedding (Optional[Embeddings]): Embedding function. Defaults to None.\n metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.\n ids (Optional[List[str]]): List of document IDs. Defaults to None.\n client_settings (Optional[chromadb.config.Settings]): Chroma client settings\n Returns:\n Chroma: Chroma vectorstore.\n \"\"\"\n chroma_collection = cls(\n collection_name=collection_name,\n embedding_function=embedding,\n persist_directory=persist_directory,\n client_settings=client_settings,\n client=client,\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"}
+{"id": "5ec5f29c6d32-9", "text": "client_settings=client_settings,\n client=client,\n )\n chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)\n return chroma_collection\n[docs] @classmethod\n def from_documents(\n cls: Type[Chroma],\n documents: List[Document],\n embedding: Optional[Embeddings] = None,\n ids: Optional[List[str]] = None,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n persist_directory: Optional[str] = None,\n client_settings: Optional[chromadb.config.Settings] = None,\n client: Optional[chromadb.Client] = None, # Add this line\n **kwargs: Any,\n ) -> Chroma:\n \"\"\"Create a Chroma vectorstore from a list of documents.\n If a persist_directory is specified, the collection will be persisted there.\n Otherwise, the data will be ephemeral in-memory.\n Args:\n collection_name (str): Name of the collection to create.\n persist_directory (Optional[str]): Directory to persist the collection.\n ids (Optional[List[str]]): List of document IDs. Defaults to None.\n documents (List[Document]): List of documents to add to the vectorstore.\n embedding (Optional[Embeddings]): Embedding function. Defaults to None.\n client_settings (Optional[chromadb.config.Settings]): Chroma client settings\n Returns:\n Chroma: Chroma vectorstore.\n \"\"\"\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n return cls.from_texts(\n texts=texts,\n embedding=embedding,\n metadatas=metadatas,\n ids=ids,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"}
+{"id": "5ec5f29c6d32-10", "text": "metadatas=metadatas,\n ids=ids,\n collection_name=collection_name,\n persist_directory=persist_directory,\n client_settings=client_settings,\n client=client,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"}
+{"id": "7423a5103981-0", "text": "Source code for langchain.vectorstores.tair\n\"\"\"Wrapper around Tair Vector.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nimport uuid\nfrom typing import Any, Iterable, List, Optional, Type\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger(__name__)\ndef _uuid_key() -> str:\n return uuid.uuid4().hex\n[docs]class Tair(VectorStore):\n def __init__(\n self,\n embedding_function: Embeddings,\n url: str,\n index_name: str,\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n search_params: Optional[dict] = None,\n **kwargs: Any,\n ):\n self.embedding_function = embedding_function\n self.index_name = index_name\n try:\n from tair import Tair as TairClient\n except ImportError:\n raise ValueError(\n \"Could not import tair python package. \"\n \"Please install it with `pip install tair`.\"\n )\n try:\n # connect to tair from url\n client = TairClient.from_url(url, **kwargs)\n except ValueError as e:\n raise ValueError(f\"Tair failed to connect: {e}\")\n self.client = client\n self.content_key = content_key\n self.metadata_key = metadata_key\n self.search_params = search_params\n[docs] def create_index_if_not_exist(\n self,\n dim: int,\n distance_type: str,\n index_type: str,\n data_type: str,\n **kwargs: Any,\n ) -> bool:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"}
+{"id": "7423a5103981-1", "text": "data_type: str,\n **kwargs: Any,\n ) -> bool:\n index = self.client.tvs_get_index(self.index_name)\n if index is not None:\n logger.info(\"Index already exists\")\n return False\n self.client.tvs_create_index(\n self.index_name,\n dim,\n distance_type,\n index_type,\n data_type,\n **kwargs,\n )\n return True\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Add texts data to an existing index.\"\"\"\n ids = []\n keys = kwargs.get(\"keys\", None)\n # Write data to tair\n pipeline = self.client.pipeline(transaction=False)\n embeddings = self.embedding_function.embed_documents(list(texts))\n for i, text in enumerate(texts):\n # Use provided key otherwise use default key\n key = keys[i] if keys else _uuid_key()\n metadata = metadatas[i] if metadatas else {}\n pipeline.tvs_hset(\n self.index_name,\n key,\n embeddings[i],\n False,\n **{\n self.content_key: text,\n self.metadata_key: json.dumps(metadata),\n },\n )\n ids.append(key)\n pipeline.execute()\n return ids\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"\n Returns the most similar indexed documents to the query text.\n Args:\n query (str): The query text for which to find similar documents.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"}
+{"id": "7423a5103981-2", "text": "Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of documents to return. Default is 4.\n Returns:\n List[Document]: A list of documents that are most similar to the query text.\n \"\"\"\n # Creates embedding vector from user query\n embedding = self.embedding_function.embed_query(query)\n keys_and_scores = self.client.tvs_knnsearch(\n self.index_name, k, embedding, False, None, **kwargs\n )\n pipeline = self.client.pipeline(transaction=False)\n for key, _ in keys_and_scores:\n pipeline.tvs_hmget(\n self.index_name, key, self.metadata_key, self.content_key\n )\n docs = pipeline.execute()\n return [\n Document(\n page_content=d[1],\n metadata=json.loads(d[0]),\n )\n for d in docs\n ]\n[docs] @classmethod\n def from_texts(\n cls: Type[Tair],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n index_name: str = \"langchain\",\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n **kwargs: Any,\n ) -> Tair:\n try:\n from tair import tairvector\n except ImportError:\n raise ValueError(\n \"Could not import tair python package. \"\n \"Please install it with `pip install tair`.\"\n )\n url = get_from_dict_or_env(kwargs, \"tair_url\", \"TAIR_URL\")\n if \"tair_url\" in kwargs:\n kwargs.pop(\"tair_url\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"}
+{"id": "7423a5103981-3", "text": "if \"tair_url\" in kwargs:\n kwargs.pop(\"tair_url\")\n distance_type = tairvector.DistanceMetric.InnerProduct\n if \"distance_type\" in kwargs:\n distance_type = kwargs.pop(\"distance_typ\")\n index_type = tairvector.IndexType.HNSW\n if \"index_type\" in kwargs:\n index_type = kwargs.pop(\"index_type\")\n data_type = tairvector.DataType.Float32\n if \"data_type\" in kwargs:\n data_type = kwargs.pop(\"data_type\")\n index_params = {}\n if \"index_params\" in kwargs:\n index_params = kwargs.pop(\"index_params\")\n search_params = {}\n if \"search_params\" in kwargs:\n search_params = kwargs.pop(\"search_params\")\n keys = None\n if \"keys\" in kwargs:\n keys = kwargs.pop(\"keys\")\n try:\n tair_vector_store = cls(\n embedding,\n url,\n index_name,\n content_key=content_key,\n metadata_key=metadata_key,\n search_params=search_params,\n **kwargs,\n )\n except ValueError as e:\n raise ValueError(f\"tair failed to connect: {e}\")\n # Create embeddings for documents\n embeddings = embedding.embed_documents(texts)\n tair_vector_store.create_index_if_not_exist(\n len(embeddings[0]),\n distance_type,\n index_type,\n data_type,\n **index_params,\n )\n tair_vector_store.add_texts(texts, metadatas, keys=keys)\n return tair_vector_store\n[docs] @classmethod\n def from_documents(\n cls,\n documents: List[Document],\n embedding: Embeddings,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"}
+{"id": "7423a5103981-4", "text": "cls,\n documents: List[Document],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n index_name: str = \"langchain\",\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n **kwargs: Any,\n ) -> Tair:\n texts = [d.page_content for d in documents]\n metadatas = [d.metadata for d in documents]\n return cls.from_texts(\n texts, embedding, metadatas, index_name, content_key, metadata_key, **kwargs\n )\n[docs] @staticmethod\n def drop_index(\n index_name: str = \"langchain\",\n **kwargs: Any,\n ) -> bool:\n \"\"\"\n Drop an existing index.\n Args:\n index_name (str): Name of the index to drop.\n Returns:\n bool: True if the index is dropped successfully.\n \"\"\"\n try:\n from tair import Tair as TairClient\n except ImportError:\n raise ValueError(\n \"Could not import tair python package. \"\n \"Please install it with `pip install tair`.\"\n )\n url = get_from_dict_or_env(kwargs, \"tair_url\", \"TAIR_URL\")\n try:\n if \"tair_url\" in kwargs:\n kwargs.pop(\"tair_url\")\n client = TairClient.from_url(url=url, **kwargs)\n except ValueError as e:\n raise ValueError(f\"Tair connection error: {e}\")\n # delete index\n ret = client.tvs_del_index(index_name)\n if ret == 0:\n # index not exist\n logger.info(\"Index does not exist\")\n return False", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"}
+{"id": "7423a5103981-5", "text": "# index not exist\n logger.info(\"Index does not exist\")\n return False\n return True\n[docs] @classmethod\n def from_existing_index(\n cls,\n embedding: Embeddings,\n index_name: str = \"langchain\",\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n **kwargs: Any,\n ) -> Tair:\n \"\"\"Connect to an existing Tair index.\"\"\"\n url = get_from_dict_or_env(kwargs, \"tair_url\", \"TAIR_URL\")\n search_params = {}\n if \"search_params\" in kwargs:\n search_params = kwargs.pop(\"search_params\")\n return cls(\n embedding,\n url,\n index_name,\n content_key=content_key,\n metadata_key=metadata_key,\n search_params=search_params,\n **kwargs,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"}
+{"id": "ff7fad9ef071-0", "text": "Source code for langchain.vectorstores.elastic_vector_search\n\"\"\"Wrapper around Elasticsearch vector database.\"\"\"\nfrom __future__ import annotations\nimport uuid\nfrom abc import ABC\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Dict,\n Iterable,\n List,\n Mapping,\n Optional,\n Tuple,\n Union,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_env\nfrom langchain.vectorstores.base import VectorStore\nif TYPE_CHECKING:\n from elasticsearch import Elasticsearch\ndef _default_text_mapping(dim: int) -> Dict:\n return {\n \"properties\": {\n \"text\": {\"type\": \"text\"},\n \"vector\": {\"type\": \"dense_vector\", \"dims\": dim},\n }\n }\ndef _default_script_query(query_vector: List[float], filter: Optional[dict]) -> Dict:\n if filter:\n ((key, value),) = filter.items()\n filter = {\"match\": {f\"metadata.{key}.keyword\": f\"{value}\"}}\n else:\n filter = {\"match_all\": {}}\n return {\n \"script_score\": {\n \"query\": filter,\n \"script\": {\n \"source\": \"cosineSimilarity(params.query_vector, 'vector') + 1.0\",\n \"params\": {\"query_vector\": query_vector},\n },\n }\n }\n# ElasticVectorSearch is a concrete implementation of the abstract base class\n# VectorStore, which defines a common interface for all vector database\n# implementations. By inheriting from the ABC class, ElasticVectorSearch can be\n# defined as an abstract base class itself, allowing the creation of subclasses with", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"}
+{"id": "ff7fad9ef071-1", "text": "# defined as an abstract base class itself, allowing the creation of subclasses with\n# their own specific implementations. If you plan to subclass ElasticVectorSearch,\n# you can inherit from it and define your own implementation of the necessary methods\n# and attributes.\n[docs]class ElasticVectorSearch(VectorStore, ABC):\n \"\"\"Wrapper around Elasticsearch as a vector database.\n To connect to an Elasticsearch instance that does not require\n login credentials, pass the Elasticsearch URL and index name along with the\n embedding object to the constructor.\n Example:\n .. code-block:: python\n from langchain import ElasticVectorSearch\n from langchain.embeddings import OpenAIEmbeddings\n embedding = OpenAIEmbeddings()\n elastic_vector_search = ElasticVectorSearch(\n elasticsearch_url=\"http://localhost:9200\",\n index_name=\"test_index\",\n embedding=embedding\n )\n To connect to an Elasticsearch instance that requires login credentials,\n including Elastic Cloud, use the Elasticsearch URL format\n https://username:password@es_host:9243. For example, to connect to Elastic\n Cloud, create the Elasticsearch URL with the required authentication details and\n pass it to the ElasticVectorSearch constructor as the named parameter\n elasticsearch_url.\n You can obtain your Elastic Cloud URL and login credentials by logging in to the\n Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and\n navigating to the \"Deployments\" page.\n To obtain your Elastic Cloud password for the default \"elastic\" user:\n 1. Log in to the Elastic Cloud console at https://cloud.elastic.co\n 2. Go to \"Security\" > \"Users\"\n 3. Locate the \"elastic\" user and click \"Edit\"\n 4. Click \"Reset password\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"}
+{"id": "ff7fad9ef071-2", "text": "4. Click \"Reset password\"\n 5. Follow the prompts to reset the password\n The format for Elastic Cloud URLs is\n https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.\n Example:\n .. code-block:: python\n from langchain import ElasticVectorSearch\n from langchain.embeddings import OpenAIEmbeddings\n embedding = OpenAIEmbeddings()\n elastic_host = \"cluster_id.region_id.gcp.cloud.es.io\"\n elasticsearch_url = f\"https://username:password@{elastic_host}:9243\"\n elastic_vector_search = ElasticVectorSearch(\n elasticsearch_url=elasticsearch_url,\n index_name=\"test_index\",\n embedding=embedding\n )\n Args:\n elasticsearch_url (str): The URL for the Elasticsearch instance.\n index_name (str): The name of the Elasticsearch index for the embeddings.\n embedding (Embeddings): An object that provides the ability to embed text.\n It should be an instance of a class that subclasses the Embeddings\n abstract base class, such as OpenAIEmbeddings()\n Raises:\n ValueError: If the elasticsearch python package is not installed.\n \"\"\"\n def __init__(\n self,\n elasticsearch_url: str,\n index_name: str,\n embedding: Embeddings,\n *,\n ssl_verify: Optional[Dict[str, Any]] = None,\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n try:\n import elasticsearch\n except ImportError:\n raise ImportError(\n \"Could not import elasticsearch python package. \"\n \"Please install it with `pip install elasticsearch`.\"\n )\n self.embedding = embedding\n self.index_name = index_name\n _ssl_verify = ssl_verify or {}", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"}
+{"id": "ff7fad9ef071-3", "text": "self.index_name = index_name\n _ssl_verify = ssl_verify or {}\n try:\n self.client = elasticsearch.Elasticsearch(elasticsearch_url, **_ssl_verify)\n except ValueError as e:\n raise ValueError(\n f\"Your elasticsearch client string is mis-formatted. Got error: {e} \"\n )\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n refresh_indices: bool = True,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n refresh_indices: bool to refresh ElasticSearch indices\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n try:\n from elasticsearch.exceptions import NotFoundError\n from elasticsearch.helpers import bulk\n except ImportError:\n raise ImportError(\n \"Could not import elasticsearch python package. \"\n \"Please install it with `pip install elasticsearch`.\"\n )\n requests = []\n ids = []\n embeddings = self.embedding.embed_documents(list(texts))\n dim = len(embeddings[0])\n mapping = _default_text_mapping(dim)\n # check to see if the index already exists\n try:\n self.client.indices.get(index=self.index_name)\n except NotFoundError:\n # TODO would be nice to create index before embedding,\n # just to save expensive steps for last\n self.create_index(self.client, self.index_name, mapping)\n for i, text in enumerate(texts):", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"}
+{"id": "ff7fad9ef071-4", "text": "for i, text in enumerate(texts):\n metadata = metadatas[i] if metadatas else {}\n _id = str(uuid.uuid4())\n request = {\n \"_op_type\": \"index\",\n \"_index\": self.index_name,\n \"vector\": embeddings[i],\n \"text\": text,\n \"metadata\": metadata,\n \"_id\": _id,\n }\n ids.append(_id)\n requests.append(request)\n bulk(self.client, requests)\n if refresh_indices:\n self.client.indices.refresh(index=self.index_name)\n return ids\n[docs] def similarity_search(\n self, query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)\n documents = [d[0] for d in docs_and_scores]\n return documents\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n embedding = self.embedding.embed_query(query)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"}
+{"id": "ff7fad9ef071-5", "text": "\"\"\"\n embedding = self.embedding.embed_query(query)\n script_query = _default_script_query(embedding, filter)\n response = self.client_search(\n self.client, self.index_name, script_query, size=k\n )\n hits = [hit for hit in response[\"hits\"][\"hits\"]]\n docs_and_scores = [\n (\n Document(\n page_content=hit[\"_source\"][\"text\"],\n metadata=hit[\"_source\"][\"metadata\"],\n ),\n hit[\"_score\"],\n )\n for hit in hits\n ]\n return docs_and_scores\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n elasticsearch_url: Optional[str] = None,\n index_name: Optional[str] = None,\n refresh_indices: bool = True,\n **kwargs: Any,\n ) -> ElasticVectorSearch:\n \"\"\"Construct ElasticVectorSearch wrapper from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Creates a new index for the embeddings in the Elasticsearch instance.\n 3. Adds the documents to the newly created Elasticsearch index.\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain import ElasticVectorSearch\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n elastic_vector_search = ElasticVectorSearch.from_texts(\n texts,\n embeddings,\n elasticsearch_url=\"http://localhost:9200\"\n )\n \"\"\"\n elasticsearch_url = elasticsearch_url or get_from_env(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"}
+{"id": "ff7fad9ef071-6", "text": ")\n \"\"\"\n elasticsearch_url = elasticsearch_url or get_from_env(\n \"elasticsearch_url\", \"ELASTICSEARCH_URL\"\n )\n index_name = index_name or uuid.uuid4().hex\n vectorsearch = cls(elasticsearch_url, index_name, embedding, **kwargs)\n vectorsearch.add_texts(\n texts, metadatas=metadatas, refresh_indices=refresh_indices\n )\n return vectorsearch\n[docs] def create_index(self, client: Any, index_name: str, mapping: Dict) -> None:\n version_num = client.info()[\"version\"][\"number\"][0]\n version_num = int(version_num)\n if version_num >= 8:\n client.indices.create(index=index_name, mappings=mapping)\n else:\n client.indices.create(index=index_name, body={\"mappings\": mapping})\n[docs] def client_search(\n self, client: Any, index_name: str, script_query: Dict, size: int\n ) -> Any:\n version_num = client.info()[\"version\"][\"number\"][0]\n version_num = int(version_num)\n if version_num >= 8:\n response = client.search(index=index_name, query=script_query, size=size)\n else:\n response = client.search(\n index=index_name, body={\"query\": script_query, \"size\": size}\n )\n return response\nclass ElasticKnnSearch(ElasticVectorSearch):\n \"\"\"\n A class for performing k-Nearest Neighbors (k-NN) search on an Elasticsearch index.\n The class is designed for a text search scenario where documents are text strings\n and their embeddings are vector representations of those strings.\n \"\"\"\n def __init__(\n self,\n index_name: str,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"}
+{"id": "ff7fad9ef071-7", "text": "\"\"\"\n def __init__(\n self,\n index_name: str,\n embedding: Embeddings,\n es_connection: Optional[\"Elasticsearch\"] = None,\n es_cloud_id: Optional[str] = None,\n es_user: Optional[str] = None,\n es_password: Optional[str] = None,\n vector_query_field: Optional[str] = \"vector\",\n query_field: Optional[str] = \"text\",\n ):\n \"\"\"\n Initializes an instance of the ElasticKnnSearch class and sets up the\n Elasticsearch client.\n Args:\n index_name: The name of the Elasticsearch index.\n embedding: An instance of the Embeddings class, used to generate vector\n representations of text strings.\n es_connection: An existing Elasticsearch connection.\n es_cloud_id: The Cloud ID of the Elasticsearch instance. Required if\n creating a new connection.\n es_user: The username for the Elasticsearch instance. Required if\n creating a new connection.\n es_password: The password for the Elasticsearch instance. Required if\n creating a new connection.\n \"\"\"\n try:\n import elasticsearch\n except ImportError:\n raise ImportError(\n \"Could not import elasticsearch python package. \"\n \"Please install it with `pip install elasticsearch`.\"\n )\n self.embedding = embedding\n self.index_name = index_name\n self.query_field = query_field\n self.vector_query_field = vector_query_field\n # If a pre-existing Elasticsearch connection is provided, use it.\n if es_connection is not None:\n self.client = es_connection\n else:\n # If credentials for a new Elasticsearch connection are provided,\n # create a new connection.\n if es_cloud_id and es_user and es_password:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"}
+{"id": "ff7fad9ef071-8", "text": "if es_cloud_id and es_user and es_password:\n self.client = elasticsearch.Elasticsearch(\n cloud_id=es_cloud_id, basic_auth=(es_user, es_password)\n )\n else:\n raise ValueError(\n \"\"\"Either provide a pre-existing Elasticsearch connection, \\\n or valid credentials for creating a new connection.\"\"\"\n )\n @staticmethod\n def _default_knn_mapping(dims: int) -> Dict:\n \"\"\"Generates a default index mapping for kNN search.\"\"\"\n return {\n \"properties\": {\n \"text\": {\"type\": \"text\"},\n \"vector\": {\n \"type\": \"dense_vector\",\n \"dims\": dims,\n \"index\": True,\n \"similarity\": \"dot_product\",\n },\n }\n }\n def _default_knn_query(\n self,\n query_vector: Optional[List[float]] = None,\n query: Optional[str] = None,\n model_id: Optional[str] = None,\n k: Optional[int] = 10,\n num_candidates: Optional[int] = 10,\n ) -> Dict:\n knn: Dict = {\n \"field\": self.vector_query_field,\n \"k\": k,\n \"num_candidates\": num_candidates,\n }\n # Case 1: `query_vector` is provided, but not `model_id` -> use query_vector\n if query_vector and not model_id:\n knn[\"query_vector\"] = query_vector\n # Case 2: `query` and `model_id` are provided, -> use query_vector_builder\n elif query and model_id:\n knn[\"query_vector_builder\"] = {\n \"text_embedding\": {", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"}
+{"id": "ff7fad9ef071-9", "text": "knn[\"query_vector_builder\"] = {\n \"text_embedding\": {\n \"model_id\": model_id, # use 'model_id' argument\n \"model_text\": query, # use 'query' argument\n }\n }\n else:\n raise ValueError(\n \"Either `query_vector` or `model_id` must be provided, but not both.\"\n )\n return knn\n def knn_search(\n self,\n query: Optional[str] = None,\n k: Optional[int] = 10,\n query_vector: Optional[List[float]] = None,\n model_id: Optional[str] = None,\n size: Optional[int] = 10,\n source: Optional[bool] = True,\n fields: Optional[\n Union[List[Mapping[str, Any]], Tuple[Mapping[str, Any], ...], None]\n ] = None,\n ) -> Dict:\n \"\"\"\n Performs a k-nearest neighbor (k-NN) search on the Elasticsearch index.\n The search can be conducted using either a raw query vector or a model ID.\n The method first generates\n the body of the search query, which can be interpreted by Elasticsearch.\n It then performs the k-NN\n search on the Elasticsearch index and returns the results.\n Args:\n query: The query or queries to be used for the search. Required if\n `query_vector` is not provided.\n k: The number of nearest neighbors to return. Defaults to 10.\n query_vector: The query vector to be used for the search. Required if\n `query` is not provided.\n model_id: The ID of the model to use for generating the query vector, if\n `query` is provided.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"}
+{"id": "ff7fad9ef071-10", "text": "`query` is provided.\n size: The number of search hits to return. Defaults to 10.\n source: Whether to include the source of each hit in the results.\n fields: The fields to include in the source of each hit. If None, all\n fields are included.\n vector_query_field: Field name to use in knn search if not default 'vector'\n Returns:\n The search results.\n Raises:\n ValueError: If neither `query_vector` nor `model_id` is provided, or if\n both are provided.\n \"\"\"\n knn_query_body = self._default_knn_query(\n query_vector=query_vector, query=query, model_id=model_id, k=k\n )\n # Perform the kNN search on the Elasticsearch index and return the results.\n res = self.client.search(\n index=self.index_name,\n knn=knn_query_body,\n size=size,\n source=source,\n fields=fields,\n )\n return dict(res)\n def knn_hybrid_search(\n self,\n query: Optional[str] = None,\n k: Optional[int] = 10,\n query_vector: Optional[List[float]] = None,\n model_id: Optional[str] = None,\n size: Optional[int] = 10,\n source: Optional[bool] = True,\n knn_boost: Optional[float] = 0.9,\n query_boost: Optional[float] = 0.1,\n fields: Optional[\n Union[List[Mapping[str, Any]], Tuple[Mapping[str, Any], ...], None]\n ] = None,\n ) -> Dict[Any, Any]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"}
+{"id": "ff7fad9ef071-11", "text": "] = None,\n ) -> Dict[Any, Any]:\n \"\"\"Performs a hybrid k-nearest neighbor (k-NN) and text-based search on the\n Elasticsearch index.\n The search can be conducted using either a raw query vector or a model ID.\n The method first generates\n the body of the k-NN search query and the text-based query, which can be\n interpreted by Elasticsearch.\n It then performs the hybrid search on the Elasticsearch index and returns the\n results.\n Args:\n query: The query or queries to be used for the search. Required if\n `query_vector` is not provided.\n k: The number of nearest neighbors to return. Defaults to 10.\n query_vector: The query vector to be used for the search. Required if\n `query` is not provided.\n model_id: The ID of the model to use for generating the query vector, if\n `query` is provided.\n size: The number of search hits to return. Defaults to 10.\n source: Whether to include the source of each hit in the results.\n knn_boost: The boost factor for the k-NN part of the search.\n query_boost: The boost factor for the text-based part of the search.\n fields\n The fields to include in the source of each hit. If None, all fields are\n included. Defaults to None.\n vector_query_field: Field name to use in knn search if not default 'vector'\n query_field: Field name to use in search if not default 'text'\n Returns:\n The search results.\n Raises:\n ValueError: If neither `query_vector` nor `model_id` is provided, or if\n both are provided.\n \"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"}
+{"id": "ff7fad9ef071-12", "text": "both are provided.\n \"\"\"\n knn_query_body = self._default_knn_query(\n query_vector=query_vector, query=query, model_id=model_id, k=k\n )\n # Modify the knn_query_body to add a \"boost\" parameter\n knn_query_body[\"boost\"] = knn_boost\n # Generate the body of the standard Elasticsearch query\n match_query_body = {\n \"match\": {self.query_field: {\"query\": query, \"boost\": query_boost}}\n }\n # Perform the hybrid search on the Elasticsearch index and return the results.\n res = self.client.search(\n index=self.index_name,\n query=match_query_body,\n knn=knn_query_body,\n fields=fields,\n size=size,\n source=source,\n )\n return dict(res)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"}
+{"id": "5ffc9e519391-0", "text": "Source code for langchain.vectorstores.pinecone\n\"\"\"Wrapper around Pinecone vector database.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport uuid\nfrom typing import Any, Callable, Iterable, List, Optional, Tuple\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger(__name__)\n[docs]class Pinecone(VectorStore):\n \"\"\"Wrapper around Pinecone vector database.\n To use, you should have the ``pinecone-client`` python package installed.\n Example:\n .. code-block:: python\n from langchain.vectorstores import Pinecone\n from langchain.embeddings.openai import OpenAIEmbeddings\n import pinecone\n # The environment should be the one specified next to the API key\n # in your Pinecone console\n pinecone.init(api_key=\"***\", environment=\"...\")\n index = pinecone.Index(\"langchain-demo\")\n embeddings = OpenAIEmbeddings()\n vectorstore = Pinecone(index, embeddings.embed_query, \"text\")\n \"\"\"\n def __init__(\n self,\n index: Any,\n embedding_function: Callable,\n text_key: str,\n namespace: Optional[str] = None,\n ):\n \"\"\"Initialize with Pinecone client.\"\"\"\n try:\n import pinecone\n except ImportError:\n raise ValueError(\n \"Could not import pinecone python package. \"\n \"Please install it with `pip install pinecone-client`.\"\n )\n if not isinstance(index, pinecone.index.Index):\n raise ValueError(\n f\"client should be an instance of pinecone.index.Index, \"\n f\"got {type(index)}\"\n )\n self._index = index", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html"}
+{"id": "5ffc9e519391-1", "text": "f\"got {type(index)}\"\n )\n self._index = index\n self._embedding_function = embedding_function\n self._text_key = text_key\n self._namespace = namespace\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n namespace: Optional[str] = None,\n batch_size: int = 32,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of ids to associate with the texts.\n namespace: Optional pinecone namespace to add the texts to.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n if namespace is None:\n namespace = self._namespace\n # Embed and create the documents\n docs = []\n ids = ids or [str(uuid.uuid4()) for _ in texts]\n for i, text in enumerate(texts):\n embedding = self._embedding_function(text)\n metadata = metadatas[i] if metadatas else {}\n metadata[self._text_key] = text\n docs.append((ids[i], embedding, metadata))\n # upsert to Pinecone\n self._index.upsert(vectors=docs, namespace=namespace, batch_size=batch_size)\n return ids\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html"}
+{"id": "5ffc9e519391-2", "text": "k: int = 4,\n filter: Optional[dict] = None,\n namespace: Optional[str] = None,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return pinecone documents most similar to query, along with scores.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: Dictionary of argument(s) to filter on metadata\n namespace: Namespace to search in. Default will search in '' namespace.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n if namespace is None:\n namespace = self._namespace\n query_obj = self._embedding_function(query)\n docs = []\n results = self._index.query(\n [query_obj],\n top_k=k,\n include_metadata=True,\n namespace=namespace,\n filter=filter,\n )\n for res in results[\"matches\"]:\n metadata = res[\"metadata\"]\n if self._text_key in metadata:\n text = metadata.pop(self._text_key)\n score = res[\"score\"]\n docs.append((Document(page_content=text, metadata=metadata), score))\n else:\n logger.warning(\n f\"Found document with no `{self._text_key}` key. Skipping.\"\n )\n return docs\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,\n namespace: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return pinecone documents most similar to query.\n Args:\n query: Text to look up documents similar to.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html"}
+{"id": "5ffc9e519391-3", "text": "Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: Dictionary of argument(s) to filter on metadata\n namespace: Namespace to search in. Default will search in '' namespace.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(\n query, k=k, filter=filter, namespace=namespace, **kwargs\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n batch_size: int = 32,\n text_key: str = \"text\",\n index_name: Optional[str] = None,\n namespace: Optional[str] = None,\n **kwargs: Any,\n ) -> Pinecone:\n \"\"\"Construct Pinecone wrapper from raw documents.\n This is a user friendly interface that:\n 1. Embeds documents.\n 2. Adds the documents to a provided Pinecone index\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain import Pinecone\n from langchain.embeddings import OpenAIEmbeddings\n import pinecone\n # The environment should be the one specified next to the API key\n # in your Pinecone console\n pinecone.init(api_key=\"***\", environment=\"...\")\n embeddings = OpenAIEmbeddings()\n pinecone = Pinecone.from_texts(\n texts,\n embeddings,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html"}
+{"id": "5ffc9e519391-4", "text": "pinecone = Pinecone.from_texts(\n texts,\n embeddings,\n index_name=\"langchain-demo\"\n )\n \"\"\"\n try:\n import pinecone\n except ImportError:\n raise ValueError(\n \"Could not import pinecone python package. \"\n \"Please install it with `pip install pinecone-client`.\"\n )\n indexes = pinecone.list_indexes() # checks if provided index exists\n if index_name in indexes:\n index = pinecone.Index(index_name)\n elif len(indexes) == 0:\n raise ValueError(\n \"No active indexes found in your Pinecone project, \"\n \"are you sure you're using the right API key and environment?\"\n )\n else:\n raise ValueError(\n f\"Index '{index_name}' not found in your Pinecone project. \"\n f\"Did you mean one of the following indexes: {', '.join(indexes)}\"\n )\n for i in range(0, len(texts), batch_size):\n # set end position of batch\n i_end = min(i + batch_size, len(texts))\n # get batch of texts and ids\n lines_batch = texts[i:i_end]\n # create ids if not provided\n if ids:\n ids_batch = ids[i:i_end]\n else:\n ids_batch = [str(uuid.uuid4()) for n in range(i, i_end)]\n # create embeddings\n embeds = embedding.embed_documents(lines_batch)\n # prep metadata and upsert batch\n if metadatas:\n metadata = metadatas[i:i_end]\n else:\n metadata = [{} for _ in range(i, i_end)]\n for j, line in enumerate(lines_batch):", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html"}
+{"id": "5ffc9e519391-5", "text": "for j, line in enumerate(lines_batch):\n metadata[j][text_key] = line\n to_upsert = zip(ids_batch, embeds, metadata)\n # upsert to Pinecone\n index.upsert(vectors=list(to_upsert), namespace=namespace)\n return cls(index, embedding.embed_query, text_key, namespace)\n[docs] @classmethod\n def from_existing_index(\n cls,\n index_name: str,\n embedding: Embeddings,\n text_key: str = \"text\",\n namespace: Optional[str] = None,\n ) -> Pinecone:\n \"\"\"Load pinecone vectorstore from index name.\"\"\"\n try:\n import pinecone\n except ImportError:\n raise ValueError(\n \"Could not import pinecone python package. \"\n \"Please install it with `pip install pinecone-client`.\"\n )\n return cls(\n pinecone.Index(index_name), embedding.embed_query, text_key, namespace\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html"}
+{"id": "452074a53f4a-0", "text": "Source code for langchain.vectorstores.sklearn\n\"\"\" Wrapper around scikit-learn NearestNeighbors implementation.\nThe vector store can be persisted in json, bson or parquet format.\n\"\"\"\nimport json\nimport math\nimport os\nfrom abc import ABC, abstractmethod\nfrom typing import Any, Dict, Iterable, List, Literal, Optional, Tuple, Type\nfrom uuid import uuid4\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import guard_import\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nDEFAULT_K = 4 # Number of Documents to return.\nDEFAULT_FETCH_K = 20 # Number of Documents to initially fetch during MMR search.\nclass BaseSerializer(ABC):\n \"\"\"Abstract base class for saving and loading data.\"\"\"\n def __init__(self, persist_path: str) -> None:\n self.persist_path = persist_path\n @classmethod\n @abstractmethod\n def extension(cls) -> str:\n \"\"\"The file extension suggested by this serializer (without dot).\"\"\"\n @abstractmethod\n def save(self, data: Any) -> None:\n \"\"\"Saves the data to the persist_path\"\"\"\n @abstractmethod\n def load(self) -> Any:\n \"\"\"Loads the data from the persist_path\"\"\"\nclass JsonSerializer(BaseSerializer):\n \"\"\"Serializes data in json using the json package from python standard library.\"\"\"\n @classmethod\n def extension(cls) -> str:\n return \"json\"\n def save(self, data: Any) -> None:\n with open(self.persist_path, \"w\") as fp:\n json.dump(data, fp)\n def load(self) -> Any:\n with open(self.persist_path, \"r\") as fp:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"}
+{"id": "452074a53f4a-1", "text": "with open(self.persist_path, \"r\") as fp:\n return json.load(fp)\nclass BsonSerializer(BaseSerializer):\n \"\"\"Serializes data in binary json using the bson python package.\"\"\"\n def __init__(self, persist_path: str) -> None:\n super().__init__(persist_path)\n self.bson = guard_import(\"bson\")\n @classmethod\n def extension(cls) -> str:\n return \"bson\"\n def save(self, data: Any) -> None:\n with open(self.persist_path, \"wb\") as fp:\n fp.write(self.bson.dumps(data))\n def load(self) -> Any:\n with open(self.persist_path, \"rb\") as fp:\n return self.bson.loads(fp.read())\nclass ParquetSerializer(BaseSerializer):\n \"\"\"Serializes data in Apache Parquet format using the pyarrow package.\"\"\"\n def __init__(self, persist_path: str) -> None:\n super().__init__(persist_path)\n self.pd = guard_import(\"pandas\")\n self.pa = guard_import(\"pyarrow\")\n self.pq = guard_import(\"pyarrow.parquet\")\n @classmethod\n def extension(cls) -> str:\n return \"parquet\"\n def save(self, data: Any) -> None:\n df = self.pd.DataFrame(data)\n table = self.pa.Table.from_pandas(df)\n if os.path.exists(self.persist_path):\n backup_path = str(self.persist_path) + \"-backup\"\n os.rename(self.persist_path, backup_path)\n try:\n self.pq.write_table(table, self.persist_path)\n except Exception as exc:\n os.rename(backup_path, self.persist_path)\n raise exc\n else:\n os.remove(backup_path)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"}
+{"id": "452074a53f4a-2", "text": "raise exc\n else:\n os.remove(backup_path)\n else:\n self.pq.write_table(table, self.persist_path)\n def load(self) -> Any:\n table = self.pq.read_table(self.persist_path)\n df = table.to_pandas()\n return {col: series.tolist() for col, series in df.items()}\nSERIALIZER_MAP: Dict[str, Type[BaseSerializer]] = {\n \"json\": JsonSerializer,\n \"bson\": BsonSerializer,\n \"parquet\": ParquetSerializer,\n}\nclass SKLearnVectorStoreException(RuntimeError):\n pass\n[docs]class SKLearnVectorStore(VectorStore):\n \"\"\"A simple in-memory vector store based on the scikit-learn library\n NearestNeighbors implementation.\"\"\"\n def __init__(\n self,\n embedding: Embeddings,\n *,\n persist_path: Optional[str] = None,\n serializer: Literal[\"json\", \"bson\", \"parquet\"] = \"json\",\n metric: str = \"cosine\",\n **kwargs: Any,\n ) -> None:\n np = guard_import(\"numpy\")\n sklearn_neighbors = guard_import(\"sklearn.neighbors\", pip_name=\"scikit-learn\")\n # non-persistent properties\n self._np = np\n self._neighbors = sklearn_neighbors.NearestNeighbors(metric=metric, **kwargs)\n self._neighbors_fitted = False\n self._embedding_function = embedding\n self._persist_path = persist_path\n self._serializer: Optional[BaseSerializer] = None\n if self._persist_path is not None:\n serializer_cls = SERIALIZER_MAP[serializer]\n self._serializer = serializer_cls(persist_path=self._persist_path)\n # data properties", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"}
+{"id": "452074a53f4a-3", "text": "# data properties\n self._embeddings: List[List[float]] = []\n self._texts: List[str] = []\n self._metadatas: List[dict] = []\n self._ids: List[str] = []\n # cache properties\n self._embeddings_np: Any = np.asarray([])\n if self._persist_path is not None and os.path.isfile(self._persist_path):\n self._load()\n[docs] def persist(self) -> None:\n if self._serializer is None:\n raise SKLearnVectorStoreException(\n \"You must specify a persist_path on creation to persist the \"\n \"collection.\"\n )\n data = {\n \"ids\": self._ids,\n \"texts\": self._texts,\n \"metadatas\": self._metadatas,\n \"embeddings\": self._embeddings,\n }\n self._serializer.save(data)\n def _load(self) -> None:\n if self._serializer is None:\n raise SKLearnVectorStoreException(\n \"You must specify a persist_path on creation to load the \" \"collection.\"\n )\n data = self._serializer.load()\n self._embeddings = data[\"embeddings\"]\n self._texts = data[\"texts\"]\n self._metadatas = data[\"metadatas\"]\n self._ids = data[\"ids\"]\n self._update_neighbors()\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n _texts = list(texts)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"}
+{"id": "452074a53f4a-4", "text": ") -> List[str]:\n _texts = list(texts)\n _ids = ids or [str(uuid4()) for _ in _texts]\n self._texts.extend(_texts)\n self._embeddings.extend(self._embedding_function.embed_documents(_texts))\n self._metadatas.extend(metadatas or ([{}] * len(_texts)))\n self._ids.extend(_ids)\n self._update_neighbors()\n return _ids\n def _update_neighbors(self) -> None:\n if len(self._embeddings) == 0:\n raise SKLearnVectorStoreException(\n \"No data was added to SKLearnVectorStore.\"\n )\n self._embeddings_np = self._np.asarray(self._embeddings)\n self._neighbors.fit(self._embeddings_np)\n self._neighbors_fitted = True\n def _similarity_index_search_with_score(\n self, query_embedding: List[float], *, k: int = DEFAULT_K, **kwargs: Any\n ) -> List[Tuple[int, float]]:\n \"\"\"Search k embeddings similar to the query embedding. Returns a list of\n (index, distance) tuples.\"\"\"\n if not self._neighbors_fitted:\n raise SKLearnVectorStoreException(\n \"No data was added to SKLearnVectorStore.\"\n )\n neigh_dists, neigh_idxs = self._neighbors.kneighbors(\n [query_embedding], n_neighbors=k\n )\n return list(zip(neigh_idxs[0], neigh_dists[0]))\n[docs] def similarity_search_with_score(\n self, query: str, *, k: int = DEFAULT_K, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n query_embedding = self._embedding_function.embed_query(query)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"}
+{"id": "452074a53f4a-5", "text": "query_embedding = self._embedding_function.embed_query(query)\n indices_dists = self._similarity_index_search_with_score(\n query_embedding, k=k, **kwargs\n )\n return [\n (\n Document(\n page_content=self._texts[idx],\n metadata={\"id\": self._ids[idx], **self._metadatas[idx]},\n ),\n dist,\n )\n for idx, dist in indices_dists\n ]\n[docs] def similarity_search(\n self, query: str, k: int = DEFAULT_K, **kwargs: Any\n ) -> List[Document]:\n docs_scores = self.similarity_search_with_score(query, k=k, **kwargs)\n return [doc for doc, _ in docs_scores]\n def _similarity_search_with_relevance_scores(\n self, query: str, k: int = DEFAULT_K, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n docs_dists = self.similarity_search_with_score(query, k=k, **kwargs)\n docs, dists = zip(*docs_dists)\n scores = [1 / math.exp(dist) for dist in dists]\n return list(zip(list(docs), scores))\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = DEFAULT_K,\n fetch_k: int = DEFAULT_FETCH_K,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"}
+{"id": "452074a53f4a-6", "text": "Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n indices_dists = self._similarity_index_search_with_score(\n embedding, k=fetch_k, **kwargs\n )\n indices, _ = zip(*indices_dists)\n result_embeddings = self._embeddings_np[indices,]\n mmr_selected = maximal_marginal_relevance(\n self._np.array(embedding, dtype=self._np.float32),\n result_embeddings,\n k=k,\n lambda_mult=lambda_mult,\n )\n mmr_indices = [indices[i] for i in mmr_selected]\n return [\n Document(\n page_content=self._texts[idx],\n metadata={\"id\": self._ids[idx], **self._metadatas[idx]},\n )\n for idx in mmr_indices\n ]\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = DEFAULT_K,\n fetch_k: int = DEFAULT_FETCH_K,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"}
+{"id": "452074a53f4a-7", "text": "among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n if self._embedding_function is None:\n raise ValueError(\n \"For MMR search, you must specify an embedding function on creation.\"\n )\n embedding = self._embedding_function.embed_query(query)\n docs = self.max_marginal_relevance_search_by_vector(\n embedding, k, fetch_k, lambda_mul=lambda_mult\n )\n return docs\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n persist_path: Optional[str] = None,\n **kwargs: Any,\n ) -> \"SKLearnVectorStore\":\n vs = SKLearnVectorStore(embedding, persist_path=persist_path, **kwargs)\n vs.add_texts(texts, metadatas=metadatas, ids=ids)\n return vs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"}
+{"id": "7b4a896e6191-0", "text": "Source code for langchain.vectorstores.deeplake\n\"\"\"Wrapper around Activeloop Deep Lake.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport uuid\nfrom functools import partial\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, Sequence, Tuple\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nlogger = logging.getLogger(__name__)\ndistance_metric_map = {\n \"l2\": lambda a, b: np.linalg.norm(a - b, axis=1, ord=2),\n \"l1\": lambda a, b: np.linalg.norm(a - b, axis=1, ord=1),\n \"max\": lambda a, b: np.linalg.norm(a - b, axis=1, ord=np.inf),\n \"cos\": lambda a, b: np.dot(a, b.T)\n / (np.linalg.norm(a) * np.linalg.norm(b, axis=1)),\n \"dot\": lambda a, b: np.dot(a, b.T),\n}\ndef vector_search(\n query_embedding: np.ndarray,\n data_vectors: np.ndarray,\n distance_metric: str = \"L2\",\n k: Optional[int] = 4,\n) -> Tuple[List, List]:\n \"\"\"Naive search for nearest neighbors\n args:\n query_embedding: np.ndarray\n data_vectors: np.ndarray\n k (int): number of nearest neighbors\n distance_metric: distance function 'L2' for Euclidean, 'L1' for Nuclear, 'Max'\n l-infinity distnace, 'cos' for cosine similarity, 'dot' for dot product\n returns:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"}
+{"id": "7b4a896e6191-1", "text": "returns:\n nearest_indices: List, indices of nearest neighbors\n \"\"\"\n if data_vectors.shape[0] == 0:\n return [], []\n # Calculate the distance between the query_vector and all data_vectors\n distances = distance_metric_map[distance_metric](query_embedding, data_vectors)\n nearest_indices = np.argsort(distances)\n nearest_indices = (\n nearest_indices[::-1][:k] if distance_metric in [\"cos\"] else nearest_indices[:k]\n )\n return nearest_indices.tolist(), distances[nearest_indices].tolist()\ndef dp_filter(x: dict, filter: Dict[str, str]) -> bool:\n \"\"\"Filter helper function for Deep Lake\"\"\"\n metadata = x[\"metadata\"].data()[\"value\"]\n return all(k in metadata and v == metadata[k] for k, v in filter.items())\n[docs]class DeepLake(VectorStore):\n \"\"\"Wrapper around Deep Lake, a data lake for deep learning applications.\n We implement naive similarity search and filtering for fast prototyping,\n but it can be extended with Tensor Query Language (TQL) for production use cases\n over billion rows.\n Why Deep Lake?\n - Not only stores embeddings, but also the original data with version control.\n - Serverless, doesn't require another service and can be used with major\n cloud providers (S3, GCS, etc.)\n - More than just a multi-modal vector store. You can use the dataset\n to fine-tune your own LLM models.\n To use, you should have the ``deeplake`` python package installed.\n Example:\n .. code-block:: python\n from langchain.vectorstores import DeepLake\n from langchain.embeddings.openai import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"}
+{"id": "7b4a896e6191-2", "text": "embeddings = OpenAIEmbeddings()\n vectorstore = DeepLake(\"langchain_store\", embeddings.embed_query)\n \"\"\"\n _LANGCHAIN_DEFAULT_DEEPLAKE_PATH = \"./deeplake/\"\n def __init__(\n self,\n dataset_path: str = _LANGCHAIN_DEFAULT_DEEPLAKE_PATH,\n token: Optional[str] = None,\n embedding_function: Optional[Embeddings] = None,\n read_only: Optional[bool] = False,\n ingestion_batch_size: int = 1024,\n num_workers: int = 0,\n verbose: bool = True,\n **kwargs: Any,\n ) -> None:\n \"\"\"Initialize with Deep Lake client.\"\"\"\n self.ingestion_batch_size = ingestion_batch_size\n self.num_workers = num_workers\n self.verbose = verbose\n try:\n import deeplake\n from deeplake.constants import MB\n except ImportError:\n raise ValueError(\n \"Could not import deeplake python package. \"\n \"Please install it with `pip install deeplake`.\"\n )\n self._deeplake = deeplake\n self.dataset_path = dataset_path\n creds_args = {\"creds\": kwargs[\"creds\"]} if \"creds\" in kwargs else {}\n if deeplake.exists(dataset_path, token=token, **creds_args) and not kwargs.get(\n \"overwrite\", False\n ):\n if \"overwrite\" in kwargs:\n del kwargs[\"overwrite\"]\n self.ds = deeplake.load(\n dataset_path,\n token=token,\n read_only=read_only,\n verbose=self.verbose,\n **kwargs,\n )\n logger.info(f\"Loading deeplake {dataset_path} from storage.\")\n if self.verbose:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"}
+{"id": "7b4a896e6191-3", "text": "if self.verbose:\n print(\n f\"Deep Lake Dataset in {dataset_path} already exists, \"\n f\"loading from the storage\"\n )\n self.ds.summary()\n else:\n if \"overwrite\" in kwargs:\n del kwargs[\"overwrite\"]\n self.ds = deeplake.empty(\n dataset_path,\n token=token,\n overwrite=True,\n verbose=self.verbose,\n **kwargs,\n )\n with self.ds:\n self.ds.create_tensor(\n \"text\",\n htype=\"text\",\n create_id_tensor=False,\n create_sample_info_tensor=False,\n create_shape_tensor=False,\n chunk_compression=\"lz4\",\n )\n self.ds.create_tensor(\n \"metadata\",\n htype=\"json\",\n create_id_tensor=False,\n create_sample_info_tensor=False,\n create_shape_tensor=False,\n chunk_compression=\"lz4\",\n )\n self.ds.create_tensor(\n \"embedding\",\n htype=\"generic\",\n dtype=np.float32,\n create_id_tensor=False,\n create_sample_info_tensor=False,\n max_chunk_size=64 * MB,\n create_shape_tensor=True,\n )\n self.ds.create_tensor(\n \"ids\",\n htype=\"text\",\n create_id_tensor=False,\n create_sample_info_tensor=False,\n create_shape_tensor=False,\n chunk_compression=\"lz4\",\n )\n self._embedding_function = embedding_function\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"}
+{"id": "7b4a896e6191-4", "text": "**kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts (Iterable[str]): Texts to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n ids (Optional[List[str]], optional): Optional list of IDs.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n if ids is None:\n ids = [str(uuid.uuid1()) for _ in texts]\n text_list = list(texts)\n if metadatas is None:\n metadatas = [{}] * len(text_list)\n elements = list(zip(text_list, metadatas, ids))\n @self._deeplake.compute\n def ingest(sample_in: list, sample_out: list) -> None:\n text_list = [s[0] for s in sample_in]\n embeds: Sequence[Optional[np.ndarray]] = []\n if self._embedding_function is not None:\n embeddings = self._embedding_function.embed_documents(text_list)\n embeds = [np.array(e, dtype=np.float32) for e in embeddings]\n else:\n embeds = [None] * len(text_list)\n for s, e in zip(sample_in, embeds):\n sample_out.append(\n {\n \"text\": s[0],\n \"metadata\": s[1],\n \"ids\": s[2],\n \"embedding\": e,\n }\n )\n batch_size = min(self.ingestion_batch_size, len(elements))\n if batch_size == 0:\n return []\n batched = [", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"}
+{"id": "7b4a896e6191-5", "text": "if batch_size == 0:\n return []\n batched = [\n elements[i : i + batch_size] for i in range(0, len(elements), batch_size)\n ]\n ingest().eval(\n batched,\n self.ds,\n num_workers=min(self.num_workers, len(batched) // max(self.num_workers, 1)),\n **kwargs,\n )\n self.ds.commit(allow_empty=True)\n if self.verbose:\n self.ds.summary()\n return ids\n def _search_helper(\n self,\n query: Any[str, None] = None,\n embedding: Any[float, None] = None,\n k: int = 4,\n distance_metric: str = \"L2\",\n use_maximal_marginal_relevance: Optional[bool] = False,\n fetch_k: Optional[int] = 20,\n filter: Optional[Any[Dict[str, str], Callable, str]] = None,\n return_score: Optional[bool] = False,\n **kwargs: Any,\n ) -> Any[List[Document], List[Tuple[Document, float]]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n embedding: Embedding function to use. Defaults to None.\n k: Number of Documents to return. Defaults to 4.\n distance_metric: `L2` for Euclidean, `L1` for Nuclear,\n `max` L-infinity distance, `cos` for cosine similarity,\n 'dot' for dot product. Defaults to `L2`.\n filter: Attribute filter by metadata example {'key': 'value'}. It can also\n take [Deep Lake filter]", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"}
+{"id": "7b4a896e6191-6", "text": "take [Deep Lake filter]\n (https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake.core.dataset.Dataset.filter)\n Defaults to None.\n maximal_marginal_relevance: Whether to use maximal marginal relevance.\n Defaults to False.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n Defaults to 20.\n return_score: Whether to return the score. Defaults to False.\n Returns:\n List of Documents selected by the specified distance metric,\n if return_score True, return a tuple of (Document, score)\n \"\"\"\n view = self.ds\n # attribute based filtering\n if filter is not None:\n if isinstance(filter, dict):\n filter = partial(dp_filter, filter=filter)\n view = view.filter(filter)\n if len(view) == 0:\n return []\n if self._embedding_function is None:\n view = view.filter(lambda x: query in x[\"text\"].data()[\"value\"])\n scores = [1.0] * len(view)\n if use_maximal_marginal_relevance:\n raise ValueError(\n \"For MMR search, you must specify an embedding function on\"\n \"creation.\"\n )\n else:\n emb = embedding or self._embedding_function.embed_query(\n query\n ) # type: ignore\n query_emb = np.array(emb, dtype=np.float32)\n embeddings = view.embedding.numpy(fetch_chunks=True)\n k_search = fetch_k if use_maximal_marginal_relevance else k\n indices, scores = vector_search(\n query_emb,\n embeddings,\n k=k_search,\n distance_metric=distance_metric.lower(),\n )\n view = view[indices]", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"}
+{"id": "7b4a896e6191-7", "text": "distance_metric=distance_metric.lower(),\n )\n view = view[indices]\n if use_maximal_marginal_relevance:\n lambda_mult = kwargs.get(\"lambda_mult\", 0.5)\n indices = maximal_marginal_relevance(\n query_emb,\n embeddings[indices],\n k=min(k, len(indices)),\n lambda_mult=lambda_mult,\n )\n view = view[indices]\n scores = [scores[i] for i in indices]\n docs = [\n Document(\n page_content=el[\"text\"].data()[\"value\"],\n metadata=el[\"metadata\"].data()[\"value\"],\n )\n for el in view\n ]\n if return_score:\n return [(doc, score) for doc, score in zip(docs, scores)]\n return docs\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: text to embed and run the query on.\n k: Number of Documents to return.\n Defaults to 4.\n query: Text to look up documents similar to.\n embedding: Embedding function to use.\n Defaults to None.\n k: Number of Documents to return.\n Defaults to 4.\n distance_metric: `L2` for Euclidean, `L1` for Nuclear, `max`\n L-infinity distance, `cos` for cosine similarity, 'dot' for dot product\n Defaults to `L2`.\n filter: Attribute filter by metadata example {'key': 'value'}.\n Defaults to None.\n maximal_marginal_relevance: Whether to use maximal marginal relevance.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"}
+{"id": "7b4a896e6191-8", "text": "maximal_marginal_relevance: Whether to use maximal marginal relevance.\n Defaults to False.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n Defaults to 20.\n return_score: Whether to return the score. Defaults to False.\n Returns:\n List of Documents most similar to the query vector.\n \"\"\"\n return self._search_helper(query=query, k=k, **kwargs)\n[docs] def similarity_search_by_vector(\n self, embedding: List[float], k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query vector.\n \"\"\"\n return self._search_helper(embedding=embedding, k=k, **kwargs)\n[docs] def similarity_search_with_score(\n self,\n query: str,\n distance_metric: str = \"L2\",\n k: int = 4,\n filter: Optional[Dict[str, str]] = None,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Run similarity search with Deep Lake with distance returned.\n Args:\n query (str): Query text to search for.\n distance_metric: `L2` for Euclidean, `L1` for Nuclear, `max` L-infinity\n distance, `cos` for cosine similarity, 'dot' for dot product.\n Defaults to `L2`.\n k (int): Number of results to return. Defaults to 4.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"}
+{"id": "7b4a896e6191-9", "text": "k (int): Number of results to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List[Tuple[Document, float]]: List of documents most similar to the query\n text with distance in float.\n \"\"\"\n return self._search_helper(\n query=query,\n k=k,\n filter=filter,\n return_score=True,\n distance_metric=distance_metric,\n )\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n return self._search_helper(\n embedding=embedding,\n k=k,\n fetch_k=fetch_k,\n use_maximal_marginal_relevance=True,\n lambda_mult=lambda_mult,\n **kwargs,\n )\n[docs] def max_marginal_relevance_search(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"}
+{"id": "7b4a896e6191-10", "text": ")\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n if self._embedding_function is None:\n raise ValueError(\n \"For MMR search, you must specify an embedding function on\" \"creation.\"\n )\n return self._search_helper(\n query=query,\n k=k,\n fetch_k=fetch_k,\n use_maximal_marginal_relevance=True,\n lambda_mult=lambda_mult,\n **kwargs,\n )\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Optional[Embeddings] = None,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n dataset_path: str = _LANGCHAIN_DEFAULT_DEEPLAKE_PATH,\n **kwargs: Any,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"}
+{"id": "7b4a896e6191-11", "text": "**kwargs: Any,\n ) -> DeepLake:\n \"\"\"Create a Deep Lake dataset from a raw documents.\n If a dataset_path is specified, the dataset will be persisted in that location,\n otherwise by default at `./deeplake`\n Args:\n path (str, pathlib.Path): - The full path to the dataset. Can be:\n - Deep Lake cloud path of the form ``hub://username/dataset_name``.\n To write to Deep Lake cloud datasets,\n ensure that you are logged in to Deep Lake\n (use 'activeloop login' from command line)\n - AWS S3 path of the form ``s3://bucketname/path/to/dataset``.\n Credentials are required in either the environment\n - Google Cloud Storage path of the form\n ``gcs://bucketname/path/to/dataset`` Credentials are required\n in either the environment\n - Local file system path of the form ``./path/to/dataset`` or\n ``~/path/to/dataset`` or ``path/to/dataset``.\n - In-memory path of the form ``mem://path/to/dataset`` which doesn't\n save the dataset, but keeps it in memory instead.\n Should be used only for testing as it does not persist.\n documents (List[Document]): List of documents to add.\n embedding (Optional[Embeddings]): Embedding function. Defaults to None.\n metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.\n ids (Optional[List[str]]): List of document IDs. Defaults to None.\n Returns:\n DeepLake: Deep Lake dataset.\n \"\"\"\n deeplake_dataset = cls(\n dataset_path=dataset_path, embedding_function=embedding, **kwargs\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"}
+{"id": "7b4a896e6191-12", "text": "dataset_path=dataset_path, embedding_function=embedding, **kwargs\n )\n deeplake_dataset.add_texts(texts=texts, metadatas=metadatas, ids=ids)\n return deeplake_dataset\n[docs] def delete(\n self,\n ids: Any[List[str], None] = None,\n filter: Any[Dict[str, str], None] = None,\n delete_all: Any[bool, None] = None,\n ) -> bool:\n \"\"\"Delete the entities in the dataset\n Args:\n ids (Optional[List[str]], optional): The document_ids to delete.\n Defaults to None.\n filter (Optional[Dict[str, str]], optional): The filter to delete by.\n Defaults to None.\n delete_all (Optional[bool], optional): Whether to drop the dataset.\n Defaults to None.\n \"\"\"\n if delete_all:\n self.ds.delete(large_ok=True)\n return True\n view = None\n if ids:\n view = self.ds.filter(lambda x: x[\"ids\"].data()[\"value\"] in ids)\n ids = list(view.sample_indices)\n if filter:\n if view is None:\n view = self.ds\n view = view.filter(partial(dp_filter, filter=filter))\n ids = list(view.sample_indices)\n with self.ds:\n for id in sorted(ids)[::-1]:\n self.ds.pop(id)\n self.ds.commit(f\"deleted {len(ids)} samples\", allow_empty=True)\n return True\n[docs] @classmethod\n def force_delete_by_path(cls, path: str) -> None:\n \"\"\"Force delete dataset by path\"\"\"\n try:\n import deeplake\n except ImportError:\n raise ValueError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"}
+{"id": "7b4a896e6191-13", "text": "try:\n import deeplake\n except ImportError:\n raise ValueError(\n \"Could not import deeplake python package. \"\n \"Please install it with `pip install deeplake`.\"\n )\n deeplake.delete(path, large_ok=True, force=True)\n[docs] def delete_dataset(self) -> None:\n \"\"\"Delete the collection.\"\"\"\n self.delete(delete_all=True)\n[docs] def persist(self) -> None:\n \"\"\"Persist the collection.\"\"\"\n self.ds.flush()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"}
+{"id": "582eead4a32f-0", "text": "Source code for langchain.vectorstores.matching_engine\n\"\"\"Vertex Matching Engine implementation of the vector store.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nimport time\nimport uuid\nfrom typing import TYPE_CHECKING, Any, Iterable, List, Optional, Type\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings import TensorflowHubEmbeddings\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nif TYPE_CHECKING:\n from google.cloud import storage\n from google.cloud.aiplatform import MatchingEngineIndex, MatchingEngineIndexEndpoint\n from google.oauth2.service_account import Credentials\nlogger = logging.getLogger()\n[docs]class MatchingEngine(VectorStore):\n \"\"\"Vertex Matching Engine implementation of the vector store.\n While the embeddings are stored in the Matching Engine, the embedded\n documents will be stored in GCS.\n An existing Index and corresponding Endpoint are preconditions for\n using this module.\n See usage in docs/modules/indexes/vectorstores/examples/matchingengine.ipynb\n Note that this implementation is mostly meant for reading if you are\n planning to do a real time implementation. While reading is a real time\n operation, updating the index takes close to one hour.\"\"\"\n def __init__(\n self,\n project_id: str,\n index: MatchingEngineIndex,\n endpoint: MatchingEngineIndexEndpoint,\n embedding: Embeddings,\n gcs_client: storage.Client,\n gcs_bucket_name: str,\n credentials: Optional[Credentials] = None,\n ):\n \"\"\"Vertex Matching Engine implementation of the vector store.\n While the embeddings are stored in the Matching Engine, the embedded\n documents will be stored in GCS.\n An existing Index and corresponding Endpoint are preconditions for\n using this module.\n See usage in", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"}
+{"id": "582eead4a32f-1", "text": "using this module.\n See usage in\n docs/modules/indexes/vectorstores/examples/matchingengine.ipynb.\n Note that this implementation is mostly meant for reading if you are\n planning to do a real time implementation. While reading is a real time\n operation, updating the index takes close to one hour.\n Attributes:\n project_id: The GCS project id.\n index: The created index class. See\n ~:func:`MatchingEngine.from_components`.\n endpoint: The created endpoint class. See\n ~:func:`MatchingEngine.from_components`.\n embedding: A :class:`Embeddings` that will be used for\n embedding the text sent. If none is sent, then the\n multilingual Tensorflow Universal Sentence Encoder will be used.\n gcs_client: The GCS client.\n gcs_bucket_name: The GCS bucket name.\n credentials (Optional): Created GCP credentials.\n \"\"\"\n super().__init__()\n self._validate_google_libraries_installation()\n self.project_id = project_id\n self.index = index\n self.endpoint = endpoint\n self.embedding = embedding\n self.gcs_client = gcs_client\n self.credentials = credentials\n self.gcs_bucket_name = gcs_bucket_name\n def _validate_google_libraries_installation(self) -> None:\n \"\"\"Validates that Google libraries that are needed are installed.\"\"\"\n try:\n from google.cloud import aiplatform, storage # noqa: F401\n from google.oauth2 import service_account # noqa: F401\n except ImportError:\n raise ImportError(\n \"You must run `pip install --upgrade \"\n \"google-cloud-aiplatform google-cloud-storage`\"\n \"to use the MatchingEngine Vectorstore.\"\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"}
+{"id": "582eead4a32f-2", "text": "\"to use the MatchingEngine Vectorstore.\"\n )\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n kwargs: vectorstore specific parameters.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n logger.debug(\"Embedding documents.\")\n embeddings = self.embedding.embed_documents(list(texts))\n jsons = []\n ids = []\n # Could be improved with async.\n for embedding, text in zip(embeddings, texts):\n id = str(uuid.uuid4())\n ids.append(id)\n jsons.append({\"id\": id, \"embedding\": embedding})\n self._upload_to_gcs(text, f\"documents/{id}\")\n logger.debug(f\"Uploaded {len(ids)} documents to GCS.\")\n # Creating json lines from the embedded documents.\n result_str = \"\\n\".join([json.dumps(x) for x in jsons])\n filename_prefix = f\"indexes/{uuid.uuid4()}\"\n filename = f\"{filename_prefix}/{time.time()}.json\"\n self._upload_to_gcs(result_str, filename)\n logger.debug(\n f\"Uploaded updated json with embeddings to \"\n f\"{self.gcs_bucket_name}/{filename}.\"\n )\n self.index = self.index.update_embeddings(\n contents_delta_uri=f\"gs://{self.gcs_bucket_name}/{filename_prefix}/\"\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"}
+{"id": "582eead4a32f-3", "text": ")\n logger.debug(\"Updated index with new configuration.\")\n return ids\n def _upload_to_gcs(self, data: str, gcs_location: str) -> None:\n \"\"\"Uploads data to gcs_location.\n Args:\n data: The data that will be stored.\n gcs_location: The location where the data will be stored.\n \"\"\"\n bucket = self.gcs_client.get_bucket(self.gcs_bucket_name)\n blob = bucket.blob(gcs_location)\n blob.upload_from_string(data)\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: The string that will be used to search for similar documents.\n k: The amount of neighbors that will be retrieved.\n Returns:\n A list of k matching documents.\n \"\"\"\n logger.debug(f\"Embedding query {query}.\")\n embedding_query = self.embedding.embed_documents([query])\n response = self.endpoint.match(\n deployed_index_id=self._get_index_id(),\n queries=embedding_query,\n num_neighbors=k,\n )\n if len(response) == 0:\n return []\n logger.debug(f\"Found {len(response)} matches for the query {query}.\")\n results = []\n # I'm only getting the first one because queries receives an array\n # and the similarity_search method only recevies one query. This\n # means that the match method will always return an array with only\n # one element.\n for doc in response[0]:\n page_content = self._download_from_gcs(f\"documents/{doc.id}\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"}
+{"id": "582eead4a32f-4", "text": "page_content = self._download_from_gcs(f\"documents/{doc.id}\")\n results.append(Document(page_content=page_content))\n logger.debug(\"Downloaded documents for query.\")\n return results\n def _get_index_id(self) -> str:\n \"\"\"Gets the correct index id for the endpoint.\n Returns:\n The index id if found (which should be found) or throws\n ValueError otherwise.\n \"\"\"\n for index in self.endpoint.deployed_indexes:\n if index.index == self.index.resource_name:\n return index.id\n raise ValueError(\n f\"No index with id {self.index.resource_name} \"\n f\"deployed on endpoint \"\n f\"{self.endpoint.display_name}.\"\n )\n def _download_from_gcs(self, gcs_location: str) -> str:\n \"\"\"Downloads from GCS in text format.\n Args:\n gcs_location: The location where the file is located.\n Returns:\n The string contents of the file.\n \"\"\"\n bucket = self.gcs_client.get_bucket(self.gcs_bucket_name)\n blob = bucket.blob(gcs_location)\n return blob.download_as_string()\n[docs] @classmethod\n def from_texts(\n cls: Type[\"MatchingEngine\"],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> \"MatchingEngine\":\n \"\"\"Use from components instead.\"\"\"\n raise NotImplementedError(\n \"This method is not implemented. Instead, you should initialize the class\"\n \" with `MatchingEngine.from_components(...)` and then call \"\n \"`add_texts`\"\n )\n[docs] @classmethod\n def from_components(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"}
+{"id": "582eead4a32f-5", "text": ")\n[docs] @classmethod\n def from_components(\n cls: Type[\"MatchingEngine\"],\n project_id: str,\n region: str,\n gcs_bucket_name: str,\n index_id: str,\n endpoint_id: str,\n credentials_path: Optional[str] = None,\n embedding: Optional[Embeddings] = None,\n ) -> \"MatchingEngine\":\n \"\"\"Takes the object creation out of the constructor.\n Args:\n project_id: The GCP project id.\n region: The default location making the API calls. It must have\n the same location as the GCS bucket and must be regional.\n gcs_bucket_name: The location where the vectors will be stored in\n order for the index to be created.\n index_id: The id of the created index.\n endpoint_id: The id of the created endpoint.\n credentials_path: (Optional) The path of the Google credentials on\n the local file system.\n embedding: The :class:`Embeddings` that will be used for\n embedding the texts.\n Returns:\n A configured MatchingEngine with the texts added to the index.\n \"\"\"\n gcs_bucket_name = cls._validate_gcs_bucket(gcs_bucket_name)\n credentials = cls._create_credentials_from_file(credentials_path)\n index = cls._create_index_by_id(index_id, project_id, region, credentials)\n endpoint = cls._create_endpoint_by_id(\n endpoint_id, project_id, region, credentials\n )\n gcs_client = cls._get_gcs_client(credentials, project_id)\n cls._init_aiplatform(project_id, region, gcs_bucket_name, credentials)\n return cls(\n project_id=project_id,\n index=index,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"}
+{"id": "582eead4a32f-6", "text": "return cls(\n project_id=project_id,\n index=index,\n endpoint=endpoint,\n embedding=embedding or cls._get_default_embeddings(),\n gcs_client=gcs_client,\n credentials=credentials,\n gcs_bucket_name=gcs_bucket_name,\n )\n @classmethod\n def _validate_gcs_bucket(cls, gcs_bucket_name: str) -> str:\n \"\"\"Validates the gcs_bucket_name as a bucket name.\n Args:\n gcs_bucket_name: The received bucket uri.\n Returns:\n A valid gcs_bucket_name or throws ValueError if full path is\n provided.\n \"\"\"\n gcs_bucket_name = gcs_bucket_name.replace(\"gs://\", \"\")\n if \"/\" in gcs_bucket_name:\n raise ValueError(\n f\"The argument gcs_bucket_name should only be \"\n f\"the bucket name. Received {gcs_bucket_name}\"\n )\n return gcs_bucket_name\n @classmethod\n def _create_credentials_from_file(\n cls, json_credentials_path: Optional[str]\n ) -> Optional[Credentials]:\n \"\"\"Creates credentials for GCP.\n Args:\n json_credentials_path: The path on the file system where the\n credentials are stored.\n Returns:\n An optional of Credentials or None, in which case the default\n will be used.\n \"\"\"\n from google.oauth2 import service_account\n credentials = None\n if json_credentials_path is not None:\n credentials = service_account.Credentials.from_service_account_file(\n json_credentials_path\n )\n return credentials\n @classmethod\n def _create_index_by_id(\n cls, index_id: str, project_id: str, region: str, credentials: \"Credentials\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"}
+{"id": "582eead4a32f-7", "text": ") -> MatchingEngineIndex:\n \"\"\"Creates a MatchingEngineIndex object by id.\n Args:\n index_id: The created index id.\n project_id: The project to retrieve index from.\n region: Location to retrieve index from.\n credentials: GCS credentials.\n Returns:\n A configured MatchingEngineIndex.\n \"\"\"\n from google.cloud import aiplatform\n logger.debug(f\"Creating matching engine index with id {index_id}.\")\n return aiplatform.MatchingEngineIndex(\n index_name=index_id,\n project=project_id,\n location=region,\n credentials=credentials,\n )\n @classmethod\n def _create_endpoint_by_id(\n cls, endpoint_id: str, project_id: str, region: str, credentials: \"Credentials\"\n ) -> MatchingEngineIndexEndpoint:\n \"\"\"Creates a MatchingEngineIndexEndpoint object by id.\n Args:\n endpoint_id: The created endpoint id.\n project_id: The project to retrieve index from.\n region: Location to retrieve index from.\n credentials: GCS credentials.\n Returns:\n A configured MatchingEngineIndexEndpoint.\n \"\"\"\n from google.cloud import aiplatform\n logger.debug(f\"Creating endpoint with id {endpoint_id}.\")\n return aiplatform.MatchingEngineIndexEndpoint(\n index_endpoint_name=endpoint_id,\n project=project_id,\n location=region,\n credentials=credentials,\n )\n @classmethod\n def _get_gcs_client(\n cls, credentials: \"Credentials\", project_id: str\n ) -> \"storage.Client\":\n \"\"\"Lazily creates a GCS client.\n Returns:\n A configured GCS client.\n \"\"\"\n from google.cloud import storage", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"}
+{"id": "582eead4a32f-8", "text": "A configured GCS client.\n \"\"\"\n from google.cloud import storage\n return storage.Client(credentials=credentials, project=project_id)\n @classmethod\n def _init_aiplatform(\n cls,\n project_id: str,\n region: str,\n gcs_bucket_name: str,\n credentials: \"Credentials\",\n ) -> None:\n \"\"\"Configures the aiplatform library.\n Args:\n project_id: The GCP project id.\n region: The default location making the API calls. It must have\n the same location as the GCS bucket and must be regional.\n gcs_bucket_name: GCS staging location.\n credentials: The GCS Credentials object.\n \"\"\"\n from google.cloud import aiplatform\n logger.debug(\n f\"Initializing AI Platform for project {project_id} on \"\n f\"{region} and for {gcs_bucket_name}.\"\n )\n aiplatform.init(\n project=project_id,\n location=region,\n staging_bucket=gcs_bucket_name,\n credentials=credentials,\n )\n @classmethod\n def _get_default_embeddings(cls) -> TensorflowHubEmbeddings:\n \"\"\"This function returns the default embedding.\n Returns:\n Default TensorflowHubEmbeddings to use.\n \"\"\"\n return TensorflowHubEmbeddings()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"}
+{"id": "b6f86bbcbcab-0", "text": "Source code for langchain.vectorstores.awadb\n\"\"\"Wrapper around AwaDB for embedding vectors\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import TYPE_CHECKING, Any, Iterable, List, Optional, Tuple, Type\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\n# from pydantic import BaseModel, Field, root_validator\nif TYPE_CHECKING:\n import awadb\nlogger = logging.getLogger()\nDEFAULT_TOPN = 4\n[docs]class AwaDB(VectorStore):\n \"\"\"Interface implemented by AwaDB vector stores.\"\"\"\n _DEFAULT_TABLE_NAME = \"langchain_awadb\"\n def __init__(\n self,\n table_name: str = _DEFAULT_TABLE_NAME,\n embedding_model: Optional[Embeddings] = None,\n log_and_data_dir: Optional[str] = None,\n client: Optional[awadb.Client] = None,\n ) -> None:\n \"\"\"Initialize with AwaDB client.\"\"\"\n try:\n import awadb\n except ImportError:\n raise ValueError(\n \"Could not import awadb python package. \"\n \"Please install it with `pip install awadb`.\"\n )\n if client is not None:\n self.awadb_client = client\n else:\n if log_and_data_dir is not None:\n self.awadb_client = awadb.Client(log_and_data_dir)\n else:\n self.awadb_client = awadb.Client()\n self.awadb_client.Create(table_name)\n if embedding_model is not None:\n self.embedding_model = embedding_model\n self.added_doc_count = 0\n[docs] def add_texts(\n self,\n texts: Iterable[str],", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"}
+{"id": "b6f86bbcbcab-1", "text": "[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n kwargs: vectorstore specific parameters\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n embeddings = None\n if self.embedding_model is not None:\n embeddings = self.embedding_model.embed_documents(list(texts))\n added_results: List[str] = []\n doc_no = 0\n for text in texts:\n doc: List[Any] = []\n if embeddings is not None:\n doc.append(text)\n doc.append(embeddings[doc_no])\n else:\n dict_tmp = {}\n dict_tmp[\"embedding_text\"] = text\n doc.append(dict_tmp)\n if metadatas is not None:\n if doc_no < metadatas.__len__():\n doc.append(metadatas[doc_no])\n self.awadb_client.Add(doc)\n added_results.append(str(self.added_doc_count))\n doc_no = doc_no + 1\n self.added_doc_count = self.added_doc_count + 1\n return added_results\n[docs] def load_local(\n self,\n table_name: str = _DEFAULT_TABLE_NAME,\n **kwargs: Any,\n ) -> bool:\n if self.awadb_client is None:", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"}
+{"id": "b6f86bbcbcab-2", "text": ") -> bool:\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n return self.awadb_client.Load(table_name)\n[docs] def similarity_search(\n self,\n query: str,\n k: int = DEFAULT_TOPN,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n embedding = None\n if self.embedding_model is not None:\n embedding = self.embedding_model.embed_query(query)\n return self.similarity_search_by_vector(embedding, k)\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = DEFAULT_TOPN,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores, normalized on a scale from 0 to 1.\n 0 is dissimilar, 1 is most similar.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n embedding = None\n if self.embedding_model is not None:\n embedding = self.embedding_model.embed_query(query)\n show_results = self.awadb_client.Search(embedding, k)\n results: List[Tuple[Document, float]] = []\n if show_results.__len__() == 0:\n return results\n scores: List[float] = []\n retrieval_docs = self.similarity_search_by_vector(embedding, k, scores)\n L2_Norm = 0.0\n for score in scores:\n L2_Norm = L2_Norm + score * score", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"}
+{"id": "b6f86bbcbcab-3", "text": "L2_Norm = L2_Norm + score * score\n L2_Norm = pow(L2_Norm, 0.5)\n doc_no = 0\n for doc in retrieval_docs:\n doc_tuple = (doc, 1 - scores[doc_no] / L2_Norm)\n results.append(doc_tuple)\n doc_no = doc_no + 1\n return results\n[docs] def similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = DEFAULT_TOPN,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores, normalized on a scale from 0 to 1.\n 0 is dissimilar, 1 is most similar.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n embedding = None\n if self.embedding_model is not None:\n embedding = self.embedding_model.embed_query(query)\n show_results = self.awadb_client.Search(embedding, k)\n results: List[Tuple[Document, float]] = []\n if show_results.__len__() == 0:\n return results\n scores: List[float] = []\n retrieval_docs = self.similarity_search_by_vector(embedding, k, scores)\n L2_Norm = 0.0\n for score in scores:\n L2_Norm = L2_Norm + score * score\n L2_Norm = pow(L2_Norm, 0.5)\n doc_no = 0\n for doc in retrieval_docs:\n doc_tuple = (doc, 1 - scores[doc_no] / L2_Norm)\n results.append(doc_tuple)", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"}
+{"id": "b6f86bbcbcab-4", "text": "results.append(doc_tuple)\n doc_no = doc_no + 1\n return results\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = DEFAULT_TOPN,\n scores: Optional[list] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query vector.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n show_results = self.awadb_client.Search(embedding, k)\n results: List[Document] = []\n if show_results.__len__() == 0:\n return results\n for item_detail in show_results[0][\"ResultItems\"]:\n content = \"\"\n meta_data = {}\n for item_key in item_detail:\n if item_key == \"Field@0\": # text for the document\n content = item_detail[item_key]\n elif item_key == \"Field@1\": # embedding field for the document\n continue\n elif item_key == \"score\": # L2 distance\n if scores is not None:\n score = item_detail[item_key]\n scores.append(score)\n else:\n meta_data[item_key] = item_detail[item_key]\n results.append(Document(page_content=content, metadata=meta_data))\n return results\n[docs] @classmethod\n def from_texts(\n cls: Type[AwaDB],\n texts: List[str],\n embedding: Optional[Embeddings] = None,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"}
+{"id": "b6f86bbcbcab-5", "text": "texts: List[str],\n embedding: Optional[Embeddings] = None,\n metadatas: Optional[List[dict]] = None,\n table_name: str = _DEFAULT_TABLE_NAME,\n logging_and_data_dir: Optional[str] = None,\n client: Optional[awadb.Client] = None,\n **kwargs: Any,\n ) -> AwaDB:\n \"\"\"Create an AwaDB vectorstore from a raw documents.\n Args:\n texts (List[str]): List of texts to add to the table.\n embedding (Optional[Embeddings]): Embedding function. Defaults to None.\n metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.\n table_name (str): Name of the table to create.\n logging_and_data_dir (Optional[str]): Directory of logging and persistence.\n client (Optional[awadb.Client]): AwaDB client\n Returns:\n AwaDB: AwaDB vectorstore.\n \"\"\"\n awadb_client = cls(\n table_name=table_name,\n embedding_model=embedding,\n log_and_data_dir=logging_and_data_dir,\n client=client,\n )\n awadb_client.add_texts(texts=texts, metadatas=metadatas)\n return awadb_client\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"}
+{"id": "3ba445096b48-0", "text": "Source code for langchain.vectorstores.annoy\n\"\"\"Wrapper around Annoy vector database.\"\"\"\nfrom __future__ import annotations\nimport os\nimport pickle\nimport uuid\nfrom configparser import ConfigParser\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, Tuple\nimport numpy as np\nfrom langchain.docstore.base import Docstore\nfrom langchain.docstore.document import Document\nfrom langchain.docstore.in_memory import InMemoryDocstore\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nINDEX_METRICS = frozenset([\"angular\", \"euclidean\", \"manhattan\", \"hamming\", \"dot\"])\nDEFAULT_METRIC = \"angular\"\ndef dependable_annoy_import() -> Any:\n \"\"\"Import annoy if available, otherwise raise error.\"\"\"\n try:\n import annoy\n except ImportError:\n raise ValueError(\n \"Could not import annoy python package. \"\n \"Please install it with `pip install --user annoy` \"\n )\n return annoy\n[docs]class Annoy(VectorStore):\n \"\"\"Wrapper around Annoy vector database.\n To use, you should have the ``annoy`` python package installed.\n Example:\n .. code-block:: python\n from langchain import Annoy\n db = Annoy(embedding_function, index, docstore, index_to_docstore_id)\n \"\"\"\n def __init__(\n self,\n embedding_function: Callable,\n index: Any,\n metric: str,\n docstore: Docstore,\n index_to_docstore_id: Dict[int, str],\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n self.embedding_function = embedding_function", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"}
+{"id": "3ba445096b48-1", "text": "):\n \"\"\"Initialize with necessary components.\"\"\"\n self.embedding_function = embedding_function\n self.index = index\n self.metric = metric\n self.docstore = docstore\n self.index_to_docstore_id = index_to_docstore_id\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n raise NotImplementedError(\n \"Annoy does not allow to add new data once the index is build.\"\n )\n[docs] def process_index_results(\n self, idxs: List[int], dists: List[float]\n ) -> List[Tuple[Document, float]]:\n \"\"\"Turns annoy results into a list of documents and scores.\n Args:\n idxs: List of indices of the documents in the index.\n dists: List of distances of the documents in the index.\n Returns:\n List of Documents and scores.\n \"\"\"\n docs = []\n for idx, dist in zip(idxs, dists):\n _id = self.index_to_docstore_id[idx]\n doc = self.docstore.search(_id)\n if not isinstance(doc, Document):\n raise ValueError(f\"Could not find document for id {_id}, got {doc}\")\n docs.append((doc, dist))\n return docs\n[docs] def similarity_search_with_score_by_vector(\n self, embedding: List[float], k: int = 4, search_k: int = -1\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"}
+{"id": "3ba445096b48-2", "text": "Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n idxs, dists = self.index.get_nns_by_vector(\n embedding, k, search_k=search_k, include_distances=True\n )\n return self.process_index_results(idxs, dists)\n[docs] def similarity_search_with_score_by_index(\n self, docstore_index: int, k: int = 4, search_k: int = -1\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n idxs, dists = self.index.get_nns_by_item(\n docstore_index, k, search_k=search_k, include_distances=True\n )\n return self.process_index_results(idxs, dists)\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4, search_k: int = -1\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"}
+{"id": "3ba445096b48-3", "text": "k: Number of Documents to return. Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n embedding = self.embedding_function(query)\n docs = self.similarity_search_with_score_by_vector(embedding, k, search_k)\n return docs\n[docs] def similarity_search_by_vector(\n self, embedding: List[float], k: int = 4, search_k: int = -1, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the embedding.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score_by_vector(\n embedding, k, search_k\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search_by_index(\n self, docstore_index: int, k: int = 4, search_k: int = -1, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to docstore_index.\n Args:\n docstore_index: Index of document in docstore\n k: Number of Documents to return. Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the embedding.\n \"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"}
+{"id": "3ba445096b48-4", "text": "Returns:\n List of Documents most similar to the embedding.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score_by_index(\n docstore_index, k, search_k\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search(\n self, query: str, k: int = 4, search_k: int = -1, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k, search_k)\n return [doc for doc, _ in docs_and_scores]\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n k: Number of Documents to return. Defaults to 4.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"}
+{"id": "3ba445096b48-5", "text": "of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n idxs = self.index.get_nns_by_vector(\n embedding, fetch_k, search_k=-1, include_distances=False\n )\n embeddings = [self.index.get_item_vector(i) for i in idxs]\n mmr_selected = maximal_marginal_relevance(\n np.array([embedding], dtype=np.float32),\n embeddings,\n k=k,\n lambda_mult=lambda_mult,\n )\n # ignore the -1's if not enough docs are returned/indexed\n selected_indices = [idxs[i] for i in mmr_selected if i != -1]\n docs = []\n for i in selected_indices:\n _id = self.index_to_docstore_id[i]\n doc = self.docstore.search(_id)\n if not isinstance(doc, Document):\n raise ValueError(f\"Could not find document for id {_id}, got {doc}\")\n docs.append(doc)\n return docs\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"}
+{"id": "3ba445096b48-6", "text": "k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n embedding = self.embedding_function(query)\n docs = self.max_marginal_relevance_search_by_vector(\n embedding, k, fetch_k, lambda_mult=lambda_mult\n )\n return docs\n @classmethod\n def __from(\n cls,\n texts: List[str],\n embeddings: List[List[float]],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n metric: str = DEFAULT_METRIC,\n trees: int = 100,\n n_jobs: int = -1,\n **kwargs: Any,\n ) -> Annoy:\n if metric not in INDEX_METRICS:\n raise ValueError(\n (\n f\"Unsupported distance metric: {metric}. \"\n f\"Expected one of {list(INDEX_METRICS)}\"\n )\n )\n annoy = dependable_annoy_import()\n if not embeddings:\n raise ValueError(\"embeddings must be provided to build AnnoyIndex\")\n f = len(embeddings[0])\n index = annoy.AnnoyIndex(f, metric=metric)\n for i, emb in enumerate(embeddings):\n index.add_item(i, emb)\n index.build(trees, n_jobs=n_jobs)\n documents = []\n for i, text in enumerate(texts):", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"}
+{"id": "3ba445096b48-7", "text": "documents = []\n for i, text in enumerate(texts):\n metadata = metadatas[i] if metadatas else {}\n documents.append(Document(page_content=text, metadata=metadata))\n index_to_id = {i: str(uuid.uuid4()) for i in range(len(documents))}\n docstore = InMemoryDocstore(\n {index_to_id[i]: doc for i, doc in enumerate(documents)}\n )\n return cls(embedding.embed_query, index, metric, docstore, index_to_id)\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n metric: str = DEFAULT_METRIC,\n trees: int = 100,\n n_jobs: int = -1,\n **kwargs: Any,\n ) -> Annoy:\n \"\"\"Construct Annoy wrapper from raw documents.\n Args:\n texts: List of documents to index.\n embedding: Embedding function to use.\n metadatas: List of metadata dictionaries to associate with documents.\n metric: Metric to use for indexing. Defaults to \"angular\".\n trees: Number of trees to use for indexing. Defaults to 100.\n n_jobs: Number of jobs to use for indexing. Defaults to -1.\n This is a user friendly interface that:\n 1. Embeds documents.\n 2. Creates an in memory docstore\n 3. Initializes the Annoy database\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain import Annoy\n from langchain.embeddings import OpenAIEmbeddings", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"}
+{"id": "3ba445096b48-8", "text": "from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n index = Annoy.from_texts(texts, embeddings)\n \"\"\"\n embeddings = embedding.embed_documents(texts)\n return cls.__from(\n texts, embeddings, embedding, metadatas, metric, trees, n_jobs, **kwargs\n )\n[docs] @classmethod\n def from_embeddings(\n cls,\n text_embeddings: List[Tuple[str, List[float]]],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n metric: str = DEFAULT_METRIC,\n trees: int = 100,\n n_jobs: int = -1,\n **kwargs: Any,\n ) -> Annoy:\n \"\"\"Construct Annoy wrapper from embeddings.\n Args:\n text_embeddings: List of tuples of (text, embedding)\n embedding: Embedding function to use.\n metadatas: List of metadata dictionaries to associate with documents.\n metric: Metric to use for indexing. Defaults to \"angular\".\n trees: Number of trees to use for indexing. Defaults to 100.\n n_jobs: Number of jobs to use for indexing. Defaults to -1\n This is a user friendly interface that:\n 1. Creates an in memory docstore with provided embeddings\n 2. Initializes the Annoy database\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain import Annoy\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n text_embeddings = embeddings.embed_documents(texts)\n text_embedding_pairs = list(zip(texts, text_embeddings))", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"}
+{"id": "3ba445096b48-9", "text": "text_embedding_pairs = list(zip(texts, text_embeddings))\n db = Annoy.from_embeddings(text_embedding_pairs, embeddings)\n \"\"\"\n texts = [t[0] for t in text_embeddings]\n embeddings = [t[1] for t in text_embeddings]\n return cls.__from(\n texts, embeddings, embedding, metadatas, metric, trees, n_jobs, **kwargs\n )\n[docs] def save_local(self, folder_path: str, prefault: bool = False) -> None:\n \"\"\"Save Annoy index, docstore, and index_to_docstore_id to disk.\n Args:\n folder_path: folder path to save index, docstore,\n and index_to_docstore_id to.\n prefault: Whether to pre-load the index into memory.\n \"\"\"\n path = Path(folder_path)\n os.makedirs(path, exist_ok=True)\n # save index, index config, docstore and index_to_docstore_id\n config_object = ConfigParser()\n config_object[\"ANNOY\"] = {\n \"f\": self.index.f,\n \"metric\": self.metric,\n }\n self.index.save(str(path / \"index.annoy\"), prefault=prefault)\n with open(path / \"index.pkl\", \"wb\") as file:\n pickle.dump((self.docstore, self.index_to_docstore_id, config_object), file)\n[docs] @classmethod\n def load_local(\n cls,\n folder_path: str,\n embeddings: Embeddings,\n ) -> Annoy:\n \"\"\"Load Annoy index, docstore, and index_to_docstore_id to disk.\n Args:\n folder_path: folder path to load index, docstore,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"}
+{"id": "3ba445096b48-10", "text": "Args:\n folder_path: folder path to load index, docstore,\n and index_to_docstore_id from.\n embeddings: Embeddings to use when generating queries.\n \"\"\"\n path = Path(folder_path)\n # load index separately since it is not picklable\n annoy = dependable_annoy_import()\n # load docstore and index_to_docstore_id\n with open(path / \"index.pkl\", \"rb\") as file:\n docstore, index_to_docstore_id, config_object = pickle.load(file)\n f = int(config_object[\"ANNOY\"][\"f\"])\n metric = config_object[\"ANNOY\"][\"metric\"]\n index = annoy.AnnoyIndex(f, metric=metric)\n index.load(str(path / \"index.annoy\"))\n return cls(\n embeddings.embed_query, index, metric, docstore, index_to_docstore_id\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"}
+{"id": "8078984a02a5-0", "text": "Source code for langchain.vectorstores.docarray.in_memory\n\"\"\"Wrapper around in-memory storage.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Literal, Optional\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.docarray.base import (\n DocArrayIndex,\n _check_docarray_import,\n)\n[docs]class DocArrayInMemorySearch(DocArrayIndex):\n \"\"\"Wrapper around in-memory storage for exact search.\n To use it, you should have the ``docarray`` package with version >=0.32.0 installed.\n You can install it with `pip install \"langchain[docarray]\"`.\n \"\"\"\n[docs] @classmethod\n def from_params(\n cls,\n embedding: Embeddings,\n metric: Literal[\n \"cosine_sim\", \"euclidian_dist\", \"sgeuclidean_dist\"\n ] = \"cosine_sim\",\n **kwargs: Any,\n ) -> DocArrayInMemorySearch:\n \"\"\"Initialize DocArrayInMemorySearch store.\n Args:\n embedding (Embeddings): Embedding function.\n metric (str): metric for exact nearest-neighbor search.\n Can be one of: \"cosine_sim\", \"euclidean_dist\" and \"sqeuclidean_dist\".\n Defaults to \"cosine_sim\".\n **kwargs: Other keyword arguments to be passed to the get_doc_cls method.\n \"\"\"\n _check_docarray_import()\n from docarray.index import InMemoryExactNNIndex\n doc_cls = cls._get_doc_cls(space=metric, **kwargs)\n doc_index = InMemoryExactNNIndex[doc_cls]() # type: ignore\n return cls(doc_index, embedding)\n[docs] @classmethod\n def from_texts(", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/in_memory.html"}
+{"id": "8078984a02a5-1", "text": "[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[Dict[Any, Any]]] = None,\n **kwargs: Any,\n ) -> DocArrayInMemorySearch:\n \"\"\"Create an DocArrayInMemorySearch store and insert data.\n Args:\n texts (List[str]): Text data.\n embedding (Embeddings): Embedding function.\n metadatas (Optional[List[Dict[Any, Any]]]): Metadata for each text\n if it exists. Defaults to None.\n metric (str): metric for exact nearest-neighbor search.\n Can be one of: \"cosine_sim\", \"euclidean_dist\" and \"sqeuclidean_dist\".\n Defaults to \"cosine_sim\".\n Returns:\n DocArrayInMemorySearch Vector Store\n \"\"\"\n store = cls.from_params(embedding, **kwargs)\n store.add_texts(texts=texts, metadatas=metadatas)\n return store\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/in_memory.html"}
+{"id": "0e3314b830fd-0", "text": "Source code for langchain.vectorstores.docarray.hnsw\n\"\"\"Wrapper around Hnswlib store.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, List, Literal, Optional\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.docarray.base import (\n DocArrayIndex,\n _check_docarray_import,\n)\n[docs]class DocArrayHnswSearch(DocArrayIndex):\n \"\"\"Wrapper around HnswLib storage.\n To use it, you should have the ``docarray`` package with version >=0.32.0 installed.\n You can install it with `pip install \"langchain[docarray]\"`.\n \"\"\"\n[docs] @classmethod\n def from_params(\n cls,\n embedding: Embeddings,\n work_dir: str,\n n_dim: int,\n dist_metric: Literal[\"cosine\", \"ip\", \"l2\"] = \"cosine\",\n max_elements: int = 1024,\n index: bool = True,\n ef_construction: int = 200,\n ef: int = 10,\n M: int = 16,\n allow_replace_deleted: bool = True,\n num_threads: int = 1,\n **kwargs: Any,\n ) -> DocArrayHnswSearch:\n \"\"\"Initialize DocArrayHnswSearch store.\n Args:\n embedding (Embeddings): Embedding function.\n work_dir (str): path to the location where all the data will be stored.\n n_dim (int): dimension of an embedding.\n dist_metric (str): Distance metric for DocArrayHnswSearch can be one of:\n \"cosine\", \"ip\", and \"l2\". Defaults to \"cosine\".", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/hnsw.html"}
+{"id": "0e3314b830fd-1", "text": "\"cosine\", \"ip\", and \"l2\". Defaults to \"cosine\".\n max_elements (int): Maximum number of vectors that can be stored.\n Defaults to 1024.\n index (bool): Whether an index should be built for this field.\n Defaults to True.\n ef_construction (int): defines a construction time/accuracy trade-off.\n Defaults to 200.\n ef (int): parameter controlling query time/accuracy trade-off.\n Defaults to 10.\n M (int): parameter that defines the maximum number of outgoing\n connections in the graph. Defaults to 16.\n allow_replace_deleted (bool): Enables replacing of deleted elements\n with new added ones. Defaults to True.\n num_threads (int): Sets the number of cpu threads to use. Defaults to 1.\n **kwargs: Other keyword arguments to be passed to the get_doc_cls method.\n \"\"\"\n _check_docarray_import()\n from docarray.index import HnswDocumentIndex\n doc_cls = cls._get_doc_cls(\n dim=n_dim,\n space=dist_metric,\n max_elements=max_elements,\n index=index,\n ef_construction=ef_construction,\n ef=ef,\n M=M,\n allow_replace_deleted=allow_replace_deleted,\n num_threads=num_threads,\n **kwargs,\n )\n doc_index = HnswDocumentIndex[doc_cls](work_dir=work_dir) # type: ignore\n return cls(doc_index, embedding)\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n work_dir: Optional[str] = None,", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/hnsw.html"}
+{"id": "0e3314b830fd-2", "text": "work_dir: Optional[str] = None,\n n_dim: Optional[int] = None,\n **kwargs: Any,\n ) -> DocArrayHnswSearch:\n \"\"\"Create an DocArrayHnswSearch store and insert data.\n Args:\n texts (List[str]): Text data.\n embedding (Embeddings): Embedding function.\n metadatas (Optional[List[dict]]): Metadata for each text if it exists.\n Defaults to None.\n work_dir (str): path to the location where all the data will be stored.\n n_dim (int): dimension of an embedding.\n **kwargs: Other keyword arguments to be passed to the __init__ method.\n Returns:\n DocArrayHnswSearch Vector Store\n \"\"\"\n if work_dir is None:\n raise ValueError(\"`work_dir` parameter has not been set.\")\n if n_dim is None:\n raise ValueError(\"`n_dim` parameter has not been set.\")\n store = cls.from_params(embedding, work_dir, n_dim, **kwargs)\n store.add_texts(texts=texts, metadatas=metadatas)\n return store\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/hnsw.html"}
+{"id": "31fd9adfb983-0", "text": "Source code for langchain.llms.llamacpp\n\"\"\"Wrapper around llama.cpp.\"\"\"\nimport logging\nfrom typing import Any, Dict, Generator, List, Optional\nfrom pydantic import Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nlogger = logging.getLogger(__name__)\n[docs]class LlamaCpp(LLM):\n \"\"\"Wrapper around the llama.cpp model.\n To use, you should have the llama-cpp-python library installed, and provide the\n path to the Llama model as a named parameter to the constructor.\n Check out: https://github.com/abetlen/llama-cpp-python\n Example:\n .. code-block:: python\n from langchain.llms import LlamaCppEmbeddings\n llm = LlamaCppEmbeddings(model_path=\"/path/to/llama/model\")\n \"\"\"\n client: Any #: :meta private:\n model_path: str\n \"\"\"The path to the Llama model file.\"\"\"\n lora_base: Optional[str] = None\n \"\"\"The path to the Llama LoRA base model.\"\"\"\n lora_path: Optional[str] = None\n \"\"\"The path to the Llama LoRA. If None, no LoRa is loaded.\"\"\"\n n_ctx: int = Field(512, alias=\"n_ctx\")\n \"\"\"Token context window.\"\"\"\n n_parts: int = Field(-1, alias=\"n_parts\")\n \"\"\"Number of parts to split the model into.\n If -1, the number of parts is automatically determined.\"\"\"\n seed: int = Field(-1, alias=\"seed\")\n \"\"\"Seed. If -1, a random seed is used.\"\"\"\n f16_kv: bool = Field(True, alias=\"f16_kv\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"}
+{"id": "31fd9adfb983-1", "text": "f16_kv: bool = Field(True, alias=\"f16_kv\")\n \"\"\"Use half-precision for key/value cache.\"\"\"\n logits_all: bool = Field(False, alias=\"logits_all\")\n \"\"\"Return logits for all tokens, not just the last token.\"\"\"\n vocab_only: bool = Field(False, alias=\"vocab_only\")\n \"\"\"Only load the vocabulary, no weights.\"\"\"\n use_mlock: bool = Field(False, alias=\"use_mlock\")\n \"\"\"Force system to keep model in RAM.\"\"\"\n n_threads: Optional[int] = Field(None, alias=\"n_threads\")\n \"\"\"Number of threads to use.\n If None, the number of threads is automatically determined.\"\"\"\n n_batch: Optional[int] = Field(8, alias=\"n_batch\")\n \"\"\"Number of tokens to process in parallel.\n Should be a number between 1 and n_ctx.\"\"\"\n n_gpu_layers: Optional[int] = Field(None, alias=\"n_gpu_layers\")\n \"\"\"Number of layers to be loaded into gpu memory. Default None.\"\"\"\n suffix: Optional[str] = Field(None)\n \"\"\"A suffix to append to the generated text. If None, no suffix is appended.\"\"\"\n max_tokens: Optional[int] = 256\n \"\"\"The maximum number of tokens to generate.\"\"\"\n temperature: Optional[float] = 0.8\n \"\"\"The temperature to use for sampling.\"\"\"\n top_p: Optional[float] = 0.95\n \"\"\"The top-p value to use for sampling.\"\"\"\n logprobs: Optional[int] = Field(None)\n \"\"\"The number of logprobs to return. If None, no logprobs are returned.\"\"\"\n echo: Optional[bool] = False\n \"\"\"Whether to echo the prompt.\"\"\"\n stop: Optional[List[str]] = []", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"}
+{"id": "31fd9adfb983-2", "text": "\"\"\"Whether to echo the prompt.\"\"\"\n stop: Optional[List[str]] = []\n \"\"\"A list of strings to stop generation when encountered.\"\"\"\n repeat_penalty: Optional[float] = 1.1\n \"\"\"The penalty to apply to repeated tokens.\"\"\"\n top_k: Optional[int] = 40\n \"\"\"The top-k value to use for sampling.\"\"\"\n last_n_tokens_size: Optional[int] = 64\n \"\"\"The number of tokens to look back when applying the repeat_penalty.\"\"\"\n use_mmap: Optional[bool] = True\n \"\"\"Whether to keep the model loaded in RAM\"\"\"\n streaming: bool = True\n \"\"\"Whether to stream the results, token by token.\"\"\"\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that llama-cpp-python library is installed.\"\"\"\n model_path = values[\"model_path\"]\n model_param_names = [\n \"lora_path\",\n \"lora_base\",\n \"n_ctx\",\n \"n_parts\",\n \"seed\",\n \"f16_kv\",\n \"logits_all\",\n \"vocab_only\",\n \"use_mlock\",\n \"n_threads\",\n \"n_batch\",\n \"use_mmap\",\n \"last_n_tokens_size\",\n ]\n model_params = {k: values[k] for k in model_param_names}\n # For backwards compatibility, only include if non-null.\n if values[\"n_gpu_layers\"] is not None:\n model_params[\"n_gpu_layers\"] = values[\"n_gpu_layers\"]\n try:\n from llama_cpp import Llama\n values[\"client\"] = Llama(model_path, **model_params)\n except ImportError:\n raise ModuleNotFoundError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"}
+{"id": "31fd9adfb983-3", "text": "except ImportError:\n raise ModuleNotFoundError(\n \"Could not import llama-cpp-python library. \"\n \"Please install the llama-cpp-python library to \"\n \"use this embedding model: pip install llama-cpp-python\"\n )\n except Exception as e:\n raise ValueError(\n f\"Could not load Llama model from path: {model_path}. \"\n f\"Received error {e}\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling llama_cpp.\"\"\"\n return {\n \"suffix\": self.suffix,\n \"max_tokens\": self.max_tokens,\n \"temperature\": self.temperature,\n \"top_p\": self.top_p,\n \"logprobs\": self.logprobs,\n \"echo\": self.echo,\n \"stop_sequences\": self.stop, # key here is convention among LLM classes\n \"repeat_penalty\": self.repeat_penalty,\n \"top_k\": self.top_k,\n }\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_path\": self.model_path}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"llama.cpp\"\n def _get_parameters(self, stop: Optional[List[str]] = None) -> Dict[str, Any]:\n \"\"\"\n Performs sanity check, preparing parameters in format needed by llama_cpp.\n Args:\n stop (Optional[List[str]]): List of stop sequences for llama_cpp.\n Returns:\n Dictionary containing the combined parameters.\n \"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"}
+{"id": "31fd9adfb983-4", "text": "Returns:\n Dictionary containing the combined parameters.\n \"\"\"\n # Raise error if stop sequences are in both input and default params\n if self.stop and stop is not None:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params = self._default_params\n # llama_cpp expects the \"stop\" key not this, so we remove it:\n params.pop(\"stop_sequences\")\n # then sets it as configured, or default to an empty list:\n params[\"stop\"] = self.stop or stop or []\n return params\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call the Llama model and return the output.\n Args:\n prompt: The prompt to use for generation.\n stop: A list of strings to stop generation when encountered.\n Returns:\n The generated text.\n Example:\n .. code-block:: python\n from langchain.llms import LlamaCpp\n llm = LlamaCpp(model_path=\"/path/to/local/llama/model.bin\")\n llm(\"This is a prompt.\")\n \"\"\"\n if self.streaming:\n # If streaming is enabled, we use the stream\n # method that yields as they are generated\n # and return the combined strings from the first choices's text:\n combined_text_output = \"\"\n for token in self.stream(prompt=prompt, stop=stop, run_manager=run_manager):\n combined_text_output += token[\"choices\"][0][\"text\"]\n return combined_text_output\n else:\n params = self._get_parameters(stop)\n result = self.client(prompt=prompt, **params)", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"}
+{"id": "31fd9adfb983-5", "text": "result = self.client(prompt=prompt, **params)\n return result[\"choices\"][0][\"text\"]\n[docs] def stream(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> Generator[Dict, None, None]:\n \"\"\"Yields results objects as they are generated in real time.\n BETA: this is a beta feature while we figure out the right abstraction.\n Once that happens, this interface could change.\n It also calls the callback manager's on_llm_new_token event with\n similar parameters to the OpenAI LLM class method of the same name.\n Args:\n prompt: The prompts to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n A generator representing the stream of tokens being generated.\n Yields:\n A dictionary like objects containing a string token and metadata.\n See llama-cpp-python docs and below for more.\n Example:\n .. code-block:: python\n from langchain.llms import LlamaCpp\n llm = LlamaCpp(\n model_path=\"/path/to/local/model.bin\",\n temperature = 0.5\n )\n for chunk in llm.stream(\"Ask 'Hi, how are you?' like a pirate:'\",\n stop=[\"'\",\"\\n\"]):\n result = chunk[\"choices\"][0]\n print(result[\"text\"], end='', flush=True)\n \"\"\"\n params = self._get_parameters(stop)\n result = self.client(prompt=prompt, stream=True, **params)\n for chunk in result:\n token = chunk[\"choices\"][0][\"text\"]", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"}
+{"id": "31fd9adfb983-6", "text": "for chunk in result:\n token = chunk[\"choices\"][0][\"text\"]\n log_probs = chunk[\"choices\"][0].get(\"logprobs\", None)\n if run_manager:\n run_manager.on_llm_new_token(\n token=token, verbose=self.verbose, log_probs=log_probs\n )\n yield chunk\n[docs] def get_num_tokens(self, text: str) -> int:\n tokenized_text = self.client.tokenize(text.encode(\"utf-8\"))\n return len(tokenized_text)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"}
+{"id": "b44701dcae4f-0", "text": "Source code for langchain.llms.writer\n\"\"\"Wrapper around Writer APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\n[docs]class Writer(LLM):\n \"\"\"Wrapper around Writer large language models.\n To use, you should have the environment variable ``WRITER_API_KEY`` and\n ``WRITER_ORG_ID`` set with your API key and organization ID respectively.\n Example:\n .. code-block:: python\n from langchain import Writer\n writer = Writer(model_id=\"palmyra-base\")\n \"\"\"\n writer_org_id: Optional[str] = None\n \"\"\"Writer organization ID.\"\"\"\n model_id: str = \"palmyra-instruct\"\n \"\"\"Model name to use.\"\"\"\n min_tokens: Optional[int] = None\n \"\"\"Minimum number of tokens to generate.\"\"\"\n max_tokens: Optional[int] = None\n \"\"\"Maximum number of tokens to generate.\"\"\"\n temperature: Optional[float] = None\n \"\"\"What sampling temperature to use.\"\"\"\n top_p: Optional[float] = None\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n stop: Optional[List[str]] = None\n \"\"\"Sequences when completion generation will stop.\"\"\"\n presence_penalty: Optional[float] = None\n \"\"\"Penalizes repeated tokens regardless of frequency.\"\"\"\n repetition_penalty: Optional[float] = None\n \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n best_of: Optional[int] = None\n \"\"\"Generates this many completions server-side and returns the \"best\".\"\"\"\n logprobs: bool = False", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/writer.html"}
+{"id": "b44701dcae4f-1", "text": "logprobs: bool = False\n \"\"\"Whether to return log probabilities.\"\"\"\n n: Optional[int] = None\n \"\"\"How many completions to generate.\"\"\"\n writer_api_key: Optional[str] = None\n \"\"\"Writer API key.\"\"\"\n base_url: Optional[str] = None\n \"\"\"Base url to use, if None decides based on model name.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and organization id exist in environment.\"\"\"\n writer_api_key = get_from_dict_or_env(\n values, \"writer_api_key\", \"WRITER_API_KEY\"\n )\n values[\"writer_api_key\"] = writer_api_key\n writer_org_id = get_from_dict_or_env(values, \"writer_org_id\", \"WRITER_ORG_ID\")\n values[\"writer_org_id\"] = writer_org_id\n return values\n @property\n def _default_params(self) -> Mapping[str, Any]:\n \"\"\"Get the default parameters for calling Writer API.\"\"\"\n return {\n \"minTokens\": self.min_tokens,\n \"maxTokens\": self.max_tokens,\n \"temperature\": self.temperature,\n \"topP\": self.top_p,\n \"stop\": self.stop,\n \"presencePenalty\": self.presence_penalty,\n \"repetitionPenalty\": self.repetition_penalty,\n \"bestOf\": self.best_of,\n \"logprobs\": self.logprobs,\n \"n\": self.n,\n }\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/writer.html"}
+{"id": "b44701dcae4f-2", "text": "\"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"model_id\": self.model_id, \"writer_org_id\": self.writer_org_id},\n **self._default_params,\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"writer\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call out to Writer's completions endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = Writer(\"Tell me a joke.\")\n \"\"\"\n if self.base_url is not None:\n base_url = self.base_url\n else:\n base_url = (\n \"https://enterprise-api.writer.com/llm\"\n f\"/organization/{self.writer_org_id}\"\n f\"/model/{self.model_id}/completions\"\n )\n response = requests.post(\n url=base_url,\n headers={\n \"Authorization\": f\"{self.writer_api_key}\",\n \"Content-Type\": \"application/json\",\n \"Accept\": \"application/json\",\n },\n json={\"prompt\": prompt, **self._default_params},\n )\n text = response.text\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text\nBy Harrison Chase", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/writer.html"}
+{"id": "b44701dcae4f-3", "text": "return text\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/writer.html"}
+{"id": "c5671f6bebf9-0", "text": "Source code for langchain.llms.bedrock\nimport json\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nclass LLMInputOutputAdapter:\n \"\"\"Adapter class to prepare the inputs from Langchain to a format\n that LLM model expects. Also, provides helper function to extract\n the generated text from the model response.\"\"\"\n @classmethod\n def prepare_input(\n cls, provider: str, prompt: str, model_kwargs: Dict[str, Any]\n ) -> Dict[str, Any]:\n input_body = {**model_kwargs}\n if provider == \"anthropic\" or provider == \"ai21\":\n input_body[\"prompt\"] = prompt\n elif provider == \"amazon\":\n input_body = dict()\n input_body[\"inputText\"] = prompt\n input_body[\"textGenerationConfig\"] = {**model_kwargs}\n else:\n input_body[\"inputText\"] = prompt\n if provider == \"anthropic\" and \"max_tokens_to_sample\" not in input_body:\n input_body[\"max_tokens_to_sample\"] = 50\n return input_body\n @classmethod\n def prepare_output(cls, provider: str, response: Any) -> str:\n if provider == \"anthropic\":\n response_body = json.loads(response.get(\"body\").read().decode())\n return response_body.get(\"completion\")\n else:\n response_body = json.loads(response.get(\"body\").read())\n if provider == \"ai21\":\n return response_body.get(\"completions\")[0].get(\"data\").get(\"text\")\n else:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/bedrock.html"}
+{"id": "c5671f6bebf9-1", "text": "else:\n return response_body.get(\"results\")[0].get(\"outputText\")\n[docs]class Bedrock(LLM):\n \"\"\"LLM provider to invoke Bedrock models.\n To authenticate, the AWS client uses the following methods to\n automatically load credentials:\n https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n If a specific credential profile should be used, you must pass\n the name of the profile from the ~/.aws/credentials file that is to be used.\n Make sure the credentials / roles used have the required policies to\n access the Bedrock service.\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n from bedrock_langchain.bedrock_llm import BedrockLLM\n llm = BedrockLLM(\n credentials_profile_name=\"default\", \n model_id=\"amazon.titan-tg1-large\"\n )\n \"\"\"\n client: Any #: :meta private:\n region_name: Optional[str] = None\n \"\"\"The aws region e.g., `us-west-2`. Fallsback to AWS_DEFAULT_REGION env variable\n or region specified in ~/.aws/config in case it is not provided here.\n \"\"\"\n credentials_profile_name: Optional[str] = None\n \"\"\"The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\n has either access keys or role information specified.\n If not specified, the default credential profile or, if on an EC2 instance,\n credentials from IMDS will be used.\n See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n \"\"\"\n model_id: str\n \"\"\"Id of the model to call, e.g., amazon.titan-tg1-large, this is", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/bedrock.html"}
+{"id": "c5671f6bebf9-2", "text": "equivalent to the modelId property in the list-foundation-models api\"\"\"\n model_kwargs: Optional[Dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that AWS credentials to and python package exists in environment.\"\"\"\n # Skip creating new client if passed in constructor\n if values[\"client\"] is not None:\n return values\n try:\n import boto3\n if values[\"credentials_profile_name\"] is not None:\n session = boto3.Session(profile_name=values[\"credentials_profile_name\"])\n else:\n # use default credentials\n session = boto3.Session()\n client_params = {}\n if values[\"region_name\"]:\n client_params[\"region_name\"] = values[\"region_name\"]\n values[\"client\"] = session.client(\"bedrock\", **client_params)\n except ImportError:\n raise ModuleNotFoundError(\n \"Could not import boto3 python package. \"\n \"Please install it with `pip install boto3`.\"\n )\n except Exception as e:\n raise ValueError(\n \"Could not load credentials to authenticate with AWS client. \"\n \"Please check that credentials in the specified \"\n \"profile name are valid.\"\n ) from e\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/bedrock.html"}
+{"id": "c5671f6bebf9-3", "text": "\"\"\"Return type of llm.\"\"\"\n return \"amazon_bedrock\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call out to Bedrock service model.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = se(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n provider = self.model_id.split(\".\")[0]\n input_body = LLMInputOutputAdapter.prepare_input(\n provider, prompt, _model_kwargs\n )\n body = json.dumps(input_body)\n accept = \"application/json\"\n contentType = \"application/json\"\n try:\n response = self.client.invoke_model(\n body=body, modelId=self.model_id, accept=accept, contentType=contentType\n )\n text = LLMInputOutputAdapter.prepare_output(provider, response)\n except Exception as e:\n raise ValueError(f\"Error raised by bedrock service: {e}\")\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/bedrock.html"}
+{"id": "8605d99651e1-0", "text": "Source code for langchain.llms.gpt4all\n\"\"\"Wrapper for the GPT4All model.\"\"\"\nfrom functools import partial\nfrom typing import Any, Dict, List, Mapping, Optional, Set\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\n[docs]class GPT4All(LLM):\n r\"\"\"Wrapper around GPT4All language models.\n To use, you should have the ``gpt4all`` python package installed, the\n pre-trained model file, and the model's config information.\n Example:\n .. code-block:: python\n from langchain.llms import GPT4All\n model = GPT4All(model=\"./models/gpt4all-model.bin\", n_ctx=512, n_threads=8)\n # Simplest invocation\n response = model(\"Once upon a time, \")\n \"\"\"\n model: str\n \"\"\"Path to the pre-trained GPT4All model file.\"\"\"\n backend: Optional[str] = Field(None, alias=\"backend\")\n n_ctx: int = Field(512, alias=\"n_ctx\")\n \"\"\"Token context window.\"\"\"\n n_parts: int = Field(-1, alias=\"n_parts\")\n \"\"\"Number of parts to split the model into. \n If -1, the number of parts is automatically determined.\"\"\"\n seed: int = Field(0, alias=\"seed\")\n \"\"\"Seed. If -1, a random seed is used.\"\"\"\n f16_kv: bool = Field(False, alias=\"f16_kv\")\n \"\"\"Use half-precision for key/value cache.\"\"\"\n logits_all: bool = Field(False, alias=\"logits_all\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/gpt4all.html"}
+{"id": "8605d99651e1-1", "text": "logits_all: bool = Field(False, alias=\"logits_all\")\n \"\"\"Return logits for all tokens, not just the last token.\"\"\"\n vocab_only: bool = Field(False, alias=\"vocab_only\")\n \"\"\"Only load the vocabulary, no weights.\"\"\"\n use_mlock: bool = Field(False, alias=\"use_mlock\")\n \"\"\"Force system to keep model in RAM.\"\"\"\n embedding: bool = Field(False, alias=\"embedding\")\n \"\"\"Use embedding mode only.\"\"\"\n n_threads: Optional[int] = Field(4, alias=\"n_threads\")\n \"\"\"Number of threads to use.\"\"\"\n n_predict: Optional[int] = 256\n \"\"\"The maximum number of tokens to generate.\"\"\"\n temp: Optional[float] = 0.8\n \"\"\"The temperature to use for sampling.\"\"\"\n top_p: Optional[float] = 0.95\n \"\"\"The top-p value to use for sampling.\"\"\"\n top_k: Optional[int] = 40\n \"\"\"The top-k value to use for sampling.\"\"\"\n echo: Optional[bool] = False\n \"\"\"Whether to echo the prompt.\"\"\"\n stop: Optional[List[str]] = []\n \"\"\"A list of strings to stop generation when encountered.\"\"\"\n repeat_last_n: Optional[int] = 64\n \"Last n tokens to penalize\"\n repeat_penalty: Optional[float] = 1.3\n \"\"\"The penalty to apply to repeated tokens.\"\"\"\n n_batch: int = Field(1, alias=\"n_batch\")\n \"\"\"Batch size for prompt processing.\"\"\"\n streaming: bool = False\n \"\"\"Whether to stream the results or not.\"\"\"\n context_erase: float = 0.5\n \"\"\"Leave (n_ctx * context_erase) tokens\n starting from beginning if the context has run out.\"\"\"\n allow_download: bool = False", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/gpt4all.html"}
+{"id": "8605d99651e1-2", "text": "starting from beginning if the context has run out.\"\"\"\n allow_download: bool = False\n \"\"\"If model does not exist in ~/.cache/gpt4all/, download it.\"\"\"\n client: Any = None #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @staticmethod\n def _model_param_names() -> Set[str]:\n return {\n \"n_ctx\",\n \"n_predict\",\n \"top_k\",\n \"top_p\",\n \"temp\",\n \"n_batch\",\n \"repeat_penalty\",\n \"repeat_last_n\",\n \"context_erase\",\n }\n def _default_params(self) -> Dict[str, Any]:\n return {\n \"n_ctx\": self.n_ctx,\n \"n_predict\": self.n_predict,\n \"top_k\": self.top_k,\n \"top_p\": self.top_p,\n \"temp\": self.temp,\n \"n_batch\": self.n_batch,\n \"repeat_penalty\": self.repeat_penalty,\n \"repeat_last_n\": self.repeat_last_n,\n \"context_erase\": self.context_erase,\n }\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in the environment.\"\"\"\n try:\n from gpt4all import GPT4All as GPT4AllModel\n except ImportError:\n raise ImportError(\n \"Could not import gpt4all python package. \"\n \"Please install it with `pip install gpt4all`.\"\n )\n full_path = values[\"model\"]\n model_path, delimiter, model_name = full_path.rpartition(\"/\")\n model_path += delimiter", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/gpt4all.html"}
+{"id": "8605d99651e1-3", "text": "model_path += delimiter\n values[\"client\"] = GPT4AllModel(\n model_name,\n model_path=model_path or None,\n model_type=values[\"backend\"],\n allow_download=values[\"allow_download\"],\n )\n if values[\"n_threads\"] is not None:\n # set n_threads\n values[\"client\"].model.set_thread_count(values[\"n_threads\"])\n try:\n values[\"backend\"] = values[\"client\"].model_type\n except AttributeError:\n # The below is for compatibility with GPT4All Python bindings <= 0.2.3.\n values[\"backend\"] = values[\"client\"].model.model_type\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model\": self.model,\n **self._default_params(),\n **{\n k: v for k, v in self.__dict__.items() if k in self._model_param_names()\n },\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return the type of llm.\"\"\"\n return \"gpt4all\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n r\"\"\"Call out to GPT4All's generate method.\n Args:\n prompt: The prompt to pass into the model.\n stop: A list of strings to stop generation when encountered.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n prompt = \"Once upon a time, \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/gpt4all.html"}
+{"id": "8605d99651e1-4", "text": ".. code-block:: python\n prompt = \"Once upon a time, \"\n response = model(prompt, n_predict=55)\n \"\"\"\n text_callback = None\n if run_manager:\n text_callback = partial(run_manager.on_llm_new_token, verbose=self.verbose)\n text = \"\"\n for token in self.client.generate(prompt, **self._default_params()):\n if text_callback:\n text_callback(token)\n text += token\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/gpt4all.html"}
+{"id": "c5b0cfb8e98a-0", "text": "Source code for langchain.llms.self_hosted_hugging_face\n\"\"\"Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.\"\"\"\nimport importlib.util\nimport logging\nfrom typing import Any, Callable, List, Mapping, Optional\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.self_hosted import SelfHostedPipeline\nfrom langchain.llms.utils import enforce_stop_tokens\nDEFAULT_MODEL_ID = \"gpt2\"\nDEFAULT_TASK = \"text-generation\"\nVALID_TASKS = (\"text2text-generation\", \"text-generation\", \"summarization\")\nlogger = logging.getLogger(__name__)\ndef _generate_text(\n pipeline: Any,\n prompt: str,\n *args: Any,\n stop: Optional[List[str]] = None,\n **kwargs: Any,\n) -> str:\n \"\"\"Inference function to send to the remote hardware.\n Accepts a Hugging Face pipeline (or more likely,\n a key pointing to such a pipeline on the cluster's object store)\n and returns generated text.\n \"\"\"\n response = pipeline(prompt, *args, **kwargs)\n if pipeline.task == \"text-generation\":\n # Text generation return includes the starter text.\n text = response[0][\"generated_text\"][len(prompt) :]\n elif pipeline.task == \"text2text-generation\":\n text = response[0][\"generated_text\"]\n elif pipeline.task == \"summarization\":\n text = response[0][\"summary_text\"]\n else:\n raise ValueError(\n f\"Got invalid task {pipeline.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/self_hosted_hugging_face.html"}
+{"id": "c5b0cfb8e98a-1", "text": "text = enforce_stop_tokens(text, stop)\n return text\ndef _load_transformer(\n model_id: str = DEFAULT_MODEL_ID,\n task: str = DEFAULT_TASK,\n device: int = 0,\n model_kwargs: Optional[dict] = None,\n) -> Any:\n \"\"\"Inference function to send to the remote hardware.\n Accepts a huggingface model_id and returns a pipeline for the task.\n \"\"\"\n from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer\n from transformers import pipeline as hf_pipeline\n _model_kwargs = model_kwargs or {}\n tokenizer = AutoTokenizer.from_pretrained(model_id, **_model_kwargs)\n try:\n if task == \"text-generation\":\n model = AutoModelForCausalLM.from_pretrained(model_id, **_model_kwargs)\n elif task in (\"text2text-generation\", \"summarization\"):\n model = AutoModelForSeq2SeqLM.from_pretrained(model_id, **_model_kwargs)\n else:\n raise ValueError(\n f\"Got invalid task {task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n except ImportError as e:\n raise ValueError(\n f\"Could not load the {task} model due to missing dependencies.\"\n ) from e\n if importlib.util.find_spec(\"torch\") is not None:\n import torch\n cuda_device_count = torch.cuda.device_count()\n if device < -1 or (device >= cuda_device_count):\n raise ValueError(\n f\"Got device=={device}, \"\n f\"device is required to be within [-1, {cuda_device_count})\"\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/self_hosted_hugging_face.html"}
+{"id": "c5b0cfb8e98a-2", "text": ")\n if device < 0 and cuda_device_count > 0:\n logger.warning(\n \"Device has %d GPUs available. \"\n \"Provide device={deviceId} to `from_model_id` to use available\"\n \"GPUs for execution. deviceId is -1 for CPU and \"\n \"can be a positive integer associated with CUDA device id.\",\n cuda_device_count,\n )\n pipeline = hf_pipeline(\n task=task,\n model=model,\n tokenizer=tokenizer,\n device=device,\n model_kwargs=_model_kwargs,\n )\n if pipeline.task not in VALID_TASKS:\n raise ValueError(\n f\"Got invalid task {pipeline.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n return pipeline\n[docs]class SelfHostedHuggingFaceLLM(SelfHostedPipeline):\n \"\"\"Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.\n Supported hardware includes auto-launched instances on AWS, GCP, Azure,\n and Lambda, as well as servers specified\n by IP address and SSH credentials (such as on-prem, or another cloud\n like Paperspace, Coreweave, etc.).\n To use, you should have the ``runhouse`` python package installed.\n Only supports `text-generation`, `text2text-generation` and `summarization` for now.\n Example using from_model_id:\n .. code-block:: python\n from langchain.llms import SelfHostedHuggingFaceLLM\n import runhouse as rh\n gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n hf = SelfHostedHuggingFaceLLM(", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/self_hosted_hugging_face.html"}
+{"id": "c5b0cfb8e98a-3", "text": "hf = SelfHostedHuggingFaceLLM(\n model_id=\"google/flan-t5-large\", task=\"text2text-generation\",\n hardware=gpu\n )\n Example passing fn that generates a pipeline (bc the pipeline is not serializable):\n .. code-block:: python\n from langchain.llms import SelfHostedHuggingFaceLLM\n from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n import runhouse as rh\n def get_pipeline():\n model_id = \"gpt2\"\n tokenizer = AutoTokenizer.from_pretrained(model_id)\n model = AutoModelForCausalLM.from_pretrained(model_id)\n pipe = pipeline(\n \"text-generation\", model=model, tokenizer=tokenizer\n )\n return pipe\n hf = SelfHostedHuggingFaceLLM(\n model_load_fn=get_pipeline, model_id=\"gpt2\", hardware=gpu)\n \"\"\"\n model_id: str = DEFAULT_MODEL_ID\n \"\"\"Hugging Face model_id to load the model.\"\"\"\n task: str = DEFAULT_TASK\n \"\"\"Hugging Face task (\"text-generation\", \"text2text-generation\" or\n \"summarization\").\"\"\"\n device: int = 0\n \"\"\"Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n hardware: Any\n \"\"\"Remote hardware to send the inference function to.\"\"\"\n model_reqs: List[str] = [\"./\", \"transformers\", \"torch\"]\n \"\"\"Requirements to install on hardware to inference the model.\"\"\"\n model_load_fn: Callable = _load_transformer\n \"\"\"Function to load the model remotely on the server.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/self_hosted_hugging_face.html"}
+{"id": "c5b0cfb8e98a-4", "text": "\"\"\"Function to load the model remotely on the server.\"\"\"\n inference_fn: Callable = _generate_text #: :meta private:\n \"\"\"Inference function to send to the remote hardware.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def __init__(self, **kwargs: Any):\n \"\"\"Construct the pipeline remotely using an auxiliary function.\n The load function needs to be importable to be imported\n and run on the server, i.e. in a module and not a REPL or closure.\n Then, initialize the remote inference function.\n \"\"\"\n load_fn_kwargs = {\n \"model_id\": kwargs.get(\"model_id\", DEFAULT_MODEL_ID),\n \"task\": kwargs.get(\"task\", DEFAULT_TASK),\n \"device\": kwargs.get(\"device\", 0),\n \"model_kwargs\": kwargs.get(\"model_kwargs\", None),\n }\n super().__init__(load_fn_kwargs=load_fn_kwargs, **kwargs)\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"model_id\": self.model_id},\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n return \"selfhosted_huggingface_pipeline\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n return self.client(pipeline=self.pipeline_ref, prompt=prompt, stop=stop)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/self_hosted_hugging_face.html"}
+{"id": "c5b0cfb8e98a-5", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/self_hosted_hugging_face.html"}
+{"id": "d81ab44052e2-0", "text": "Source code for langchain.llms.gooseai\n\"\"\"Wrapper around GooseAI API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class GooseAI(LLM):\n \"\"\"Wrapper around OpenAI large language models.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``GOOSEAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.llms import GooseAI\n gooseai = GooseAI(model_name=\"gpt-neo-20b\")\n \"\"\"\n client: Any\n model_name: str = \"gpt-neo-20b\"\n \"\"\"Model name to use\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use\"\"\"\n max_tokens: int = 256\n \"\"\"The maximum number of tokens to generate in the completion.\n -1 returns as many tokens as possible given the prompt and\n the models maximal context size.\"\"\"\n top_p: float = 1\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n min_tokens: int = 1\n \"\"\"The minimum number of tokens to generate in the completion.\"\"\"\n frequency_penalty: float = 0\n \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n presence_penalty: float = 0\n \"\"\"Penalizes repeated tokens.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/gooseai.html"}
+{"id": "d81ab44052e2-1", "text": "presence_penalty: float = 0\n \"\"\"Penalizes repeated tokens.\"\"\"\n n: int = 1\n \"\"\"How many completions to generate for each prompt.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not explicitly specified.\"\"\"\n logit_bias: Optional[Dict[str, float]] = Field(default_factory=dict)\n \"\"\"Adjust the probability of specific tokens being generated.\"\"\"\n gooseai_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.ignore\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"WARNING! {field_name} is not default parameter.\n {field_name} was transfered to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n gooseai_api_key = get_from_dict_or_env(\n values, \"gooseai_api_key\", \"GOOSEAI_API_KEY\"\n )\n try:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/gooseai.html"}
+{"id": "d81ab44052e2-2", "text": ")\n try:\n import openai\n openai.api_key = gooseai_api_key\n openai.api_base = \"https://api.goose.ai/v1\"\n values[\"client\"] = openai.Completion\n except ImportError:\n raise ImportError(\n \"Could not import openai python package. \"\n \"Please install it with `pip install openai`.\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling GooseAI API.\"\"\"\n normal_params = {\n \"temperature\": self.temperature,\n \"max_tokens\": self.max_tokens,\n \"top_p\": self.top_p,\n \"min_tokens\": self.min_tokens,\n \"frequency_penalty\": self.frequency_penalty,\n \"presence_penalty\": self.presence_penalty,\n \"n\": self.n,\n \"logit_bias\": self.logit_bias,\n }\n return {**normal_params, **self.model_kwargs}\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_name\": self.model_name}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"gooseai\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call the GooseAI API.\"\"\"\n params = self._default_params\n if stop is not None:\n if \"stop\" in params:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/gooseai.html"}
+{"id": "d81ab44052e2-3", "text": "if stop is not None:\n if \"stop\" in params:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params[\"stop\"] = stop\n response = self.client.create(engine=self.model_name, prompt=prompt, **params)\n text = response.choices[0].text\n return text\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/gooseai.html"}
+{"id": "3663bc7e25af-0", "text": "Source code for langchain.llms.bananadev\n\"\"\"Wrapper around Banana API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class Banana(LLM):\n \"\"\"Wrapper around Banana large language models.\n To use, you should have the ``banana-dev`` python package installed,\n and the environment variable ``BANANA_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.llms import Banana\n banana = Banana(model_key=\"\")\n \"\"\"\n model_key: str = \"\"\n \"\"\"model endpoint to use\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not\n explicitly specified.\"\"\"\n banana_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/bananadev.html"}
+{"id": "3663bc7e25af-1", "text": "if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transfered to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n banana_api_key = get_from_dict_or_env(\n values, \"banana_api_key\", \"BANANA_API_KEY\"\n )\n values[\"banana_api_key\"] = banana_api_key\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"model_key\": self.model_key},\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"banana\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call to Banana endpoint.\"\"\"\n try:\n import banana_dev as banana\n except ImportError:\n raise ImportError(\n \"Could not import banana-dev python package. \"\n \"Please install it with `pip install banana-dev`.\"\n )\n params = self.model_kwargs or {}\n api_key = self.banana_api_key\n model_key = self.model_key", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/bananadev.html"}
+{"id": "3663bc7e25af-2", "text": "api_key = self.banana_api_key\n model_key = self.model_key\n model_inputs = {\n # a json specific to your model.\n \"prompt\": prompt,\n **params,\n }\n response = banana.run(api_key, model_key, model_inputs)\n try:\n text = response[\"modelOutputs\"][0][\"output\"]\n except (KeyError, TypeError):\n returned = response[\"modelOutputs\"][0]\n raise ValueError(\n \"Response should be of schema: {'output': 'text'}.\"\n f\"\\nResponse was: {returned}\"\n \"\\nTo fix this:\"\n \"\\n- fork the source repo of the Banana model\"\n \"\\n- modify app.py to return the above schema\"\n \"\\n- deploy that as a custom repo\"\n )\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/bananadev.html"}
+{"id": "dd11d18c74c1-0", "text": "Source code for langchain.llms.cerebriumai\n\"\"\"Wrapper around CerebriumAI API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class CerebriumAI(LLM):\n \"\"\"Wrapper around CerebriumAI large language models.\n To use, you should have the ``cerebrium`` python package installed, and the\n environment variable ``CEREBRIUMAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.llms import CerebriumAI\n cerebrium = CerebriumAI(endpoint_url=\"\")\n \"\"\"\n endpoint_url: str = \"\"\n \"\"\"model endpoint to use\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not\n explicitly specified.\"\"\"\n cerebriumai_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/cerebriumai.html"}
+{"id": "dd11d18c74c1-1", "text": "all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transfered to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n cerebriumai_api_key = get_from_dict_or_env(\n values, \"cerebriumai_api_key\", \"CEREBRIUMAI_API_KEY\"\n )\n values[\"cerebriumai_api_key\"] = cerebriumai_api_key\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"endpoint_url\": self.endpoint_url},\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"cerebriumai\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call to CerebriumAI endpoint.\"\"\"\n try:\n from cerebrium import model_api_request", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/cerebriumai.html"}
+{"id": "dd11d18c74c1-2", "text": "try:\n from cerebrium import model_api_request\n except ImportError:\n raise ValueError(\n \"Could not import cerebrium python package. \"\n \"Please install it with `pip install cerebrium`.\"\n )\n params = self.model_kwargs or {}\n response = model_api_request(\n self.endpoint_url, {\"prompt\": prompt, **params}, self.cerebriumai_api_key\n )\n text = response[\"data\"][\"result\"]\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/cerebriumai.html"}
+{"id": "84561d26e7df-0", "text": "Source code for langchain.llms.huggingface_hub\n\"\"\"Wrapper around HuggingFace APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nDEFAULT_REPO_ID = \"gpt2\"\nVALID_TASKS = (\"text2text-generation\", \"text-generation\", \"summarization\")\n[docs]class HuggingFaceHub(LLM):\n \"\"\"Wrapper around HuggingFaceHub models.\n To use, you should have the ``huggingface_hub`` python package installed, and the\n environment variable ``HUGGINGFACEHUB_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Only supports `text-generation`, `text2text-generation` and `summarization` for now.\n Example:\n .. code-block:: python\n from langchain.llms import HuggingFaceHub\n hf = HuggingFaceHub(repo_id=\"gpt2\", huggingfacehub_api_token=\"my-api-key\")\n \"\"\"\n client: Any #: :meta private:\n repo_id: str = DEFAULT_REPO_ID\n \"\"\"Model name to use.\"\"\"\n task: Optional[str] = None\n \"\"\"Task to call the model with.\n Should be a task that returns `generated_text` or `summary_text`.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n huggingfacehub_api_token: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_hub.html"}
+{"id": "84561d26e7df-1", "text": "\"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n huggingfacehub_api_token = get_from_dict_or_env(\n values, \"huggingfacehub_api_token\", \"HUGGINGFACEHUB_API_TOKEN\"\n )\n try:\n from huggingface_hub.inference_api import InferenceApi\n repo_id = values[\"repo_id\"]\n client = InferenceApi(\n repo_id=repo_id,\n token=huggingfacehub_api_token,\n task=values.get(\"task\"),\n )\n if client.task not in VALID_TASKS:\n raise ValueError(\n f\"Got invalid task {client.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n values[\"client\"] = client\n except ImportError:\n raise ValueError(\n \"Could not import huggingface_hub python package. \"\n \"Please install it with `pip install huggingface_hub`.\"\n )\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"repo_id\": self.repo_id, \"task\": self.task},\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"huggingface_hub\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_hub.html"}
+{"id": "84561d26e7df-2", "text": "prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call out to HuggingFace Hub's inference endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = hf(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n response = self.client(inputs=prompt, params=_model_kwargs)\n if \"error\" in response:\n raise ValueError(f\"Error raised by inference API: {response['error']}\")\n if self.client.task == \"text-generation\":\n # Text generation return includes the starter text.\n text = response[0][\"generated_text\"][len(prompt) :]\n elif self.client.task == \"text2text-generation\":\n text = response[0][\"generated_text\"]\n elif self.client.task == \"summarization\":\n text = response[0][\"summary_text\"]\n else:\n raise ValueError(\n f\"Got invalid task {self.client.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n if stop is not None:\n # This is a bit hacky, but I can't figure out a better way to enforce\n # stop tokens when making calls to huggingface_hub.\n text = enforce_stop_tokens(text, stop)\n return text\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_hub.html"}
+{"id": "fe9e6c109be7-0", "text": "Source code for langchain.llms.promptlayer_openai\n\"\"\"PromptLayer wrapper.\"\"\"\nimport datetime\nfrom typing import List, Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms import OpenAI, OpenAIChat\nfrom langchain.schema import LLMResult\n[docs]class PromptLayerOpenAI(OpenAI):\n \"\"\"Wrapper around OpenAI large language models.\n To use, you should have the ``openai`` and ``promptlayer`` python\n package installed, and the environment variable ``OPENAI_API_KEY``\n and ``PROMPTLAYER_API_KEY`` set with your openAI API key and\n promptlayer key respectively.\n All parameters that can be passed to the OpenAI LLM can also\n be passed here. The PromptLayerOpenAI LLM adds two optional\n parameters:\n ``pl_tags``: List of strings to tag the request with.\n ``return_pl_id``: If True, the PromptLayer request ID will be\n returned in the ``generation_info`` field of the\n ``Generation`` object.\n Example:\n .. code-block:: python\n from langchain.llms import PromptLayerOpenAI\n openai = PromptLayerOpenAI(model_name=\"text-davinci-003\")\n \"\"\"\n pl_tags: Optional[List[str]]\n return_pl_id: Optional[bool] = False\n def _generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> LLMResult:\n \"\"\"Call OpenAI generate and then call PromptLayer API to log the request.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/promptlayer_openai.html"}
+{"id": "fe9e6c109be7-1", "text": "\"\"\"Call OpenAI generate and then call PromptLayer API to log the request.\"\"\"\n from promptlayer.utils import get_api_key, promptlayer_api_request\n request_start_time = datetime.datetime.now().timestamp()\n generated_responses = super()._generate(prompts, stop, run_manager)\n request_end_time = datetime.datetime.now().timestamp()\n for i in range(len(prompts)):\n prompt = prompts[i]\n generation = generated_responses.generations[i][0]\n resp = {\n \"text\": generation.text,\n \"llm_output\": generated_responses.llm_output,\n }\n pl_request_id = promptlayer_api_request(\n \"langchain.PromptLayerOpenAI\",\n \"langchain\",\n [prompt],\n self._identifying_params,\n self.pl_tags,\n resp,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None or not isinstance(\n generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses\n async def _agenerate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n ) -> LLMResult:\n from promptlayer.utils import get_api_key, promptlayer_api_request_async\n request_start_time = datetime.datetime.now().timestamp()\n generated_responses = await super()._agenerate(prompts, stop, run_manager)\n request_end_time = datetime.datetime.now().timestamp()\n for i in range(len(prompts)):", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/promptlayer_openai.html"}
+{"id": "fe9e6c109be7-2", "text": "for i in range(len(prompts)):\n prompt = prompts[i]\n generation = generated_responses.generations[i][0]\n resp = {\n \"text\": generation.text,\n \"llm_output\": generated_responses.llm_output,\n }\n pl_request_id = await promptlayer_api_request_async(\n \"langchain.PromptLayerOpenAI.async\",\n \"langchain\",\n [prompt],\n self._identifying_params,\n self.pl_tags,\n resp,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None or not isinstance(\n generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses\n[docs]class PromptLayerOpenAIChat(OpenAIChat):\n \"\"\"Wrapper around OpenAI large language models.\n To use, you should have the ``openai`` and ``promptlayer`` python\n package installed, and the environment variable ``OPENAI_API_KEY``\n and ``PROMPTLAYER_API_KEY`` set with your openAI API key and\n promptlayer key respectively.\n All parameters that can be passed to the OpenAIChat LLM can also\n be passed here. The PromptLayerOpenAIChat adds two optional\n parameters:\n ``pl_tags``: List of strings to tag the request with.\n ``return_pl_id``: If True, the PromptLayer request ID will be\n returned in the ``generation_info`` field of the\n ``Generation`` object.\n Example:\n .. code-block:: python", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/promptlayer_openai.html"}
+{"id": "fe9e6c109be7-3", "text": "``Generation`` object.\n Example:\n .. code-block:: python\n from langchain.llms import PromptLayerOpenAIChat\n openaichat = PromptLayerOpenAIChat(model_name=\"gpt-3.5-turbo\")\n \"\"\"\n pl_tags: Optional[List[str]]\n return_pl_id: Optional[bool] = False\n def _generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> LLMResult:\n \"\"\"Call OpenAI generate and then call PromptLayer API to log the request.\"\"\"\n from promptlayer.utils import get_api_key, promptlayer_api_request\n request_start_time = datetime.datetime.now().timestamp()\n generated_responses = super()._generate(prompts, stop, run_manager)\n request_end_time = datetime.datetime.now().timestamp()\n for i in range(len(prompts)):\n prompt = prompts[i]\n generation = generated_responses.generations[i][0]\n resp = {\n \"text\": generation.text,\n \"llm_output\": generated_responses.llm_output,\n }\n pl_request_id = promptlayer_api_request(\n \"langchain.PromptLayerOpenAIChat\",\n \"langchain\",\n [prompt],\n self._identifying_params,\n self.pl_tags,\n resp,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None or not isinstance(\n generation.generation_info, dict\n ):\n generation.generation_info = {}", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/promptlayer_openai.html"}
+{"id": "fe9e6c109be7-4", "text": "generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses\n async def _agenerate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n ) -> LLMResult:\n from promptlayer.utils import get_api_key, promptlayer_api_request_async\n request_start_time = datetime.datetime.now().timestamp()\n generated_responses = await super()._agenerate(prompts, stop, run_manager)\n request_end_time = datetime.datetime.now().timestamp()\n for i in range(len(prompts)):\n prompt = prompts[i]\n generation = generated_responses.generations[i][0]\n resp = {\n \"text\": generation.text,\n \"llm_output\": generated_responses.llm_output,\n }\n pl_request_id = await promptlayer_api_request_async(\n \"langchain.PromptLayerOpenAIChat.async\",\n \"langchain\",\n [prompt],\n self._identifying_params,\n self.pl_tags,\n resp,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None or not isinstance(\n generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/promptlayer_openai.html"}
+{"id": "f7c678a95cb6-0", "text": "Source code for langchain.llms.aleph_alpha\n\"\"\"Wrapper around Aleph Alpha APIs.\"\"\"\nfrom typing import Any, Dict, List, Optional, Sequence\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\n[docs]class AlephAlpha(LLM):\n \"\"\"Wrapper around Aleph Alpha large language models.\n To use, you should have the ``aleph_alpha_client`` python package installed, and the\n environment variable ``ALEPH_ALPHA_API_KEY`` set with your API key, or pass\n it as a named parameter to the constructor.\n Parameters are explained more in depth here:\n https://github.com/Aleph-Alpha/aleph-alpha-client/blob/c14b7dd2b4325c7da0d6a119f6e76385800e097b/aleph_alpha_client/completion.py#L10\n Example:\n .. code-block:: python\n from langchain.llms import AlephAlpha\n alpeh_alpha = AlephAlpha(aleph_alpha_api_key=\"my-api-key\")\n \"\"\"\n client: Any #: :meta private:\n model: Optional[str] = \"luminous-base\"\n \"\"\"Model name to use.\"\"\"\n maximum_tokens: int = 64\n \"\"\"The maximum number of tokens to be generated.\"\"\"\n temperature: float = 0.0\n \"\"\"A non-negative float that tunes the degree of randomness in generation.\"\"\"\n top_k: int = 0\n \"\"\"Number of most likely tokens to consider at each step.\"\"\"\n top_p: float = 0.0\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"}
+{"id": "f7c678a95cb6-1", "text": "\"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n presence_penalty: float = 0.0\n \"\"\"Penalizes repeated tokens.\"\"\"\n frequency_penalty: float = 0.0\n \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n repetition_penalties_include_prompt: Optional[bool] = False\n \"\"\"Flag deciding whether presence penalty or frequency penalty are\n updated from the prompt.\"\"\"\n use_multiplicative_presence_penalty: Optional[bool] = False\n \"\"\"Flag deciding whether presence penalty is applied\n multiplicatively (True) or additively (False).\"\"\"\n penalty_bias: Optional[str] = None\n \"\"\"Penalty bias for the completion.\"\"\"\n penalty_exceptions: Optional[List[str]] = None\n \"\"\"List of strings that may be generated without penalty,\n regardless of other penalty settings\"\"\"\n penalty_exceptions_include_stop_sequences: Optional[bool] = None\n \"\"\"Should stop_sequences be included in penalty_exceptions.\"\"\"\n best_of: Optional[int] = None\n \"\"\"returns the one with the \"best of\" results\n (highest log probability per token)\n \"\"\"\n n: int = 1\n \"\"\"How many completions to generate for each prompt.\"\"\"\n logit_bias: Optional[Dict[int, float]] = None\n \"\"\"The logit bias allows to influence the likelihood of generating tokens.\"\"\"\n log_probs: Optional[int] = None\n \"\"\"Number of top log probabilities to be returned for each generated token.\"\"\"\n tokens: Optional[bool] = False\n \"\"\"return tokens of completion.\"\"\"\n disable_optimizations: Optional[bool] = False\n minimum_tokens: Optional[int] = 0\n \"\"\"Generate at least this number of tokens.\"\"\"\n echo: bool = False\n \"\"\"Echo the prompt in the completion.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"}
+{"id": "f7c678a95cb6-2", "text": "echo: bool = False\n \"\"\"Echo the prompt in the completion.\"\"\"\n use_multiplicative_frequency_penalty: bool = False\n sequence_penalty: float = 0.0\n sequence_penalty_min_length: int = 2\n use_multiplicative_sequence_penalty: bool = False\n completion_bias_inclusion: Optional[Sequence[str]] = None\n completion_bias_inclusion_first_token_only: bool = False\n completion_bias_exclusion: Optional[Sequence[str]] = None\n completion_bias_exclusion_first_token_only: bool = False\n \"\"\"Only consider the first token for the completion_bias_exclusion.\"\"\"\n contextual_control_threshold: Optional[float] = None\n \"\"\"If set to None, attention control parameters only apply to those tokens that have\n explicitly been set in the request.\n If set to a non-None value, control parameters are also applied to similar tokens.\n \"\"\"\n control_log_additive: Optional[bool] = True\n \"\"\"True: apply control by adding the log(control_factor) to attention scores.\n False: (attention_scores - - attention_scores.min(-1)) * control_factor\n \"\"\"\n repetition_penalties_include_completion: bool = True\n \"\"\"Flag deciding whether presence penalty or frequency penalty\n are updated from the completion.\"\"\"\n raw_completion: bool = False\n \"\"\"Force the raw completion of the model to be returned.\"\"\"\n aleph_alpha_api_key: Optional[str] = None\n \"\"\"API key for Aleph Alpha API.\"\"\"\n stop_sequences: Optional[List[str]] = None\n \"\"\"Stop sequences to use.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"}
+{"id": "f7c678a95cb6-3", "text": "\"\"\"Validate that api key and python package exists in environment.\"\"\"\n aleph_alpha_api_key = get_from_dict_or_env(\n values, \"aleph_alpha_api_key\", \"ALEPH_ALPHA_API_KEY\"\n )\n try:\n import aleph_alpha_client\n values[\"client\"] = aleph_alpha_client.Client(token=aleph_alpha_api_key)\n except ImportError:\n raise ImportError(\n \"Could not import aleph_alpha_client python package. \"\n \"Please install it with `pip install aleph_alpha_client`.\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling the Aleph Alpha API.\"\"\"\n return {\n \"maximum_tokens\": self.maximum_tokens,\n \"temperature\": self.temperature,\n \"top_k\": self.top_k,\n \"top_p\": self.top_p,\n \"presence_penalty\": self.presence_penalty,\n \"frequency_penalty\": self.frequency_penalty,\n \"n\": self.n,\n \"repetition_penalties_include_prompt\": self.repetition_penalties_include_prompt, # noqa: E501\n \"use_multiplicative_presence_penalty\": self.use_multiplicative_presence_penalty, # noqa: E501\n \"penalty_bias\": self.penalty_bias,\n \"penalty_exceptions\": self.penalty_exceptions,\n \"penalty_exceptions_include_stop_sequences\": self.penalty_exceptions_include_stop_sequences, # noqa: E501\n \"best_of\": self.best_of,\n \"logit_bias\": self.logit_bias,\n \"log_probs\": self.log_probs,\n \"tokens\": self.tokens,\n \"disable_optimizations\": self.disable_optimizations,\n \"minimum_tokens\": self.minimum_tokens,\n \"echo\": self.echo,", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"}
+{"id": "f7c678a95cb6-4", "text": "\"minimum_tokens\": self.minimum_tokens,\n \"echo\": self.echo,\n \"use_multiplicative_frequency_penalty\": self.use_multiplicative_frequency_penalty, # noqa: E501\n \"sequence_penalty\": self.sequence_penalty,\n \"sequence_penalty_min_length\": self.sequence_penalty_min_length,\n \"use_multiplicative_sequence_penalty\": self.use_multiplicative_sequence_penalty, # noqa: E501\n \"completion_bias_inclusion\": self.completion_bias_inclusion,\n \"completion_bias_inclusion_first_token_only\": self.completion_bias_inclusion_first_token_only, # noqa: E501\n \"completion_bias_exclusion\": self.completion_bias_exclusion,\n \"completion_bias_exclusion_first_token_only\": self.completion_bias_exclusion_first_token_only, # noqa: E501\n \"contextual_control_threshold\": self.contextual_control_threshold,\n \"control_log_additive\": self.control_log_additive,\n \"repetition_penalties_include_completion\": self.repetition_penalties_include_completion, # noqa: E501\n \"raw_completion\": self.raw_completion,\n }\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model\": self.model}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"alpeh_alpha\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call out to Aleph Alpha's completion endpoint.\n Args:\n prompt: The prompt to pass into the model.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"}
+{"id": "f7c678a95cb6-5", "text": "Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = alpeh_alpha(\"Tell me a joke.\")\n \"\"\"\n from aleph_alpha_client import CompletionRequest, Prompt\n params = self._default_params\n if self.stop_sequences is not None and stop is not None:\n raise ValueError(\n \"stop sequences found in both the input and default params.\"\n )\n elif self.stop_sequences is not None:\n params[\"stop_sequences\"] = self.stop_sequences\n else:\n params[\"stop_sequences\"] = stop\n request = CompletionRequest(prompt=Prompt.from_text(prompt), **params)\n response = self.client.complete(model=self.model, request=request)\n text = response.completions[0].completion\n # If stop tokens are provided, Aleph Alpha's endpoint returns them.\n # In order to make this consistent with other endpoints, we strip them.\n if stop is not None or self.stop_sequences is not None:\n text = enforce_stop_tokens(text, params[\"stop_sequences\"])\n return text\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"}
+{"id": "957571529765-0", "text": "Source code for langchain.llms.stochasticai\n\"\"\"Wrapper around StochasticAI APIs.\"\"\"\nimport logging\nimport time\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class StochasticAI(LLM):\n \"\"\"Wrapper around StochasticAI large language models.\n To use, you should have the environment variable ``STOCHASTICAI_API_KEY``\n set with your API key.\n Example:\n .. code-block:: python\n from langchain.llms import StochasticAI\n stochasticai = StochasticAI(api_url=\"\")\n \"\"\"\n api_url: str = \"\"\n \"\"\"Model name to use.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not\n explicitly specified.\"\"\"\n stochasticai_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/stochasticai.html"}
+{"id": "957571529765-1", "text": "raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transfered to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key exists in environment.\"\"\"\n stochasticai_api_key = get_from_dict_or_env(\n values, \"stochasticai_api_key\", \"STOCHASTICAI_API_KEY\"\n )\n values[\"stochasticai_api_key\"] = stochasticai_api_key\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"endpoint_url\": self.api_url},\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"stochasticai\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call out to StochasticAI's complete endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = StochasticAI(\"Tell me a joke.\")\n \"\"\"\n params = self.model_kwargs or {}", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/stochasticai.html"}
+{"id": "957571529765-2", "text": "\"\"\"\n params = self.model_kwargs or {}\n response_post = requests.post(\n url=self.api_url,\n json={\"prompt\": prompt, \"params\": params},\n headers={\n \"apiKey\": f\"{self.stochasticai_api_key}\",\n \"Accept\": \"application/json\",\n \"Content-Type\": \"application/json\",\n },\n )\n response_post.raise_for_status()\n response_post_json = response_post.json()\n completed = False\n while not completed:\n response_get = requests.get(\n url=response_post_json[\"data\"][\"responseUrl\"],\n headers={\n \"apiKey\": f\"{self.stochasticai_api_key}\",\n \"Accept\": \"application/json\",\n \"Content-Type\": \"application/json\",\n },\n )\n response_get.raise_for_status()\n response_get_json = response_get.json()[\"data\"]\n text = response_get_json.get(\"completion\")\n completed = text is not None\n time.sleep(0.5)\n text = text[0]\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/stochasticai.html"}
+{"id": "21dde289d2bc-0", "text": "Source code for langchain.llms.databricks\nimport os\nfrom abc import ABC, abstractmethod\nfrom typing import Any, Callable, Dict, List, Optional\nimport requests\nfrom pydantic import BaseModel, Extra, Field, PrivateAttr, root_validator, validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\n__all__ = [\"Databricks\"]\nclass _DatabricksClientBase(BaseModel, ABC):\n \"\"\"A base JSON API client that talks to Databricks.\"\"\"\n api_url: str\n api_token: str\n def post_raw(self, request: Any) -> Any:\n headers = {\"Authorization\": f\"Bearer {self.api_token}\"}\n response = requests.post(self.api_url, headers=headers, json=request)\n # TODO: error handling and automatic retries\n if not response.ok:\n raise ValueError(f\"HTTP {response.status_code} error: {response.text}\")\n return response.json()\n @abstractmethod\n def post(self, request: Any) -> Any:\n ...\nclass _DatabricksServingEndpointClient(_DatabricksClientBase):\n \"\"\"An API client that talks to a Databricks serving endpoint.\"\"\"\n host: str\n endpoint_name: str\n @root_validator(pre=True)\n def set_api_url(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n if \"api_url\" not in values:\n host = values[\"host\"]\n endpoint_name = values[\"endpoint_name\"]\n api_url = f\"https://{host}/serving-endpoints/{endpoint_name}/invocations\"\n values[\"api_url\"] = api_url\n return values\n def post(self, request: Any) -> Any:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"}
+{"id": "21dde289d2bc-1", "text": "return values\n def post(self, request: Any) -> Any:\n # See https://docs.databricks.com/machine-learning/model-serving/score-model-serving-endpoints.html\n wrapped_request = {\"dataframe_records\": [request]}\n response = self.post_raw(wrapped_request)[\"predictions\"]\n # For a single-record query, the result is not a list.\n if isinstance(response, list):\n response = response[0]\n return response\nclass _DatabricksClusterDriverProxyClient(_DatabricksClientBase):\n \"\"\"An API client that talks to a Databricks cluster driver proxy app.\"\"\"\n host: str\n cluster_id: str\n cluster_driver_port: str\n @root_validator(pre=True)\n def set_api_url(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n if \"api_url\" not in values:\n host = values[\"host\"]\n cluster_id = values[\"cluster_id\"]\n port = values[\"cluster_driver_port\"]\n api_url = f\"https://{host}/driver-proxy-api/o/0/{cluster_id}/{port}\"\n values[\"api_url\"] = api_url\n return values\n def post(self, request: Any) -> Any:\n return self.post_raw(request)\ndef get_repl_context() -> Any:\n \"\"\"Gets the notebook REPL context if running inside a Databricks notebook.\n Returns None otherwise.\n \"\"\"\n try:\n from dbruntime.databricks_repl_context import get_context\n return get_context()\n except ImportError:\n raise ValueError(\n \"Cannot access dbruntime, not running inside a Databricks notebook.\"\n )\ndef get_default_host() -> str:\n \"\"\"Gets the default Databricks workspace hostname.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"}
+{"id": "21dde289d2bc-2", "text": "\"\"\"Gets the default Databricks workspace hostname.\n Raises an error if the hostname cannot be automatically determined.\n \"\"\"\n host = os.getenv(\"DATABRICKS_HOST\")\n if not host:\n try:\n host = get_repl_context().browserHostName\n if not host:\n raise ValueError(\"context doesn't contain browserHostName.\")\n except Exception as e:\n raise ValueError(\n \"host was not set and cannot be automatically inferred. Set \"\n f\"environment variable 'DATABRICKS_HOST'. Received error: {e}\"\n )\n # TODO: support Databricks CLI profile\n host = host.lstrip(\"https://\").lstrip(\"http://\").rstrip(\"/\")\n return host\ndef get_default_api_token() -> str:\n \"\"\"Gets the default Databricks personal access token.\n Raises an error if the token cannot be automatically determined.\n \"\"\"\n if api_token := os.getenv(\"DATABRICKS_TOKEN\"):\n return api_token\n try:\n api_token = get_repl_context().apiToken\n if not api_token:\n raise ValueError(\"context doesn't contain apiToken.\")\n except Exception as e:\n raise ValueError(\n \"api_token was not set and cannot be automatically inferred. Set \"\n f\"environment variable 'DATABRICKS_TOKEN'. Received error: {e}\"\n )\n # TODO: support Databricks CLI profile\n return api_token\n[docs]class Databricks(LLM):\n \"\"\"LLM wrapper around a Databricks serving endpoint or a cluster driver proxy app.\n It supports two endpoint types:\n * **Serving endpoint** (recommended for both production and development).\n We assume that an LLM was registered and deployed to a serving endpoint.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"}
+{"id": "21dde289d2bc-3", "text": "We assume that an LLM was registered and deployed to a serving endpoint.\n To wrap it as an LLM you must have \"Can Query\" permission to the endpoint.\n Set ``endpoint_name`` accordingly and do not set ``cluster_id`` and\n ``cluster_driver_port``.\n The expected model signature is:\n * inputs::\n [{\"name\": \"prompt\", \"type\": \"string\"},\n {\"name\": \"stop\", \"type\": \"list[string]\"}]\n * outputs: ``[{\"type\": \"string\"}]``\n * **Cluster driver proxy app** (recommended for interactive development).\n One can load an LLM on a Databricks interactive cluster and start a local HTTP\n server on the driver node to serve the model at ``/`` using HTTP POST method\n with JSON input/output.\n Please use a port number between ``[3000, 8000]`` and let the server listen to\n the driver IP address or simply ``0.0.0.0`` instead of localhost only.\n To wrap it as an LLM you must have \"Can Attach To\" permission to the cluster.\n Set ``cluster_id`` and ``cluster_driver_port`` and do not set ``endpoint_name``.\n The expected server schema (using JSON schema) is:\n * inputs::\n {\"type\": \"object\",\n \"properties\": {\n \"prompt\": {\"type\": \"string\"},\n \"stop\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}}},\n \"required\": [\"prompt\"]}`\n * outputs: ``{\"type\": \"string\"}``\n If the endpoint model signature is different or you want to set extra params,\n you can use `transform_input_fn` and `transform_output_fn` to apply necessary", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"}
+{"id": "21dde289d2bc-4", "text": "you can use `transform_input_fn` and `transform_output_fn` to apply necessary\n transformations before and after the query.\n \"\"\"\n host: str = Field(default_factory=get_default_host)\n \"\"\"Databricks workspace hostname.\n If not provided, the default value is determined by\n * the ``DATABRICKS_HOST`` environment variable if present, or\n * the hostname of the current Databricks workspace if running inside\n a Databricks notebook attached to an interactive cluster in \"single user\"\n or \"no isolation shared\" mode.\n \"\"\"\n api_token: str = Field(default_factory=get_default_api_token)\n \"\"\"Databricks personal access token.\n If not provided, the default value is determined by\n * the ``DATABRICKS_TOKEN`` environment variable if present, or\n * an automatically generated temporary token if running inside a Databricks\n notebook attached to an interactive cluster in \"single user\" or\n \"no isolation shared\" mode.\n \"\"\"\n endpoint_name: Optional[str] = None\n \"\"\"Name of the model serving endpont.\n You must specify the endpoint name to connect to a model serving endpoint.\n You must not set both ``endpoint_name`` and ``cluster_id``.\n \"\"\"\n cluster_id: Optional[str] = None\n \"\"\"ID of the cluster if connecting to a cluster driver proxy app.\n If neither ``endpoint_name`` nor ``cluster_id`` is not provided and the code runs\n inside a Databricks notebook attached to an interactive cluster in \"single user\"\n or \"no isolation shared\" mode, the current cluster ID is used as default.\n You must not set both ``endpoint_name`` and ``cluster_id``.\n \"\"\"\n cluster_driver_port: Optional[str] = None", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"}
+{"id": "21dde289d2bc-5", "text": "\"\"\"\n cluster_driver_port: Optional[str] = None\n \"\"\"The port number used by the HTTP server running on the cluster driver node.\n The server should listen on the driver IP address or simply ``0.0.0.0`` to connect.\n We recommend the server using a port number between ``[3000, 8000]``.\n \"\"\"\n model_kwargs: Optional[Dict[str, Any]] = None\n \"\"\"Extra parameters to pass to the endpoint.\"\"\"\n transform_input_fn: Optional[Callable] = None\n \"\"\"A function that transforms ``{prompt, stop, **kwargs}`` into a JSON-compatible\n request object that the endpoint accepts.\n For example, you can apply a prompt template to the input prompt.\n \"\"\"\n transform_output_fn: Optional[Callable[..., str]] = None\n \"\"\"A function that transforms the output from the endpoint to the generated text.\n \"\"\"\n _client: _DatabricksClientBase = PrivateAttr()\n class Config:\n extra = Extra.forbid\n underscore_attrs_are_private = True\n @validator(\"cluster_id\", always=True)\n def set_cluster_id(cls, v: Any, values: Dict[str, Any]) -> Optional[str]:\n if v and values[\"endpoint_name\"]:\n raise ValueError(\"Cannot set both endpoint_name and cluster_id.\")\n elif values[\"endpoint_name\"]:\n return None\n elif v:\n return v\n else:\n try:\n if v := get_repl_context().clusterId:\n return v\n raise ValueError(\"Context doesn't contain clusterId.\")\n except Exception as e:\n raise ValueError(\n \"Neither endpoint_name nor cluster_id was set. \"\n \"And the cluster_id cannot be automatically determined. Received\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"}
+{"id": "21dde289d2bc-6", "text": "\"And the cluster_id cannot be automatically determined. Received\"\n f\" error: {e}\"\n )\n @validator(\"cluster_driver_port\", always=True)\n def set_cluster_driver_port(cls, v: Any, values: Dict[str, Any]) -> Optional[str]:\n if v and values[\"endpoint_name\"]:\n raise ValueError(\"Cannot set both endpoint_name and cluster_driver_port.\")\n elif values[\"endpoint_name\"]:\n return None\n elif v is None:\n raise ValueError(\n \"Must set cluster_driver_port to connect to a cluster driver.\"\n )\n elif int(v) <= 0:\n raise ValueError(f\"Invalid cluster_driver_port: {v}\")\n else:\n return v\n @validator(\"model_kwargs\", always=True)\n def set_model_kwargs(cls, v: Optional[Dict[str, Any]]) -> Optional[Dict[str, Any]]:\n if v:\n assert \"prompt\" not in v, \"model_kwargs must not contain key 'prompt'\"\n assert \"stop\" not in v, \"model_kwargs must not contain key 'stop'\"\n return v\n def __init__(self, **data: Any):\n super().__init__(**data)\n if self.endpoint_name:\n self._client = _DatabricksServingEndpointClient(\n host=self.host,\n api_token=self.api_token,\n endpoint_name=self.endpoint_name,\n )\n elif self.cluster_id and self.cluster_driver_port:\n self._client = _DatabricksClusterDriverProxyClient(\n host=self.host,\n api_token=self.api_token,\n cluster_id=self.cluster_id,\n cluster_driver_port=self.cluster_driver_port,\n )\n else:\n raise ValueError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"}
+{"id": "21dde289d2bc-7", "text": ")\n else:\n raise ValueError(\n \"Must specify either endpoint_name or cluster_id/cluster_driver_port.\"\n )\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"databricks\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Queries the LLM endpoint with the given prompt and stop sequence.\"\"\"\n # TODO: support callbacks\n request = {\"prompt\": prompt, \"stop\": stop}\n if self.model_kwargs:\n request.update(self.model_kwargs)\n if self.transform_input_fn:\n request = self.transform_input_fn(**request)\n response = self._client.post(request)\n if self.transform_output_fn:\n response = self.transform_output_fn(response)\n return response\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"}
+{"id": "5460f2ff577b-0", "text": "Source code for langchain.llms.fake\n\"\"\"Fake LLM wrapper for testing purposes.\"\"\"\nfrom typing import Any, List, Mapping, Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms.base import LLM\n[docs]class FakeListLLM(LLM):\n \"\"\"Fake LLM wrapper for testing purposes.\"\"\"\n responses: List\n i: int = 0\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"fake-list\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Return next response\"\"\"\n response = self.responses[self.i]\n self.i += 1\n return response\n async def _acall(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Return next response\"\"\"\n response = self.responses[self.i]\n self.i += 1\n return response\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n return {\"responses\": self.responses}\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/fake.html"}
+{"id": "54f028dac958-0", "text": "Source code for langchain.llms.cohere\n\"\"\"Wrapper around Cohere APIs.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Callable, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef _create_retry_decorator(llm: Cohere) -> Callable[[Any], Any]:\n import cohere\n min_seconds = 4\n max_seconds = 10\n # Wait 2^x * 1 second between each retry starting with\n # 4 seconds, then up to 10 seconds, then 10 seconds afterwards\n return retry(\n reraise=True,\n stop=stop_after_attempt(llm.max_retries),\n wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),\n retry=(retry_if_exception_type(cohere.error.CohereError)),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\ndef completion_with_retry(llm: Cohere, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the completion call.\"\"\"\n retry_decorator = _create_retry_decorator(llm)\n @retry_decorator\n def _completion_with_retry(**kwargs: Any) -> Any:\n return llm.client.generate(**kwargs)\n return _completion_with_retry(**kwargs)\n[docs]class Cohere(LLM):\n \"\"\"Wrapper around Cohere large language models.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/cohere.html"}
+{"id": "54f028dac958-1", "text": "\"\"\"Wrapper around Cohere large language models.\n To use, you should have the ``cohere`` python package installed, and the\n environment variable ``COHERE_API_KEY`` set with your API key, or pass\n it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.llms import Cohere\n cohere = Cohere(model=\"gptd-instruct-tft\", cohere_api_key=\"my-api-key\")\n \"\"\"\n client: Any #: :meta private:\n model: Optional[str] = None\n \"\"\"Model name to use.\"\"\"\n max_tokens: int = 256\n \"\"\"Denotes the number of tokens to predict per generation.\"\"\"\n temperature: float = 0.75\n \"\"\"A non-negative float that tunes the degree of randomness in generation.\"\"\"\n k: int = 0\n \"\"\"Number of most likely tokens to consider at each step.\"\"\"\n p: int = 1\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n frequency_penalty: float = 0.0\n \"\"\"Penalizes repeated tokens according to frequency. Between 0 and 1.\"\"\"\n presence_penalty: float = 0.0\n \"\"\"Penalizes repeated tokens. Between 0 and 1.\"\"\"\n truncate: Optional[str] = None\n \"\"\"Specify how the client handles inputs longer than the maximum token\n length: Truncate from START, END or NONE\"\"\"\n max_retries: int = 10\n \"\"\"Maximum number of retries to make when generating.\"\"\"\n cohere_api_key: Optional[str] = None\n stop: Optional[List[str]] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/cohere.html"}
+{"id": "54f028dac958-2", "text": "extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n cohere_api_key = get_from_dict_or_env(\n values, \"cohere_api_key\", \"COHERE_API_KEY\"\n )\n try:\n import cohere\n values[\"client\"] = cohere.Client(cohere_api_key)\n except ImportError:\n raise ImportError(\n \"Could not import cohere python package. \"\n \"Please install it with `pip install cohere`.\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling Cohere API.\"\"\"\n return {\n \"max_tokens\": self.max_tokens,\n \"temperature\": self.temperature,\n \"k\": self.k,\n \"p\": self.p,\n \"frequency_penalty\": self.frequency_penalty,\n \"presence_penalty\": self.presence_penalty,\n \"truncate\": self.truncate,\n }\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model\": self.model}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"cohere\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call out to Cohere's generate endpoint.\n Args:\n prompt: The prompt to pass into the model.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/cohere.html"}
+{"id": "54f028dac958-3", "text": "Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = cohere(\"Tell me a joke.\")\n \"\"\"\n params = self._default_params\n if self.stop is not None and stop is not None:\n raise ValueError(\"`stop` found in both the input and default params.\")\n elif self.stop is not None:\n params[\"stop_sequences\"] = self.stop\n else:\n params[\"stop_sequences\"] = stop\n response = completion_with_retry(\n self, model=self.model, prompt=prompt, **params\n )\n text = response.generations[0].text\n # If stop tokens are provided, Cohere's endpoint returns them.\n # In order to make this consistent with other endpoints, we strip them.\n if stop is not None or self.stop is not None:\n text = enforce_stop_tokens(text, params[\"stop_sequences\"])\n return text\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/cohere.html"}
+{"id": "fcda3b7ab57b-0", "text": "Source code for langchain.llms.vertexai\n\"\"\"Wrapper around Google VertexAI models.\"\"\"\nfrom typing import TYPE_CHECKING, Any, Dict, List, Optional\nfrom pydantic import BaseModel, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utilities.vertexai import (\n init_vertexai,\n raise_vertex_import_error,\n)\nif TYPE_CHECKING:\n from vertexai.language_models._language_models import _LanguageModel\nclass _VertexAICommon(BaseModel):\n client: \"_LanguageModel\" = None #: :meta private:\n model_name: str\n \"Model name to use.\"\n temperature: float = 0.0\n \"Sampling temperature, it controls the degree of randomness in token selection.\"\n max_output_tokens: int = 128\n \"Token limit determines the maximum amount of text output from one prompt.\"\n top_p: float = 0.95\n \"Tokens are selected from most probable to least until the sum of their \"\n \"probabilities equals the top-p value.\"\n top_k: int = 40\n \"How the model selects tokens for output, the next token is selected from \"\n \"among the top-k most probable tokens.\"\n stop: Optional[List[str]] = None\n \"Optional list of stop words to use when generating.\"\n project: Optional[str] = None\n \"The default GCP project to use when making Vertex API calls.\"\n location: str = \"us-central1\"\n \"The default location to use when making API calls.\"\n credentials: Any = None\n \"The default custom credentials (google.auth.credentials.Credentials) to use \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/vertexai.html"}
+{"id": "fcda3b7ab57b-1", "text": "\"The default custom credentials (google.auth.credentials.Credentials) to use \"\n \"when making API calls. If not provided, credentials will be ascertained from \"\n \"the environment.\"\n @property\n def _default_params(self) -> Dict[str, Any]:\n base_params = {\n \"temperature\": self.temperature,\n \"max_output_tokens\": self.max_output_tokens,\n \"top_k\": self.top_k,\n \"top_p\": self.top_p,\n }\n return {**base_params}\n def _predict(self, prompt: str, stop: Optional[List[str]] = None) -> str:\n res = self.client.predict(prompt, **self._default_params)\n return self._enforce_stop_words(res.text, stop)\n def _enforce_stop_words(self, text: str, stop: Optional[List[str]] = None) -> str:\n if stop is None and self.stop is not None:\n stop = self.stop\n if stop:\n return enforce_stop_tokens(text, stop)\n return text\n @property\n def _llm_type(self) -> str:\n return \"vertexai\"\n @classmethod\n def _try_init_vertexai(cls, values: Dict) -> None:\n allowed_params = [\"project\", \"location\", \"credentials\"]\n params = {k: v for k, v in values.items() if k in allowed_params}\n init_vertexai(**params)\n return None\n[docs]class VertexAI(_VertexAICommon, LLM):\n \"\"\"Wrapper around Google Vertex AI large language models.\"\"\"\n model_name: str = \"text-bison\"\n tuned_model_name: Optional[str] = None\n \"The name of a tuned model, if it's provided, model_name is ignored.\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/vertexai.html"}
+{"id": "fcda3b7ab57b-2", "text": "\"The name of a tuned model, if it's provided, model_name is ignored.\"\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in environment.\"\"\"\n cls._try_init_vertexai(values)\n try:\n from vertexai.preview.language_models import TextGenerationModel\n except ImportError:\n raise_vertex_import_error()\n tuned_model_name = values.get(\"tuned_model_name\")\n if tuned_model_name:\n values[\"client\"] = TextGenerationModel.get_tuned_model(tuned_model_name)\n else:\n values[\"client\"] = TextGenerationModel.from_pretrained(values[\"model_name\"])\n return values\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call Vertex model to get predictions based on the prompt.\n Args:\n prompt: The prompt to pass into the model.\n stop: A list of stop words (optional).\n run_manager: A Callbackmanager for LLM run, optional.\n Returns:\n The string generated by the model.\n \"\"\"\n return self._predict(prompt, stop)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/vertexai.html"}
+{"id": "372ecf19248d-0", "text": "Source code for langchain.llms.predictionguard\n\"\"\"Wrapper around Prediction Guard APIs.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class PredictionGuard(LLM):\n \"\"\"Wrapper around Prediction Guard large language models.\n To use, you should have the ``predictionguard`` python package installed, and the\n environment variable ``PREDICTIONGUARD_TOKEN`` set with your access token, or pass\n it as a named parameter to the constructor. To use Prediction Guard's API along\n with OpenAI models, set the environment variable ``OPENAI_API_KEY`` with your\n OpenAI API key as well.\n Example:\n .. code-block:: python\n pgllm = PredictionGuard(model=\"MPT-7B-Instruct\",\n token=\"my-access-token\",\n output={\n \"type\": \"boolean\"\n })\n \"\"\"\n client: Any #: :meta private:\n model: Optional[str] = \"MPT-7B-Instruct\"\n \"\"\"Model name to use.\"\"\"\n output: Optional[Dict[str, Any]] = None\n \"\"\"The output type or structure for controlling the LLM output.\"\"\"\n max_tokens: int = 256\n \"\"\"Denotes the number of tokens to predict per generation.\"\"\"\n temperature: float = 0.75\n \"\"\"A non-negative float that tunes the degree of randomness in generation.\"\"\"\n token: Optional[str] = None\n \"\"\"Your Prediction Guard access token.\"\"\"\n stop: Optional[List[str]] = None", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/predictionguard.html"}
+{"id": "372ecf19248d-1", "text": "\"\"\"Your Prediction Guard access token.\"\"\"\n stop: Optional[List[str]] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the access token and python package exists in environment.\"\"\"\n token = get_from_dict_or_env(values, \"token\", \"PREDICTIONGUARD_TOKEN\")\n try:\n import predictionguard as pg\n values[\"client\"] = pg.Client(token=token)\n except ImportError:\n raise ImportError(\n \"Could not import predictionguard python package. \"\n \"Please install it with `pip install predictionguard`.\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling the Prediction Guard API.\"\"\"\n return {\n \"max_tokens\": self.max_tokens,\n \"temperature\": self.temperature,\n }\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model\": self.model}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"predictionguard\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call out to Prediction Guard's model API.\n Args:\n prompt: The prompt to pass into the model.\n Returns:\n The string generated by the model.\n Example:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/predictionguard.html"}
+{"id": "372ecf19248d-2", "text": "Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = pgllm(\"Tell me a joke.\")\n \"\"\"\n import predictionguard as pg\n params = self._default_params\n if self.stop is not None and stop is not None:\n raise ValueError(\"`stop` found in both the input and default params.\")\n elif self.stop is not None:\n params[\"stop_sequences\"] = self.stop\n else:\n params[\"stop_sequences\"] = stop\n response = pg.Completion.create(\n model=self.model,\n prompt=prompt,\n output=self.output,\n temperature=params[\"temperature\"],\n max_tokens=params[\"max_tokens\"],\n )\n text = response[\"choices\"][0][\"text\"]\n # If stop tokens are provided, Prediction Guard's endpoint returns them.\n # In order to make this consistent with other endpoints, we strip them.\n if stop is not None or self.stop is not None:\n text = enforce_stop_tokens(text, params[\"stop_sequences\"])\n return text\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/predictionguard.html"}
+{"id": "3e23e4c538bf-0", "text": "Source code for langchain.llms.petals\n\"\"\"Wrapper around Petals API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class Petals(LLM):\n \"\"\"Wrapper around Petals Bloom models.\n To use, you should have the ``petals`` python package installed, and the\n environment variable ``HUGGINGFACE_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.llms import petals\n petals = Petals()\n \"\"\"\n client: Any\n \"\"\"The client to use for the API calls.\"\"\"\n tokenizer: Any\n \"\"\"The tokenizer to use for the API calls.\"\"\"\n model_name: str = \"bigscience/bloom-petals\"\n \"\"\"The model to use.\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use\"\"\"\n max_new_tokens: int = 256\n \"\"\"The maximum number of new tokens to generate in the completion.\"\"\"\n top_p: float = 0.9\n \"\"\"The cumulative probability for top-p sampling.\"\"\"\n top_k: Optional[int] = None\n \"\"\"The number of highest probability vocabulary tokens\n to keep for top-k-filtering.\"\"\"\n do_sample: bool = True\n \"\"\"Whether or not to use sampling; use greedy decoding otherwise.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/petals.html"}
+{"id": "3e23e4c538bf-1", "text": "\"\"\"Whether or not to use sampling; use greedy decoding otherwise.\"\"\"\n max_length: Optional[int] = None\n \"\"\"The maximum length of the sequence to be generated.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call\n not explicitly specified.\"\"\"\n huggingface_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"WARNING! {field_name} is not default parameter.\n {field_name} was transfered to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n huggingface_api_key = get_from_dict_or_env(\n values, \"huggingface_api_key\", \"HUGGINGFACE_API_KEY\"\n )\n try:\n from petals import DistributedBloomForCausalLM\n from transformers import BloomTokenizerFast", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/petals.html"}
+{"id": "3e23e4c538bf-2", "text": "from petals import DistributedBloomForCausalLM\n from transformers import BloomTokenizerFast\n model_name = values[\"model_name\"]\n values[\"tokenizer\"] = BloomTokenizerFast.from_pretrained(model_name)\n values[\"client\"] = DistributedBloomForCausalLM.from_pretrained(model_name)\n values[\"huggingface_api_key\"] = huggingface_api_key\n except ImportError:\n raise ValueError(\n \"Could not import transformers or petals python package.\"\n \"Please install with `pip install -U transformers petals`.\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling Petals API.\"\"\"\n normal_params = {\n \"temperature\": self.temperature,\n \"max_new_tokens\": self.max_new_tokens,\n \"top_p\": self.top_p,\n \"top_k\": self.top_k,\n \"do_sample\": self.do_sample,\n \"max_length\": self.max_length,\n }\n return {**normal_params, **self.model_kwargs}\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_name\": self.model_name}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"petals\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call the Petals API.\"\"\"\n params = self._default_params", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/petals.html"}
+{"id": "3e23e4c538bf-3", "text": "\"\"\"Call the Petals API.\"\"\"\n params = self._default_params\n inputs = self.tokenizer(prompt, return_tensors=\"pt\")[\"input_ids\"]\n outputs = self.client.generate(inputs, **params)\n text = self.tokenizer.decode(outputs[0])\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/petals.html"}
+{"id": "8a3f0e10b1ca-0", "text": "Source code for langchain.llms.replicate\n\"\"\"Wrapper around Replicate API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class Replicate(LLM):\n \"\"\"Wrapper around Replicate models.\n To use, you should have the ``replicate`` python package installed,\n and the environment variable ``REPLICATE_API_TOKEN`` set with your API token.\n You can find your token here: https://replicate.com/account\n The model param is required, but any other model parameters can also\n be passed in with the format input={model_param: value, ...}\n Example:\n .. code-block:: python\n from langchain.llms import Replicate\n replicate = Replicate(model=\"stability-ai/stable-diffusion: \\\n 27b93a2413e7f36cd83da926f365628\\\n 0b2931564ff050bf9575f1fdf9bcd7478\",\n input={\"image_dimensions\": \"512x512\"})\n \"\"\"\n model: str\n input: Dict[str, Any] = Field(default_factory=dict)\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n replicate_api_token: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/replicate.html"}
+{"id": "8a3f0e10b1ca-1", "text": "\"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transfered to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n replicate_api_token = get_from_dict_or_env(\n values, \"REPLICATE_API_TOKEN\", \"REPLICATE_API_TOKEN\"\n )\n values[\"replicate_api_token\"] = replicate_api_token\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of model.\"\"\"\n return \"replicate\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call to replicate endpoint.\"\"\"\n try:\n import replicate as replicate_python\n except ImportError:\n raise ImportError(\n \"Could not import replicate python package. \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/replicate.html"}
+{"id": "8a3f0e10b1ca-2", "text": "raise ImportError(\n \"Could not import replicate python package. \"\n \"Please install it with `pip install replicate`.\"\n )\n # get the model and version\n model_str, version_str = self.model.split(\":\")\n model = replicate_python.models.get(model_str)\n version = model.versions.get(version_str)\n # sort through the openapi schema to get the name of the first input\n input_properties = sorted(\n version.openapi_schema[\"components\"][\"schemas\"][\"Input\"][\n \"properties\"\n ].items(),\n key=lambda item: item[1].get(\"x-order\", 0),\n )\n first_input_name = input_properties[0][0]\n inputs = {first_input_name: prompt, **self.input}\n iterator = replicate_python.run(self.model, input={**inputs})\n return \"\".join([output for output in iterator])\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/replicate.html"}
+{"id": "b56a5c174365-0", "text": "Source code for langchain.llms.deepinfra\n\"\"\"Wrapper around DeepInfra APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nDEFAULT_MODEL_ID = \"google/flan-t5-xl\"\n[docs]class DeepInfra(LLM):\n \"\"\"Wrapper around DeepInfra deployed models.\n To use, you should have the ``requests`` python package installed, and the\n environment variable ``DEEPINFRA_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Only supports `text-generation` and `text2text-generation` for now.\n Example:\n .. code-block:: python\n from langchain.llms import DeepInfra\n di = DeepInfra(model_id=\"google/flan-t5-xl\",\n deepinfra_api_token=\"my-api-key\")\n \"\"\"\n model_id: str = DEFAULT_MODEL_ID\n model_kwargs: Optional[dict] = None\n deepinfra_api_token: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n deepinfra_api_token = get_from_dict_or_env(\n values, \"deepinfra_api_token\", \"DEEPINFRA_API_TOKEN\"\n )\n values[\"deepinfra_api_token\"] = deepinfra_api_token\n return values\n @property", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/deepinfra.html"}
+{"id": "b56a5c174365-1", "text": "return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"model_id\": self.model_id},\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"deepinfra\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call out to DeepInfra's inference API endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = di(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n # HTTP headers for authorization\n headers = {\n \"Authorization\": f\"bearer {self.deepinfra_api_token}\",\n \"Content-Type\": \"application/json\",\n }\n try:\n res = requests.post(\n f\"https://api.deepinfra.com/v1/inference/{self.model_id}\",\n headers=headers,\n json={\"input\": prompt, **_model_kwargs},\n )\n except requests.exceptions.RequestException as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n if res.status_code != 200:\n raise ValueError(\n \"Error raised by inference API HTTP code: %s, %s\"\n % (res.status_code, res.text)\n )\n try:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/deepinfra.html"}
+{"id": "b56a5c174365-2", "text": "% (res.status_code, res.text)\n )\n try:\n t = res.json()\n text = t[\"results\"][0][\"generated_text\"]\n except requests.exceptions.JSONDecodeError as e:\n raise ValueError(\n f\"Error raised by inference API: {e}.\\nResponse: {res.text}\"\n )\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/deepinfra.html"}
+{"id": "d73cbb9a2c56-0", "text": "Source code for langchain.llms.baseten\n\"\"\"Wrapper around Baseten deployed model API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nlogger = logging.getLogger(__name__)\n[docs]class Baseten(LLM):\n \"\"\"Use your Baseten models in Langchain\n To use, you should have the ``baseten`` python package installed,\n and run ``baseten.login()`` with your Baseten API key.\n The required ``model`` param can be either a model id or model\n version id. Using a model version ID will result in\n slightly faster invocation.\n Any other model parameters can also\n be passed in with the format input={model_param: value, ...}\n The Baseten model must accept a dictionary of input with the key\n \"prompt\" and return a dictionary with a key \"data\" which maps\n to a list of response strings.\n Example:\n .. code-block:: python\n from langchain.llms import Baseten\n my_model = Baseten(model=\"MODEL_ID\")\n output = my_model(\"prompt\")\n \"\"\"\n model: str\n input: Dict[str, Any] = Field(default_factory=dict)\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of model.\"\"\"\n return \"baseten\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/baseten.html"}
+{"id": "d73cbb9a2c56-1", "text": "\"\"\"Return type of model.\"\"\"\n return \"baseten\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call to Baseten deployed model endpoint.\"\"\"\n try:\n import baseten\n except ImportError as exc:\n raise ValueError(\n \"Could not import Baseten Python package. \"\n \"Please install it with `pip install baseten`.\"\n ) from exc\n # get the model and version\n try:\n model = baseten.deployed_model_version_id(self.model)\n response = model.predict({\"prompt\": prompt})\n except baseten.common.core.ApiError:\n model = baseten.deployed_model_id(self.model)\n response = model.predict({\"prompt\": prompt})\n return \"\".join(response)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/baseten.html"}
+{"id": "df79ddfb6efb-0", "text": "Source code for langchain.llms.ai21\n\"\"\"Wrapper around AI21 APIs.\"\"\"\nfrom typing import Any, Dict, List, Optional\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.utils import get_from_dict_or_env\nclass AI21PenaltyData(BaseModel):\n \"\"\"Parameters for AI21 penalty data.\"\"\"\n scale: int = 0\n applyToWhitespaces: bool = True\n applyToPunctuations: bool = True\n applyToNumbers: bool = True\n applyToStopwords: bool = True\n applyToEmojis: bool = True\n[docs]class AI21(LLM):\n \"\"\"Wrapper around AI21 large language models.\n To use, you should have the environment variable ``AI21_API_KEY``\n set with your API key.\n Example:\n .. code-block:: python\n from langchain.llms import AI21\n ai21 = AI21(model=\"j2-jumbo-instruct\")\n \"\"\"\n model: str = \"j2-jumbo-instruct\"\n \"\"\"Model name to use.\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use.\"\"\"\n maxTokens: int = 256\n \"\"\"The maximum number of tokens to generate in the completion.\"\"\"\n minTokens: int = 0\n \"\"\"The minimum number of tokens to generate in the completion.\"\"\"\n topP: float = 1.0\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n presencePenalty: AI21PenaltyData = AI21PenaltyData()\n \"\"\"Penalizes repeated tokens.\"\"\"\n countPenalty: AI21PenaltyData = AI21PenaltyData()", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/ai21.html"}
+{"id": "df79ddfb6efb-1", "text": "countPenalty: AI21PenaltyData = AI21PenaltyData()\n \"\"\"Penalizes repeated tokens according to count.\"\"\"\n frequencyPenalty: AI21PenaltyData = AI21PenaltyData()\n \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n numResults: int = 1\n \"\"\"How many completions to generate for each prompt.\"\"\"\n logitBias: Optional[Dict[str, float]] = None\n \"\"\"Adjust the probability of specific tokens being generated.\"\"\"\n ai21_api_key: Optional[str] = None\n stop: Optional[List[str]] = None\n base_url: Optional[str] = None\n \"\"\"Base url to use, if None decides based on model name.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key exists in environment.\"\"\"\n ai21_api_key = get_from_dict_or_env(values, \"ai21_api_key\", \"AI21_API_KEY\")\n values[\"ai21_api_key\"] = ai21_api_key\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling AI21 API.\"\"\"\n return {\n \"temperature\": self.temperature,\n \"maxTokens\": self.maxTokens,\n \"minTokens\": self.minTokens,\n \"topP\": self.topP,\n \"presencePenalty\": self.presencePenalty.dict(),\n \"countPenalty\": self.countPenalty.dict(),\n \"frequencyPenalty\": self.frequencyPenalty.dict(),\n \"numResults\": self.numResults,\n \"logitBias\": self.logitBias,\n }\n @property", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/ai21.html"}
+{"id": "df79ddfb6efb-2", "text": "\"logitBias\": self.logitBias,\n }\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model\": self.model}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"ai21\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call out to AI21's complete endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = ai21(\"Tell me a joke.\")\n \"\"\"\n if self.stop is not None and stop is not None:\n raise ValueError(\"`stop` found in both the input and default params.\")\n elif self.stop is not None:\n stop = self.stop\n elif stop is None:\n stop = []\n if self.base_url is not None:\n base_url = self.base_url\n else:\n if self.model in (\"j1-grande-instruct\",):\n base_url = \"https://api.ai21.com/studio/v1/experimental\"\n else:\n base_url = \"https://api.ai21.com/studio/v1\"\n response = requests.post(\n url=f\"{base_url}/{self.model}/complete\",\n headers={\"Authorization\": f\"Bearer {self.ai21_api_key}\"},", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/ai21.html"}
+{"id": "df79ddfb6efb-3", "text": "headers={\"Authorization\": f\"Bearer {self.ai21_api_key}\"},\n json={\"prompt\": prompt, \"stopSequences\": stop, **self._default_params},\n )\n if response.status_code != 200:\n optional_detail = response.json().get(\"error\")\n raise ValueError(\n f\"AI21 /complete call failed with status code {response.status_code}.\"\n f\" Details: {optional_detail}\"\n )\n response_json = response.json()\n return response_json[\"completions\"][0][\"data\"][\"text\"]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/ai21.html"}
+{"id": "31e2cb48f314-0", "text": "Source code for langchain.llms.openlm\nfrom typing import Any, Dict\nfrom pydantic import root_validator\nfrom langchain.llms.openai import BaseOpenAI\n[docs]class OpenLM(BaseOpenAI):\n @property\n def _invocation_params(self) -> Dict[str, Any]:\n return {**{\"model\": self.model_name}, **super()._invocation_params}\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n try:\n import openlm\n values[\"client\"] = openlm.Completion\n except ImportError:\n raise ValueError(\n \"Could not import openlm python package. \"\n \"Please install it with `pip install openlm`.\"\n )\n if values[\"streaming\"]:\n raise ValueError(\"Streaming not supported with openlm\")\n return values\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openlm.html"}
+{"id": "28b3748908ef-0", "text": "Source code for langchain.llms.self_hosted\n\"\"\"Run model inference on self-hosted remote hardware.\"\"\"\nimport importlib.util\nimport logging\nimport pickle\nfrom typing import Any, Callable, List, Mapping, Optional\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nlogger = logging.getLogger(__name__)\ndef _generate_text(\n pipeline: Any,\n prompt: str,\n *args: Any,\n stop: Optional[List[str]] = None,\n **kwargs: Any,\n) -> str:\n \"\"\"Inference function to send to the remote hardware.\n Accepts a pipeline callable (or, more likely,\n a key pointing to the model on the cluster's object store)\n and returns text predictions for each document\n in the batch.\n \"\"\"\n text = pipeline(prompt, *args, **kwargs)\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text\ndef _send_pipeline_to_device(pipeline: Any, device: int) -> Any:\n \"\"\"Send a pipeline to a device on the cluster.\"\"\"\n if isinstance(pipeline, str):\n with open(pipeline, \"rb\") as f:\n pipeline = pickle.load(f)\n if importlib.util.find_spec(\"torch\") is not None:\n import torch\n cuda_device_count = torch.cuda.device_count()\n if device < -1 or (device >= cuda_device_count):\n raise ValueError(\n f\"Got device=={device}, \"\n f\"device is required to be within [-1, {cuda_device_count})\"\n )\n if device < 0 and cuda_device_count > 0:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html"}
+{"id": "28b3748908ef-1", "text": ")\n if device < 0 and cuda_device_count > 0:\n logger.warning(\n \"Device has %d GPUs available. \"\n \"Provide device={deviceId} to `from_model_id` to use available\"\n \"GPUs for execution. deviceId is -1 for CPU and \"\n \"can be a positive integer associated with CUDA device id.\",\n cuda_device_count,\n )\n pipeline.device = torch.device(device)\n pipeline.model = pipeline.model.to(pipeline.device)\n return pipeline\n[docs]class SelfHostedPipeline(LLM):\n \"\"\"Run model inference on self-hosted remote hardware.\n Supported hardware includes auto-launched instances on AWS, GCP, Azure,\n and Lambda, as well as servers specified\n by IP address and SSH credentials (such as on-prem, or another\n cloud like Paperspace, Coreweave, etc.).\n To use, you should have the ``runhouse`` python package installed.\n Example for custom pipeline and inference functions:\n .. code-block:: python\n from langchain.llms import SelfHostedPipeline\n from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n import runhouse as rh\n def load_pipeline():\n tokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\n model = AutoModelForCausalLM.from_pretrained(\"gpt2\")\n return pipeline(\n \"text-generation\", model=model, tokenizer=tokenizer,\n max_new_tokens=10\n )\n def inference_fn(pipeline, prompt, stop = None):\n return pipeline(prompt)[0][\"generated_text\"]\n gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n llm = SelfHostedPipeline(\n model_load_fn=load_pipeline,", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html"}
+{"id": "28b3748908ef-2", "text": "llm = SelfHostedPipeline(\n model_load_fn=load_pipeline,\n hardware=gpu,\n model_reqs=model_reqs, inference_fn=inference_fn\n )\n Example for <2GB model (can be serialized and sent directly to the server):\n .. code-block:: python\n from langchain.llms import SelfHostedPipeline\n import runhouse as rh\n gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n my_model = ...\n llm = SelfHostedPipeline.from_pipeline(\n pipeline=my_model,\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n )\n Example passing model path for larger models:\n .. code-block:: python\n from langchain.llms import SelfHostedPipeline\n import runhouse as rh\n import pickle\n from transformers import pipeline\n generator = pipeline(model=\"gpt2\")\n rh.blob(pickle.dumps(generator), path=\"models/pipeline.pkl\"\n ).save().to(gpu, path=\"models\")\n llm = SelfHostedPipeline.from_pipeline(\n pipeline=\"models/pipeline.pkl\",\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n )\n \"\"\"\n pipeline_ref: Any #: :meta private:\n client: Any #: :meta private:\n inference_fn: Callable = _generate_text #: :meta private:\n \"\"\"Inference function to send to the remote hardware.\"\"\"\n hardware: Any\n \"\"\"Remote hardware to send the inference function to.\"\"\"\n model_load_fn: Callable\n \"\"\"Function to load the model remotely on the server.\"\"\"\n load_fn_kwargs: Optional[dict] = None", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html"}
+{"id": "28b3748908ef-3", "text": "load_fn_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model load function.\"\"\"\n model_reqs: List[str] = [\"./\", \"torch\"]\n \"\"\"Requirements to install on hardware to inference the model.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def __init__(self, **kwargs: Any):\n \"\"\"Init the pipeline with an auxiliary function.\n The load function must be in global scope to be imported\n and run on the server, i.e. in a module and not a REPL or closure.\n Then, initialize the remote inference function.\n \"\"\"\n super().__init__(**kwargs)\n try:\n import runhouse as rh\n except ImportError:\n raise ImportError(\n \"Could not import runhouse python package. \"\n \"Please install it with `pip install runhouse`.\"\n )\n remote_load_fn = rh.function(fn=self.model_load_fn).to(\n self.hardware, reqs=self.model_reqs\n )\n _load_fn_kwargs = self.load_fn_kwargs or {}\n self.pipeline_ref = remote_load_fn.remote(**_load_fn_kwargs)\n self.client = rh.function(fn=self.inference_fn).to(\n self.hardware, reqs=self.model_reqs\n )\n[docs] @classmethod\n def from_pipeline(\n cls,\n pipeline: Any,\n hardware: Any,\n model_reqs: Optional[List[str]] = None,\n device: int = 0,\n **kwargs: Any,\n ) -> LLM:\n \"\"\"Init the SelfHostedPipeline from a pipeline object or string.\"\"\"\n if not isinstance(pipeline, str):\n logger.warning(", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html"}
+{"id": "28b3748908ef-4", "text": "if not isinstance(pipeline, str):\n logger.warning(\n \"Serializing pipeline to send to remote hardware. \"\n \"Note, it can be quite slow\"\n \"to serialize and send large models with each execution. \"\n \"Consider sending the pipeline\"\n \"to the cluster and passing the path to the pipeline instead.\"\n )\n load_fn_kwargs = {\"pipeline\": pipeline, \"device\": device}\n return cls(\n load_fn_kwargs=load_fn_kwargs,\n model_load_fn=_send_pipeline_to_device,\n hardware=hardware,\n model_reqs=[\"transformers\", \"torch\"] + (model_reqs or []),\n **kwargs,\n )\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"hardware\": self.hardware},\n }\n @property\n def _llm_type(self) -> str:\n return \"self_hosted_llm\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n return self.client(pipeline=self.pipeline_ref, prompt=prompt, stop=stop)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html"}
+{"id": "91d08857f164-0", "text": "Source code for langchain.llms.modal\n\"\"\"Wrapper around Modal API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nlogger = logging.getLogger(__name__)\n[docs]class Modal(LLM):\n \"\"\"Wrapper around Modal large language models.\n To use, you should have the ``modal-client`` python package installed.\n Any parameters that are valid to be passed to the call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.llms import Modal\n modal = Modal(endpoint_url=\"\")\n \"\"\"\n endpoint_url: str = \"\"\n \"\"\"model endpoint to use\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not\n explicitly specified.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transfered to model_kwargs.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/modal.html"}
+{"id": "91d08857f164-1", "text": "logger.warning(\n f\"\"\"{field_name} was transfered to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"endpoint_url\": self.endpoint_url},\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"modal\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call to Modal endpoint.\"\"\"\n params = self.model_kwargs or {}\n response = requests.post(\n url=self.endpoint_url,\n headers={\n \"Content-Type\": \"application/json\",\n },\n json={\"prompt\": prompt, **params},\n )\n try:\n if prompt in response.json()[\"prompt\"]:\n response_json = response.json()\n except KeyError:\n raise ValueError(\"LangChain requires 'prompt' key in response.\")\n text = response_json[\"prompt\"]\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/modal.html"}
+{"id": "a238c1f73048-0", "text": "Source code for langchain.llms.beam\n\"\"\"Wrapper around Beam API.\"\"\"\nimport base64\nimport json\nimport logging\nimport subprocess\nimport textwrap\nimport time\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\nDEFAULT_NUM_TRIES = 10\nDEFAULT_SLEEP_TIME = 4\n[docs]class Beam(LLM):\n \"\"\"Wrapper around Beam API for gpt2 large language model.\n To use, you should have the ``beam-sdk`` python package installed,\n and the environment variable ``BEAM_CLIENT_ID`` set with your client id\n and ``BEAM_CLIENT_SECRET`` set with your client secret. Information on how\n to get these is available here: https://docs.beam.cloud/account/api-keys.\n The wrapper can then be called as follows, where the name, cpu, memory, gpu,\n python version, and python packages can be updated accordingly. Once deployed,\n the instance can be called.\n Example:\n .. code-block:: python\n llm = Beam(model_name=\"gpt2\",\n name=\"langchain-gpt2\",\n cpu=8,\n memory=\"32Gi\",\n gpu=\"A10G\",\n python_version=\"python3.8\",\n python_packages=[\n \"diffusers[torch]>=0.10\",\n \"transformers\",\n \"torch\",\n \"pillow\",\n \"accelerate\",\n \"safetensors\",\n \"xformers\",],\n max_length=50)\n llm._deploy()", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/beam.html"}
+{"id": "a238c1f73048-1", "text": "max_length=50)\n llm._deploy()\n call_result = llm._call(input)\n \"\"\"\n model_name: str = \"\"\n name: str = \"\"\n cpu: str = \"\"\n memory: str = \"\"\n gpu: str = \"\"\n python_version: str = \"\"\n python_packages: List[str] = []\n max_length: str = \"\"\n url: str = \"\"\n \"\"\"model endpoint to use\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not\n explicitly specified.\"\"\"\n beam_client_id: str = \"\"\n beam_client_secret: str = \"\"\n app_id: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transfered to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/beam.html"}
+{"id": "a238c1f73048-2", "text": "@root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n beam_client_id = get_from_dict_or_env(\n values, \"beam_client_id\", \"BEAM_CLIENT_ID\"\n )\n beam_client_secret = get_from_dict_or_env(\n values, \"beam_client_secret\", \"BEAM_CLIENT_SECRET\"\n )\n values[\"beam_client_id\"] = beam_client_id\n values[\"beam_client_secret\"] = beam_client_secret\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model_name\": self.model_name,\n \"name\": self.name,\n \"cpu\": self.cpu,\n \"memory\": self.memory,\n \"gpu\": self.gpu,\n \"python_version\": self.python_version,\n \"python_packages\": self.python_packages,\n \"max_length\": self.max_length,\n \"model_kwargs\": self.model_kwargs,\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"beam\"\n[docs] def app_creation(self) -> None:\n \"\"\"Creates a Python file which will contain your Beam app definition.\"\"\"\n script = textwrap.dedent(\n \"\"\"\\\n import beam\n # The environment your code will run on\n app = beam.App(\n name=\"{name}\",\n cpu={cpu},\n memory=\"{memory}\",\n gpu=\"{gpu}\",\n python_version=\"{python_version}\",\n python_packages={python_packages},\n )\n app.Trigger.RestAPI(", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/beam.html"}
+{"id": "a238c1f73048-3", "text": "python_packages={python_packages},\n )\n app.Trigger.RestAPI(\n inputs={{\"prompt\": beam.Types.String(), \"max_length\": beam.Types.String()}},\n outputs={{\"text\": beam.Types.String()}},\n handler=\"run.py:beam_langchain\",\n )\n \"\"\"\n )\n script_name = \"app.py\"\n with open(script_name, \"w\") as file:\n file.write(\n script.format(\n name=self.name,\n cpu=self.cpu,\n memory=self.memory,\n gpu=self.gpu,\n python_version=self.python_version,\n python_packages=self.python_packages,\n )\n )\n[docs] def run_creation(self) -> None:\n \"\"\"Creates a Python file which will be deployed on beam.\"\"\"\n script = textwrap.dedent(\n \"\"\"\n import os\n import transformers\n from transformers import GPT2LMHeadModel, GPT2Tokenizer\n model_name = \"{model_name}\"\n def beam_langchain(**inputs):\n prompt = inputs[\"prompt\"]\n length = inputs[\"max_length\"]\n tokenizer = GPT2Tokenizer.from_pretrained(model_name)\n model = GPT2LMHeadModel.from_pretrained(model_name)\n encodedPrompt = tokenizer.encode(prompt, return_tensors='pt')\n outputs = model.generate(encodedPrompt, max_length=int(length),\n do_sample=True, pad_token_id=tokenizer.eos_token_id)\n output = tokenizer.decode(outputs[0], skip_special_tokens=True)\n print(output)\n return {{\"text\": output}}\n \"\"\"\n )\n script_name = \"run.py\"\n with open(script_name, \"w\") as file:\n file.write(script.format(model_name=self.model_name))", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/beam.html"}
+{"id": "a238c1f73048-4", "text": "file.write(script.format(model_name=self.model_name))\n def _deploy(self) -> str:\n \"\"\"Call to Beam.\"\"\"\n try:\n import beam # type: ignore\n if beam.__path__ == \"\":\n raise ImportError\n except ImportError:\n raise ImportError(\n \"Could not import beam python package. \"\n \"Please install it with `curl \"\n \"https://raw.githubusercontent.com/slai-labs\"\n \"/get-beam/main/get-beam.sh -sSfL | sh`.\"\n )\n self.app_creation()\n self.run_creation()\n process = subprocess.run(\n \"beam deploy app.py\", shell=True, capture_output=True, text=True\n )\n if process.returncode == 0:\n output = process.stdout\n logger.info(output)\n lines = output.split(\"\\n\")\n for line in lines:\n if line.startswith(\" i Send requests to: https://apps.beam.cloud/\"):\n self.app_id = line.split(\"/\")[-1]\n self.url = line.split(\":\")[1].strip()\n return self.app_id\n raise ValueError(\n f\"\"\"Failed to retrieve the appID from the deployment output.\n Deployment output: {output}\"\"\"\n )\n else:\n raise ValueError(f\"Deployment failed. Error: {process.stderr}\")\n @property\n def authorization(self) -> str:\n if self.beam_client_id:\n credential_str = self.beam_client_id + \":\" + self.beam_client_secret\n else:\n credential_str = self.beam_client_secret\n return base64.b64encode(credential_str.encode()).decode()\n def _call(\n self,\n prompt: str,\n stop: Optional[list] = None,", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/beam.html"}
+{"id": "a238c1f73048-5", "text": "self,\n prompt: str,\n stop: Optional[list] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call to Beam.\"\"\"\n url = \"https://apps.beam.cloud/\" + self.app_id if self.app_id else self.url\n payload = {\"prompt\": prompt, \"max_length\": self.max_length}\n headers = {\n \"Accept\": \"*/*\",\n \"Accept-Encoding\": \"gzip, deflate\",\n \"Authorization\": \"Basic \" + self.authorization,\n \"Connection\": \"keep-alive\",\n \"Content-Type\": \"application/json\",\n }\n for _ in range(DEFAULT_NUM_TRIES):\n request = requests.post(url, headers=headers, data=json.dumps(payload))\n if request.status_code == 200:\n return request.json()[\"text\"]\n time.sleep(DEFAULT_SLEEP_TIME)\n logger.warning(\"Unable to successfully call model.\")\n return \"\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/beam.html"}
+{"id": "9409707c4934-0", "text": "Source code for langchain.llms.openai\n\"\"\"Wrapper around OpenAI APIs.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport sys\nimport warnings\nfrom typing import (\n AbstractSet,\n Any,\n Callable,\n Collection,\n Dict,\n Generator,\n List,\n Literal,\n Mapping,\n Optional,\n Set,\n Tuple,\n Union,\n)\nfrom pydantic import Extra, Field, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms.base import BaseLLM\nfrom langchain.schema import Generation, LLMResult\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef update_token_usage(\n keys: Set[str], response: Dict[str, Any], token_usage: Dict[str, Any]\n) -> None:\n \"\"\"Update token usage.\"\"\"\n _keys_to_use = keys.intersection(response[\"usage\"])\n for _key in _keys_to_use:\n if _key not in token_usage:\n token_usage[_key] = response[\"usage\"][_key]\n else:\n token_usage[_key] += response[\"usage\"][_key]\ndef _update_response(response: Dict[str, Any], stream_response: Dict[str, Any]) -> None:\n \"\"\"Update response from the stream response.\"\"\"\n response[\"choices\"][0][\"text\"] += stream_response[\"choices\"][0][\"text\"]\n response[\"choices\"][0][\"finish_reason\"] = stream_response[\"choices\"][0][\n \"finish_reason\"\n ]", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "9409707c4934-1", "text": "\"finish_reason\"\n ]\n response[\"choices\"][0][\"logprobs\"] = stream_response[\"choices\"][0][\"logprobs\"]\ndef _streaming_response_template() -> Dict[str, Any]:\n return {\n \"choices\": [\n {\n \"text\": \"\",\n \"finish_reason\": None,\n \"logprobs\": None,\n }\n ]\n }\ndef _create_retry_decorator(llm: Union[BaseOpenAI, OpenAIChat]) -> Callable[[Any], Any]:\n import openai\n min_seconds = 4\n max_seconds = 10\n # Wait 2^x * 1 second between each retry starting with\n # 4 seconds, then up to 10 seconds, then 10 seconds afterwards\n return retry(\n reraise=True,\n stop=stop_after_attempt(llm.max_retries),\n wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(openai.error.Timeout)\n | retry_if_exception_type(openai.error.APIError)\n | retry_if_exception_type(openai.error.APIConnectionError)\n | retry_if_exception_type(openai.error.RateLimitError)\n | retry_if_exception_type(openai.error.ServiceUnavailableError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\ndef completion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the completion call.\"\"\"\n retry_decorator = _create_retry_decorator(llm)\n @retry_decorator\n def _completion_with_retry(**kwargs: Any) -> Any:\n return llm.client.create(**kwargs)\n return _completion_with_retry(**kwargs)", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
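+`_create_retry_decorator` above wraps the completion call in tenacity's exponential backoff, retrying only on transient OpenAI errors. A self-contained sketch of the same pattern, with a stand-in exception class instead of the `openai.error` types:
+.. code-block:: python
+    import logging
+    from tenacity import (
+        before_sleep_log,
+        retry,
+        retry_if_exception_type,
+        stop_after_attempt,
+        wait_exponential,
+    )
+    logger = logging.getLogger(__name__)
+    class TransientAPIError(Exception):
+        """Stand-in for openai.error.Timeout, RateLimitError, etc."""
+    # Wait 2^x seconds between attempts, clamped to [4, 10], up to 6 attempts
+    @retry(
+        reraise=True,
+        stop=stop_after_attempt(6),
+        wait=wait_exponential(multiplier=1, min=4, max=10),
+        retry=retry_if_exception_type(TransientAPIError),
+        before_sleep=before_sleep_log(logger, logging.WARNING),
+    )
+    def flaky_call() -> str:
+        # A real API call would go here; raising TransientAPIError triggers a retry
+        return "ok"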
+{"id": "9409707c4934-2", "text": "return llm.client.create(**kwargs)\n return _completion_with_retry(**kwargs)\nasync def acompletion_with_retry(\n llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any\n) -> Any:\n \"\"\"Use tenacity to retry the async completion call.\"\"\"\n retry_decorator = _create_retry_decorator(llm)\n @retry_decorator\n async def _completion_with_retry(**kwargs: Any) -> Any:\n # Use OpenAI's async api https://github.com/openai/openai-python#async-api\n return await llm.client.acreate(**kwargs)\n return await _completion_with_retry(**kwargs)\nclass BaseOpenAI(BaseLLM):\n \"\"\"Wrapper around OpenAI large language models.\"\"\"\n client: Any #: :meta private:\n model_name: str = Field(\"text-davinci-003\", alias=\"model\")\n \"\"\"Model name to use.\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use.\"\"\"\n max_tokens: int = 256\n \"\"\"The maximum number of tokens to generate in the completion.\n -1 returns as many tokens as possible given the prompt and\n the model's maximal context size.\"\"\"\n top_p: float = 1\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n frequency_penalty: float = 0\n \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n presence_penalty: float = 0\n \"\"\"Penalizes repeated tokens.\"\"\"\n n: int = 1\n \"\"\"How many completions to generate for each prompt.\"\"\"\n best_of: int = 1\n \"\"\"Generates best_of completions server-side and returns the \"best\".\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "9409707c4934-3", "text": "model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not explicitly specified.\"\"\"\n openai_api_key: Optional[str] = None\n openai_api_base: Optional[str] = None\n openai_organization: Optional[str] = None\n # to support explicit proxy for OpenAI\n openai_proxy: Optional[str] = None\n batch_size: int = 20\n \"\"\"Batch size to use when passing multiple documents to generate.\"\"\"\n request_timeout: Optional[Union[float, Tuple[float, float]]] = None\n \"\"\"Timeout for requests to OpenAI completion API. Default is 600 seconds.\"\"\"\n logit_bias: Optional[Dict[str, float]] = Field(default_factory=dict)\n \"\"\"Adjust the probability of specific tokens being generated.\"\"\"\n max_retries: int = 6\n \"\"\"Maximum number of retries to make when generating.\"\"\"\n streaming: bool = False\n \"\"\"Whether to stream the results or not.\"\"\"\n allowed_special: Union[Literal[\"all\"], AbstractSet[str]] = set()\n \"\"\"Set of special tokens that are allowed.\"\"\"\n disallowed_special: Union[Literal[\"all\"], Collection[str]] = \"all\"\n \"\"\"Set of special tokens that are not allowed.\"\"\"\n def __new__(cls, **data: Any) -> Union[OpenAIChat, BaseOpenAI]: # type: ignore\n \"\"\"Initialize the OpenAI object.\"\"\"\n model_name = data.get(\"model_name\", \"\")\n if model_name.startswith(\"gpt-3.5-turbo\") or model_name.startswith(\"gpt-4\"):\n warnings.warn(\n \"You are trying to use a chat model. This way of initializing it is \"\n \"no longer supported. Instead, please use: \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "9409707c4934-4", "text": "\"no longer supported. Instead, please use: \"\n \"`from langchain.chat_models import ChatOpenAI`\"\n )\n return OpenAIChat(**data)\n return super().__new__(cls)\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.ignore\n allow_population_by_field_name = True\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = cls.all_required_field_names()\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n if field_name not in all_required_field_names:\n logger.warning(\n f\"\"\"WARNING! {field_name} is not a default parameter.\n {field_name} was transferred to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n invalid_model_kwargs = all_required_field_names.intersection(extra.keys())\n if invalid_model_kwargs:\n raise ValueError(\n f\"Parameters {invalid_model_kwargs} should be specified explicitly. \"\n f\"Instead they were passed in as part of `model_kwargs` parameter.\"\n )\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n values[\"openai_api_key\"] = get_from_dict_or_env(\n values, \"openai_api_key\", \"OPENAI_API_KEY\"\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "9409707c4934-5", "text": "values, \"openai_api_key\", \"OPENAI_API_KEY\"\n )\n values[\"openai_api_base\"] = get_from_dict_or_env(\n values,\n \"openai_api_base\",\n \"OPENAI_API_BASE\",\n default=\"\",\n )\n values[\"openai_proxy\"] = get_from_dict_or_env(\n values,\n \"openai_proxy\",\n \"OPENAI_PROXY\",\n default=\"\",\n )\n values[\"openai_organization\"] = get_from_dict_or_env(\n values,\n \"openai_organization\",\n \"OPENAI_ORGANIZATION\",\n default=\"\",\n )\n try:\n import openai\n values[\"client\"] = openai.Completion\n except ImportError:\n raise ImportError(\n \"Could not import openai python package. \"\n \"Please install it with `pip install openai`.\"\n )\n if values[\"streaming\"] and values[\"n\"] > 1:\n raise ValueError(\"Cannot stream results when n > 1.\")\n if values[\"streaming\"] and values[\"best_of\"] > 1:\n raise ValueError(\"Cannot stream results when best_of > 1.\")\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling OpenAI API.\"\"\"\n normal_params = {\n \"temperature\": self.temperature,\n \"max_tokens\": self.max_tokens,\n \"top_p\": self.top_p,\n \"frequency_penalty\": self.frequency_penalty,\n \"presence_penalty\": self.presence_penalty,\n \"n\": self.n,\n \"request_timeout\": self.request_timeout,\n \"logit_bias\": self.logit_bias,\n }", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "9409707c4934-6", "text": "\"logit_bias\": self.logit_bias,\n }\n # Azure gpt-35-turbo doesn't support best_of\n # don't specify best_of if it is 1\n if self.best_of > 1:\n normal_params[\"best_of\"] = self.best_of\n return {**normal_params, **self.model_kwargs}\n def _generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> LLMResult:\n \"\"\"Call out to OpenAI's endpoint with k unique prompts.\n Args:\n prompts: The prompts to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The full LLM output.\n Example:\n .. code-block:: python\n response = openai.generate([\"Tell me a joke.\"])\n \"\"\"\n # TODO: write a unit test for this\n params = self._invocation_params\n sub_prompts = self.get_sub_prompts(params, prompts, stop)\n choices = []\n token_usage: Dict[str, int] = {}\n # Get the token usage from the response.\n # Includes prompt, completion, and total tokens used.\n _keys = {\"completion_tokens\", \"prompt_tokens\", \"total_tokens\"}\n for _prompts in sub_prompts:\n if self.streaming:\n if len(_prompts) > 1:\n raise ValueError(\"Cannot stream results with multiple prompts.\")\n params[\"stream\"] = True\n response = _streaming_response_template()\n for stream_resp in completion_with_retry(\n self, prompt=_prompts, **params\n ):", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "9409707c4934-7", "text": "self, prompt=_prompts, **params\n ):\n if run_manager:\n run_manager.on_llm_new_token(\n stream_resp[\"choices\"][0][\"text\"],\n verbose=self.verbose,\n logprobs=stream_resp[\"choices\"][0][\"logprobs\"],\n )\n _update_response(response, stream_resp)\n choices.extend(response[\"choices\"])\n else:\n response = completion_with_retry(self, prompt=_prompts, **params)\n choices.extend(response[\"choices\"])\n if not self.streaming:\n # Can't update token usage if streaming\n update_token_usage(_keys, response, token_usage)\n return self.create_llm_result(choices, prompts, token_usage)\n async def _agenerate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n ) -> LLMResult:\n \"\"\"Call out to OpenAI's endpoint async with k unique prompts.\"\"\"\n params = self._invocation_params\n sub_prompts = self.get_sub_prompts(params, prompts, stop)\n choices = []\n token_usage: Dict[str, int] = {}\n # Get the token usage from the response.\n # Includes prompt, completion, and total tokens used.\n _keys = {\"completion_tokens\", \"prompt_tokens\", \"total_tokens\"}\n for _prompts in sub_prompts:\n if self.streaming:\n if len(_prompts) > 1:\n raise ValueError(\"Cannot stream results with multiple prompts.\")\n params[\"stream\"] = True\n response = _streaming_response_template()\n async for stream_resp in await acompletion_with_retry(\n self, prompt=_prompts, **params", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "9409707c4934-8", "text": "self, prompt=_prompts, **params\n ):\n if run_manager:\n await run_manager.on_llm_new_token(\n stream_resp[\"choices\"][0][\"text\"],\n verbose=self.verbose,\n logprobs=stream_resp[\"choices\"][0][\"logprobs\"],\n )\n _update_response(response, stream_resp)\n choices.extend(response[\"choices\"])\n else:\n response = await acompletion_with_retry(self, prompt=_prompts, **params)\n choices.extend(response[\"choices\"])\n if not self.streaming:\n # Can't update token usage if streaming\n update_token_usage(_keys, response, token_usage)\n return self.create_llm_result(choices, prompts, token_usage)\n def get_sub_prompts(\n self,\n params: Dict[str, Any],\n prompts: List[str],\n stop: Optional[List[str]] = None,\n ) -> List[List[str]]:\n \"\"\"Get the sub prompts for llm call.\"\"\"\n if stop is not None:\n if \"stop\" in params:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params[\"stop\"] = stop\n if params[\"max_tokens\"] == -1:\n if len(prompts) != 1:\n raise ValueError(\n \"max_tokens set to -1 not supported for multiple inputs.\"\n )\n params[\"max_tokens\"] = self.max_tokens_for_prompt(prompts[0])\n sub_prompts = [\n prompts[i : i + self.batch_size]\n for i in range(0, len(prompts), self.batch_size)\n ]\n return sub_prompts\n def create_llm_result(\n self, choices: Any, prompts: List[str], token_usage: Dict[str, int]", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "9409707c4934-9", "text": ") -> LLMResult:\n \"\"\"Create the LLMResult from the choices and prompts.\"\"\"\n generations = []\n for i, _ in enumerate(prompts):\n sub_choices = choices[i * self.n : (i + 1) * self.n]\n generations.append(\n [\n Generation(\n text=choice[\"text\"],\n generation_info=dict(\n finish_reason=choice.get(\"finish_reason\"),\n logprobs=choice.get(\"logprobs\"),\n ),\n )\n for choice in sub_choices\n ]\n )\n llm_output = {\"token_usage\": token_usage, \"model_name\": self.model_name}\n return LLMResult(generations=generations, llm_output=llm_output)\n def stream(self, prompt: str, stop: Optional[List[str]] = None) -> Generator:\n \"\"\"Call OpenAI with streaming flag and return the resulting generator.\n BETA: this is a beta feature while we figure out the right abstraction.\n Once that happens, this interface could change.\n Args:\n prompt: The prompts to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n A generator representing the stream of tokens from OpenAI.\n Example:\n .. code-block:: python\n generator = openai.stream(\"Tell me a joke.\")\n for token in generator:\n yield token\n \"\"\"\n params = self.prep_streaming_params(stop)\n generator = self.client.create(prompt=prompt, **params)\n return generator\n def prep_streaming_params(self, stop: Optional[List[str]] = None) -> Dict[str, Any]:\n \"\"\"Prepare the params for streaming.\"\"\"\n params = self._invocation_params", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "9409707c4934-10", "text": "\"\"\"Prepare the params for streaming.\"\"\"\n params = self._invocation_params\n if \"best_of\" in params and params[\"best_of\"] != 1:\n raise ValueError(\"OpenAI only supports best_of == 1 for streaming\")\n if stop is not None:\n if \"stop\" in params:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params[\"stop\"] = stop\n params[\"stream\"] = True\n return params\n @property\n def _invocation_params(self) -> Dict[str, Any]:\n \"\"\"Get the parameters used to invoke the model.\"\"\"\n openai_creds: Dict[str, Any] = {\n \"api_key\": self.openai_api_key,\n \"api_base\": self.openai_api_base,\n \"organization\": self.openai_organization,\n }\n if self.openai_proxy:\n import openai\n openai.proxy = {\"http\": self.openai_proxy, \"https\": self.openai_proxy} # type: ignore[assignment] # noqa: E501\n return {**openai_creds, **self._default_params}\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_name\": self.model_name}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"openai\"\n def get_token_ids(self, text: str) -> List[int]:\n \"\"\"Get the token IDs using the tiktoken package.\"\"\"\n # tiktoken NOT supported for Python < 3.8\n if sys.version_info[1] < 8:\n return super().get_token_ids(text)", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "9409707c4934-11", "text": "return super().get_token_ids(text)\n try:\n import tiktoken\n except ImportError:\n raise ImportError(\n \"Could not import tiktoken python package. \"\n \"This is needed in order to calculate get_num_tokens. \"\n \"Please install it with `pip install tiktoken`.\"\n )\n enc = tiktoken.encoding_for_model(self.model_name)\n return enc.encode(\n text,\n allowed_special=self.allowed_special,\n disallowed_special=self.disallowed_special,\n )\n def modelname_to_contextsize(self, modelname: str) -> int:\n \"\"\"Calculate the maximum number of tokens possible to generate for a model.\n Args:\n modelname: The modelname we want to know the context size for.\n Returns:\n The maximum context size\n Example:\n .. code-block:: python\n max_tokens = openai.modelname_to_contextsize(\"text-davinci-003\")\n \"\"\"\n model_token_mapping = {\n \"gpt-4\": 8192,\n \"gpt-4-0314\": 8192,\n \"gpt-4-32k\": 32768,\n \"gpt-4-32k-0314\": 32768,\n \"gpt-3.5-turbo\": 4096,\n \"gpt-3.5-turbo-0301\": 4096,\n \"text-ada-001\": 2049,\n \"ada\": 2049,\n \"text-babbage-001\": 2049,\n \"babbage\": 2049,\n \"text-curie-001\": 2049,\n \"curie\": 2049,\n \"davinci\": 2049,", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "9409707c4934-12", "text": "\"davinci\": 2049,\n \"text-davinci-003\": 4097,\n \"text-davinci-002\": 4097,\n \"code-davinci-002\": 8001,\n \"code-davinci-001\": 8001,\n \"code-cushman-002\": 2048,\n \"code-cushman-001\": 2048,\n }\n # handling finetuned models\n if \"ft-\" in modelname:\n modelname = modelname.split(\":\")[0]\n context_size = model_token_mapping.get(modelname, None)\n if context_size is None:\n raise ValueError(\n f\"Unknown model: {modelname}. Please provide a valid OpenAI model name. \"\n \"Known models are: \" + \", \".join(model_token_mapping.keys())\n )\n return context_size\n def max_tokens_for_prompt(self, prompt: str) -> int:\n \"\"\"Calculate the maximum number of tokens possible to generate for a prompt.\n Args:\n prompt: The prompt to pass into the model.\n Returns:\n The maximum number of tokens to generate for a prompt.\n Example:\n .. code-block:: python\n max_tokens = openai.max_tokens_for_prompt(\"Tell me a joke.\")\n \"\"\"\n num_tokens = self.get_num_tokens(prompt)\n # get max context size for model by name\n max_size = self.modelname_to_contextsize(self.model_name)\n return max_size - num_tokens\n[docs]class OpenAI(BaseOpenAI):\n \"\"\"Wrapper around OpenAI large language models.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``OPENAI_API_KEY`` set with your API key.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
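+`max_tokens_for_prompt` above is simple arithmetic: the model's context size minus the tokens already consumed by the prompt. A sketch of the same calculation done directly with tiktoken, hard-coding the 4097-token context size listed for text-davinci-003 in the mapping above:
+.. code-block:: python
+    import tiktoken
+    context_size = 4097  # "text-davinci-003" entry in model_token_mapping
+    enc = tiktoken.encoding_for_model("text-davinci-003")
+    num_prompt_tokens = len(enc.encode("Tell me a joke."))
+    # Tokens left over for the completion
+    max_completion_tokens = context_size - num_prompt_tokens
+    print(max_completion_tokens)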
+{"id": "9409707c4934-13", "text": "environment variable ``OPENAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.llms import OpenAI\n openai = OpenAI(model_name=\"text-davinci-003\")\n \"\"\"\n @property\n def _invocation_params(self) -> Dict[str, Any]:\n return {**{\"model\": self.model_name}, **super()._invocation_params}\n[docs]class AzureOpenAI(BaseOpenAI):\n \"\"\"Wrapper around Azure-specific OpenAI large language models.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``OPENAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.llms import AzureOpenAI\n openai = AzureOpenAI(model_name=\"text-davinci-003\")\n \"\"\"\n deployment_name: str = \"\"\n \"\"\"Deployment name to use.\"\"\"\n openai_api_type: str = \"azure\"\n openai_api_version: str = \"\"\n @root_validator()\n def validate_azure_settings(cls, values: Dict) -> Dict:\n values[\"openai_api_version\"] = get_from_dict_or_env(\n values,\n \"openai_api_version\",\n \"OPENAI_API_VERSION\",\n )\n values[\"openai_api_type\"] = get_from_dict_or_env(\n values,\n \"openai_api_type\",\n \"OPENAI_API_TYPE\",\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "9409707c4934-14", "text": "\"openai_api_type\",\n \"OPENAI_API_TYPE\",\n )\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n return {\n **{\"deployment_name\": self.deployment_name},\n **super()._identifying_params,\n }\n @property\n def _invocation_params(self) -> Dict[str, Any]:\n openai_params = {\n \"engine\": self.deployment_name,\n \"api_type\": self.openai_api_type,\n \"api_version\": self.openai_api_version,\n }\n return {**openai_params, **super()._invocation_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"azure\"\n[docs]class OpenAIChat(BaseLLM):\n \"\"\"Wrapper around OpenAI Chat large language models.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``OPENAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.llms import OpenAIChat\n openaichat = OpenAIChat(model_name=\"gpt-3.5-turbo\")\n \"\"\"\n client: Any #: :meta private:\n model_name: str = \"gpt-3.5-turbo\"\n \"\"\"Model name to use.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not explicitly specified.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "9409707c4934-15", "text": "\"\"\"Holds any model parameters valid for `create` call not explicitly specified.\"\"\"\n openai_api_key: Optional[str] = None\n openai_api_base: Optional[str] = None\n # to support explicit proxy for OpenAI\n openai_proxy: Optional[str] = None\n max_retries: int = 6\n \"\"\"Maximum number of retries to make when generating.\"\"\"\n prefix_messages: List = Field(default_factory=list)\n \"\"\"Series of messages for Chat input.\"\"\"\n streaming: bool = False\n \"\"\"Whether to stream the results or not.\"\"\"\n allowed_special: Union[Literal[\"all\"], AbstractSet[str]] = set()\n \"\"\"Set of special tokens that are allowed.\"\"\"\n disallowed_special: Union[Literal[\"all\"], Collection[str]] = \"all\"\n \"\"\"Set of special tokens that are not allowed.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.ignore\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n openai_api_key = get_from_dict_or_env(", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "9409707c4934-16", "text": "openai_api_key = get_from_dict_or_env(\n values, \"openai_api_key\", \"OPENAI_API_KEY\"\n )\n openai_api_base = get_from_dict_or_env(\n values,\n \"openai_api_base\",\n \"OPENAI_API_BASE\",\n default=\"\",\n )\n openai_proxy = get_from_dict_or_env(\n values,\n \"openai_proxy\",\n \"OPENAI_PROXY\",\n default=\"\",\n )\n openai_organization = get_from_dict_or_env(\n values, \"openai_organization\", \"OPENAI_ORGANIZATION\", default=\"\"\n )\n try:\n import openai\n openai.api_key = openai_api_key\n if openai_api_base:\n openai.api_base = openai_api_base\n if openai_organization:\n openai.organization = openai_organization\n if openai_proxy:\n openai.proxy = {\"http\": openai_proxy, \"https\": openai_proxy} # type: ignore[assignment] # noqa: E501\n except ImportError:\n raise ImportError(\n \"Could not import openai python package. \"\n \"Please install it with `pip install openai`.\"\n )\n try:\n values[\"client\"] = openai.ChatCompletion\n except AttributeError:\n raise ValueError(\n \"`openai` has no `ChatCompletion` attribute, this is likely \"\n \"due to an old version of the openai package. Try upgrading it \"\n \"with `pip install --upgrade openai`.\"\n )\n warnings.warn(\n \"You are trying to use a chat model. This way of initializing it is \"\n \"no longer supported. Instead, please use: \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "9409707c4934-17", "text": "\"no longer supported. Instead, please use: \"\n \"`from langchain.chat_models import ChatOpenAI`\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling OpenAI API.\"\"\"\n return self.model_kwargs\n def _get_chat_params(\n self, prompts: List[str], stop: Optional[List[str]] = None\n ) -> Tuple:\n if len(prompts) > 1:\n raise ValueError(\n f\"OpenAIChat currently only supports single prompt, got {prompts}\"\n )\n messages = self.prefix_messages + [{\"role\": \"user\", \"content\": prompts[0]}]\n params: Dict[str, Any] = {**{\"model\": self.model_name}, **self._default_params}\n if stop is not None:\n if \"stop\" in params:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params[\"stop\"] = stop\n if params.get(\"max_tokens\") == -1:\n # for ChatGPT api, omitting max_tokens is equivalent to having no limit\n del params[\"max_tokens\"]\n return messages, params\n def _generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> LLMResult:\n messages, params = self._get_chat_params(prompts, stop)\n if self.streaming:\n response = \"\"\n params[\"stream\"] = True\n for stream_resp in completion_with_retry(self, messages=messages, **params):\n token = stream_resp[\"choices\"][0][\"delta\"].get(\"content\", \"\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "9409707c4934-18", "text": "token = stream_resp[\"choices\"][0][\"delta\"].get(\"content\", \"\")\n response += token\n if run_manager:\n run_manager.on_llm_new_token(\n token,\n )\n return LLMResult(\n generations=[[Generation(text=response)]],\n )\n else:\n full_response = completion_with_retry(self, messages=messages, **params)\n llm_output = {\n \"token_usage\": full_response[\"usage\"],\n \"model_name\": self.model_name,\n }\n return LLMResult(\n generations=[\n [Generation(text=full_response[\"choices\"][0][\"message\"][\"content\"])]\n ],\n llm_output=llm_output,\n )\n async def _agenerate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n ) -> LLMResult:\n messages, params = self._get_chat_params(prompts, stop)\n if self.streaming:\n response = \"\"\n params[\"stream\"] = True\n async for stream_resp in await acompletion_with_retry(\n self, messages=messages, **params\n ):\n token = stream_resp[\"choices\"][0][\"delta\"].get(\"content\", \"\")\n response += token\n if run_manager:\n await run_manager.on_llm_new_token(\n token,\n )\n return LLMResult(\n generations=[[Generation(text=response)]],\n )\n else:\n full_response = await acompletion_with_retry(\n self, messages=messages, **params\n )\n llm_output = {\n \"token_usage\": full_response[\"usage\"],", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "9409707c4934-19", "text": "llm_output = {\n \"token_usage\": full_response[\"usage\"],\n \"model_name\": self.model_name,\n }\n return LLMResult(\n generations=[\n [Generation(text=full_response[\"choices\"][0][\"message\"][\"content\"])]\n ],\n llm_output=llm_output,\n )\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_name\": self.model_name}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"openai-chat\"\n[docs] def get_token_ids(self, text: str) -> List[int]:\n \"\"\"Get the token IDs using the tiktoken package.\"\"\"\n # tiktoken NOT supported for Python < 3.8\n if sys.version_info[1] < 8:\n return super().get_token_ids(text)\n try:\n import tiktoken\n except ImportError:\n raise ImportError(\n \"Could not import tiktoken python package. \"\n \"This is needed in order to calculate get_num_tokens. \"\n \"Please install it with `pip install tiktoken`.\"\n )\n enc = tiktoken.encoding_for_model(self.model_name)\n return enc.encode(\n text,\n allowed_special=self.allowed_special,\n disallowed_special=self.disallowed_special,\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/openai.html"}
+{"id": "a21ad27f64e6-0", "text": "Source code for langchain.llms.mosaicml\n\"\"\"Wrapper around MosaicML APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nINSTRUCTION_KEY = \"### Instruction:\"\nRESPONSE_KEY = \"### Response:\"\nINTRO_BLURB = (\n \"Below is an instruction that describes a task. \"\n \"Write a response that appropriately completes the request.\"\n)\nPROMPT_FOR_GENERATION_FORMAT = \"\"\"{intro}\n{instruction_key}\n{instruction}\n{response_key}\n\"\"\".format(\n intro=INTRO_BLURB,\n instruction_key=INSTRUCTION_KEY,\n instruction=\"{instruction}\",\n response_key=RESPONSE_KEY,\n)\n[docs]class MosaicML(LLM):\n \"\"\"Wrapper around MosaicML's LLM inference service.\n To use, you should have the\n environment variable ``MOSAICML_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.llms import MosaicML\n endpoint_url = (\n \"https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict\"\n )\n mosaic_llm = MosaicML(\n endpoint_url=endpoint_url,\n mosaicml_api_token=\"my-api-key\"\n )\n \"\"\"\n endpoint_url: str = (", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/mosaicml.html"}
+{"id": "a21ad27f64e6-1", "text": ")\n \"\"\"\n endpoint_url: str = (\n \"https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict\"\n )\n \"\"\"Endpoint URL to use.\"\"\"\n inject_instruction_format: bool = False\n \"\"\"Whether to inject the instruction format into the prompt.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n retry_sleep: float = 1.0\n \"\"\"How long to try sleeping for if a rate limit is encountered\"\"\"\n mosaicml_api_token: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n mosaicml_api_token = get_from_dict_or_env(\n values, \"mosaicml_api_token\", \"MOSAICML_API_TOKEN\"\n )\n values[\"mosaicml_api_token\"] = mosaicml_api_token\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"endpoint_url\": self.endpoint_url},\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"mosaicml\"\n def _transform_prompt(self, prompt: str) -> str:\n \"\"\"Transform prompt.\"\"\"\n if self.inject_instruction_format:\n prompt = PROMPT_FOR_GENERATION_FORMAT.format(\n instruction=prompt,\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/mosaicml.html"}
+{"id": "a21ad27f64e6-2", "text": "instruction=prompt,\n )\n return prompt\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n is_retry: bool = False,\n ) -> str:\n \"\"\"Call out to a MosaicML LLM inference endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = mosaic_llm(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n prompt = self._transform_prompt(prompt)\n payload = {\"input_strings\": [prompt]}\n payload.update(_model_kwargs)\n # HTTP headers for authorization\n headers = {\n \"Authorization\": f\"{self.mosaicml_api_token}\",\n \"Content-Type\": \"application/json\",\n }\n # send request\n try:\n response = requests.post(self.endpoint_url, headers=headers, json=payload)\n except requests.exceptions.RequestException as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n try:\n parsed_response = response.json()\n if \"error\" in parsed_response:\n # if we get rate limited, try sleeping for 1 second\n if (\n not is_retry\n and \"rate limit exceeded\" in parsed_response[\"error\"].lower()\n ):\n import time\n time.sleep(self.retry_sleep)\n return self._call(prompt, stop, run_manager, is_retry=True)\n raise ValueError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/mosaicml.html"}
+{"id": "a21ad27f64e6-3", "text": "raise ValueError(\n f\"Error raised by inference API: {parsed_response['error']}\"\n )\n if \"data\" not in parsed_response:\n raise ValueError(\n f\"Error raised by inference API, no key data: {parsed_response}\"\n )\n generated_text = parsed_response[\"data\"]\n except requests.exceptions.JSONDecodeError as e:\n raise ValueError(\n f\"Error raised by inference API: {e}.\\nResponse: {response.text}\"\n )\n text = generated_text[0][len(prompt) :]\n # TODO: replace when MosaicML supports custom stop tokens natively\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/mosaicml.html"}
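+When `inject_instruction_format` is set, `_transform_prompt` above wraps the raw prompt in the instruct template defined at the top of the module, so the endpoint sees an instruction/response scaffold. A flattened equivalent of that template, reusing the module's own constants to show the resulting string:
+.. code-block:: python
+    INSTRUCTION_KEY = "### Instruction:"
+    RESPONSE_KEY = "### Response:"
+    INTRO_BLURB = (
+        "Below is an instruction that describes a task. "
+        "Write a response that appropriately completes the request."
+    )
+    PROMPT_FOR_GENERATION_FORMAT = (
+        INTRO_BLURB + "\n" + INSTRUCTION_KEY + "\n{instruction}\n" + RESPONSE_KEY + "\n"
+    )
+    # "Tell me a joke." becomes an instruct-formatted prompt ending in "### Response:"
+    print(PROMPT_FOR_GENERATION_FORMAT.format(instruction="Tell me a joke."))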
+{"id": "a317a43f272f-0", "text": "Source code for langchain.llms.huggingface_text_gen_inference\n\"\"\"Wrapper around Huggingface text generation inference API.\"\"\"\nfrom functools import partial\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\n[docs]class HuggingFaceTextGenInference(LLM):\n \"\"\"\n HuggingFace text generation inference API.\n This class is a wrapper around the HuggingFace text generation inference API.\n It is used to generate text from a given prompt.\n Attributes:\n - max_new_tokens: The maximum number of tokens to generate.\n - top_k: The number of top-k tokens to consider when generating text.\n - top_p: The cumulative probability threshold for generating text.\n - typical_p: The typical probability threshold for generating text.\n - temperature: The temperature to use when generating text.\n - repetition_penalty: The repetition penalty to use when generating text.\n - stop_sequences: A list of stop sequences to use when generating text.\n - seed: The seed to use when generating text.\n - inference_server_url: The URL of the inference server to use.\n - timeout: The timeout value in seconds to use while connecting to inference server.\n - client: The client object used to communicate with the inference server.\n Methods:\n - _call: Generates text based on a given prompt and stop sequences.\n - _llm_type: Returns the type of LLM.\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n # Basic Example (no streaming)\n llm = HuggingFaceTextGenInference(\n inference_server_url = \"http://localhost:8010/\",", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html"}
+{"id": "a317a43f272f-1", "text": "inference_server_url = \"http://localhost:8010/\",\n max_new_tokens = 512,\n top_k = 10,\n top_p = 0.95,\n typical_p = 0.95,\n temperature = 0.01,\n repetition_penalty = 1.03,\n )\n print(llm(\"What is Deep Learning?\"))\n \n # Streaming response example\n from langchain.callbacks import streaming_stdout\n \n callbacks = [streaming_stdout.StreamingStdOutCallbackHandler()]\n llm = HuggingFaceTextGenInference(\n inference_server_url = \"http://localhost:8010/\",\n max_new_tokens = 512,\n top_k = 10,\n top_p = 0.95,\n typical_p = 0.95,\n temperature = 0.01,\n repetition_penalty = 1.03,\n callbacks = callbacks,\n stream = True\n )\n print(llm(\"What is Deep Learning?\"))\n \n \"\"\"\n max_new_tokens: int = 512\n top_k: Optional[int] = None\n top_p: Optional[float] = 0.95\n typical_p: Optional[float] = 0.95\n temperature: float = 0.8\n repetition_penalty: Optional[float] = None\n stop_sequences: List[str] = Field(default_factory=list)\n seed: Optional[int] = None\n inference_server_url: str = \"\"\n timeout: int = 120\n stream: bool = False\n client: Any\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html"}
+{"id": "a317a43f272f-2", "text": "@root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that python package exists in environment.\"\"\"\n try:\n import text_generation\n values[\"client\"] = text_generation.Client(\n values[\"inference_server_url\"], timeout=values[\"timeout\"]\n )\n except ImportError:\n raise ImportError(\n \"Could not import text_generation python package. \"\n \"Please install it with `pip install text_generation`.\"\n )\n return values\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"hf_textgen_inference\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n if stop is None:\n stop = self.stop_sequences\n else:\n stop += self.stop_sequences\n if not self.stream:\n res = self.client.generate(\n prompt,\n stop_sequences=stop,\n max_new_tokens=self.max_new_tokens,\n top_k=self.top_k,\n top_p=self.top_p,\n typical_p=self.typical_p,\n temperature=self.temperature,\n repetition_penalty=self.repetition_penalty,\n seed=self.seed,\n )\n # remove stop sequences from the end of the generated text\n for stop_seq in stop:\n if stop_seq in res.generated_text:\n res.generated_text = res.generated_text[\n : res.generated_text.index(stop_seq)\n ]\n text = res.generated_text\n else:\n text_callback = None\n if run_manager:\n text_callback = partial(", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html"}
+{"id": "a317a43f272f-3", "text": "text_callback = None\n if run_manager:\n text_callback = partial(\n run_manager.on_llm_new_token, verbose=self.verbose\n )\n params = {\n \"stop_sequences\": stop,\n \"max_new_tokens\": self.max_new_tokens,\n \"top_k\": self.top_k,\n \"top_p\": self.top_p,\n \"typical_p\": self.typical_p,\n \"temperature\": self.temperature,\n \"repetition_penalty\": self.repetition_penalty,\n \"seed\": self.seed,\n }\n text = \"\"\n for res in self.client.generate_stream(prompt, **params):\n token = res.token\n is_stop = False\n for stop_seq in stop:\n if stop_seq in token.text:\n is_stop = True\n break\n if is_stop:\n break\n if not token.special:\n if text_callback:\n text_callback(token.text)\n text += token.text\n return text", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html"}
+{"id": "44c87699d824-0", "text": "Source code for langchain.llms.aviary\n\"\"\"Wrapper around Aviary\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nTIMEOUT = 60\n[docs]class Aviary(LLM):\n \"\"\"Allow you to use an Aviary.\n Aviary is a backend for hosted models. You can\n find out more about aviary at\n http://github.com/ray-project/aviary\n Has no dependencies, since it connects to backend\n directly.\n To get a list of the models supported on an\n aviary, follow the instructions on the web site to\n install the aviary CLI and then use:\n `aviary models`\n You must at least specify the environment\n variable or parameter AVIARY_URL.\n You may optionally specify the environment variable\n or parameter AVIARY_TOKEN.\n Example:\n .. code-block:: python\n from langchain.llms import Aviary\n light = Aviary(aviary_url='AVIARY_URL',\n model='amazon/LightGPT')\n result = light.predict('How do you make fried rice?')\n \"\"\"\n model: str\n aviary_url: str\n aviary_token: str = Field(\"\", exclude=True)\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/aviary.html"}
+{"id": "44c87699d824-1", "text": "\"\"\"Validate that api key and python package exists in environment.\"\"\"\n aviary_url = get_from_dict_or_env(values, \"aviary_url\", \"AVIARY_URL\")\n if not aviary_url.endswith(\"/\"):\n aviary_url += \"/\"\n values[\"aviary_url\"] = aviary_url\n aviary_token = get_from_dict_or_env(\n values, \"aviary_token\", \"AVIARY_TOKEN\", default=\"\"\n )\n values[\"aviary_token\"] = aviary_token\n aviary_endpoint = aviary_url + \"models\"\n headers = {\"Authorization\": f\"Bearer {aviary_token}\"} if aviary_token else {}\n try:\n response = requests.get(aviary_endpoint, headers=headers)\n result = response.json()\n # Confirm model is available\n if values[\"model\"] not in result:\n raise ValueError(\n f\"{aviary_url} does not support model {values['model']}.\"\n )\n except requests.exceptions.RequestException as e:\n raise ValueError(e)\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"aviary_url\": self.aviary_url,\n \"aviary_token\": self.aviary_token,\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"aviary\"\n @property\n def headers(self) -> Dict[str, str]:\n if self.aviary_token:\n return {\"Authorization\": f\"Bearer {self.aviary_token}\"}\n else:\n return {}\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/aviary.html"}
+{"id": "44c87699d824-2", "text": "prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call out to Aviary\n Args:\n prompt: The prompt to pass into the model.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = aviary(\"Tell me a joke.\")\n \"\"\"\n url = self.aviary_url + \"query/\" + self.model.replace(\"/\", \"--\")\n response = requests.post(\n url,\n headers=self.headers,\n json={\"prompt\": prompt},\n timeout=TIMEOUT,\n )\n try:\n text = response.json()[self.model][\"generated_text\"]\n except requests.JSONDecodeError as e:\n raise ValueError(\n f\"Error decoding JSON from {url}. Text response: {response.text}\",\n ) from e\n if stop:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/aviary.html"}
+{"id": "8f9321254114-0", "text": "Source code for langchain.llms.pipelineai\n\"\"\"Wrapper around Pipeline Cloud API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import BaseModel, Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class PipelineAI(LLM, BaseModel):\n \"\"\"Wrapper around PipelineAI large language models.\n To use, you should have the ``pipeline-ai`` python package installed,\n and the environment variable ``PIPELINE_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain import PipelineAI\n pipeline = PipelineAI(pipeline_key=\"\")\n \"\"\"\n pipeline_key: str = \"\"\n \"\"\"The id or tag of the target pipeline\"\"\"\n pipeline_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any pipeline parameters valid for `create` call not\n explicitly specified.\"\"\"\n pipeline_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"pipeline_kwargs\", {})\n for field_name in list(values):", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/pipelineai.html"}
+{"id": "8f9321254114-1", "text": "extra = values.get(\"pipeline_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transferred to pipeline_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"pipeline_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n pipeline_api_key = get_from_dict_or_env(\n values, \"pipeline_api_key\", \"PIPELINE_API_KEY\"\n )\n values[\"pipeline_api_key\"] = pipeline_api_key\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"pipeline_key\": self.pipeline_key},\n **{\"pipeline_kwargs\": self.pipeline_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"pipeline_ai\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call to Pipeline Cloud endpoint.\"\"\"\n try:\n from pipeline import PipelineCloud\n except ImportError:\n raise ValueError(\n \"Could not import pipeline-ai python package. \"\n \"Please install it with `pip install pipeline-ai`.\"\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/pipelineai.html"}
+{"id": "8f9321254114-2", "text": "\"Please install it with `pip install pipeline-ai`.\"\n )\n client = PipelineCloud(token=self.pipeline_api_key)\n params = self.pipeline_kwargs or {}\n run = client.run_pipeline(self.pipeline_key, [prompt, params])\n try:\n text = run.result_preview[0][0]\n except AttributeError:\n raise AttributeError(\n f\"A pipeline run should have a `result_preview` attribute. \"\n f\"Run was: {run}\"\n )\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the pipeline parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/pipelineai.html"}
+{"id": "fca098a09614-0", "text": "Source code for langchain.llms.forefrontai\n\"\"\"Wrapper around ForefrontAI APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\n[docs]class ForefrontAI(LLM):\n \"\"\"Wrapper around ForefrontAI large language models.\n To use, you should have the environment variable ``FOREFRONTAI_API_KEY``\n set with your API key.\n Example:\n .. code-block:: python\n from langchain.llms import ForefrontAI\n forefrontai = ForefrontAI(endpoint_url=\"\")\n \"\"\"\n endpoint_url: str = \"\"\n \"\"\"Endpoint URL to use.\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use.\"\"\"\n length: int = 256\n \"\"\"The maximum number of tokens to generate in the completion.\"\"\"\n top_p: float = 1.0\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n top_k: int = 40\n \"\"\"The number of highest probability vocabulary tokens to\n keep for top-k-filtering.\"\"\"\n repetition_penalty: int = 1\n \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n forefrontai_api_key: Optional[str] = None\n base_url: Optional[str] = None\n \"\"\"Base url to use, if None decides based on model name.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/forefrontai.html"}
+{"id": "fca098a09614-1", "text": "@root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key exists in environment.\"\"\"\n forefrontai_api_key = get_from_dict_or_env(\n values, \"forefrontai_api_key\", \"FOREFRONTAI_API_KEY\"\n )\n values[\"forefrontai_api_key\"] = forefrontai_api_key\n return values\n @property\n def _default_params(self) -> Mapping[str, Any]:\n \"\"\"Get the default parameters for calling ForefrontAI API.\"\"\"\n return {\n \"temperature\": self.temperature,\n \"length\": self.length,\n \"top_p\": self.top_p,\n \"top_k\": self.top_k,\n \"repetition_penalty\": self.repetition_penalty,\n }\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"endpoint_url\": self.endpoint_url}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"forefrontai\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call out to ForefrontAI's complete endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = forefrontai(\"Tell me a joke.\")\n \"\"\"\n response = requests.post(\n url=self.endpoint_url,", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/forefrontai.html"}
+{"id": "fca098a09614-2", "text": "\"\"\"\n response = requests.post(\n url=self.endpoint_url,\n headers={\n \"Authorization\": f\"Bearer {self.forefrontai_api_key}\",\n \"Content-Type\": \"application/json\",\n },\n json={\"text\": prompt, **self._default_params},\n )\n response_json = response.json()\n text = response_json[\"result\"][0][\"completion\"]\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/forefrontai.html"}
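+Several of these wrappers finish by calling `enforce_stop_tokens(text, stop)` from `langchain.llms.utils` because the backend does not honor stop sequences itself. Its effect is to truncate the text at the first occurrence of any stop sequence; a minimal regex-based sketch of that behavior:
+.. code-block:: python
+    import re
+    from typing import List
+    def enforce_stop_tokens(text: str, stop: List[str]) -> str:
+        # Cut off the text as soon as any stop sequence occurs
+        return re.split("|".join(map(re.escape, stop)), text)[0]
+    print(enforce_stop_tokens("Answer: 42\nObservation: done", ["\nObservation:"]))
+    # -> "Answer: 42"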
+{"id": "4e4b2eb83482-0", "text": "Source code for langchain.llms.huggingface_pipeline\n\"\"\"Wrapper around HuggingFace Pipeline APIs.\"\"\"\nimport importlib.util\nimport logging\nfrom typing import Any, List, Mapping, Optional\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nDEFAULT_MODEL_ID = \"gpt2\"\nDEFAULT_TASK = \"text-generation\"\nVALID_TASKS = (\"text2text-generation\", \"text-generation\", \"summarization\")\nlogger = logging.getLogger(__name__)\n[docs]class HuggingFacePipeline(LLM):\n \"\"\"Wrapper around HuggingFace Pipeline API.\n To use, you should have the ``transformers`` python package installed.\n Only supports `text-generation`, `text2text-generation` and `summarization` for now.\n Example using from_model_id:\n .. code-block:: python\n from langchain.llms import HuggingFacePipeline\n hf = HuggingFacePipeline.from_model_id(\n model_id=\"gpt2\",\n task=\"text-generation\",\n pipeline_kwargs={\"max_new_tokens\": 10},\n )\n Example passing pipeline in directly:\n .. code-block:: python\n from langchain.llms import HuggingFacePipeline\n from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n model_id = \"gpt2\"\n tokenizer = AutoTokenizer.from_pretrained(model_id)\n model = AutoModelForCausalLM.from_pretrained(model_id)\n pipe = pipeline(\n \"text-generation\", model=model, tokenizer=tokenizer, max_new_tokens=10\n )\n hf = HuggingFacePipeline(pipeline=pipe)\n \"\"\"\n pipeline: Any #: :meta private:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_pipeline.html"}
+{"id": "4e4b2eb83482-1", "text": "\"\"\"\n pipeline: Any #: :meta private:\n model_id: str = DEFAULT_MODEL_ID\n \"\"\"Model name to use.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments passed to the model.\"\"\"\n pipeline_kwargs: Optional[dict] = None\n \"\"\"Key word arguments passed to the pipeline.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @classmethod\n def from_model_id(\n cls,\n model_id: str,\n task: str,\n device: int = -1,\n model_kwargs: Optional[dict] = None,\n pipeline_kwargs: Optional[dict] = None,\n **kwargs: Any,\n ) -> LLM:\n \"\"\"Construct the pipeline object from model_id and task.\"\"\"\n try:\n from transformers import (\n AutoModelForCausalLM,\n AutoModelForSeq2SeqLM,\n AutoTokenizer,\n )\n from transformers import pipeline as hf_pipeline\n except ImportError:\n raise ValueError(\n \"Could not import transformers python package. \"\n \"Please install it with `pip install transformers`.\"\n )\n _model_kwargs = model_kwargs or {}\n tokenizer = AutoTokenizer.from_pretrained(model_id, **_model_kwargs)\n try:\n if task == \"text-generation\":\n model = AutoModelForCausalLM.from_pretrained(model_id, **_model_kwargs)\n elif task in (\"text2text-generation\", \"summarization\"):\n model = AutoModelForSeq2SeqLM.from_pretrained(model_id, **_model_kwargs)\n else:\n raise ValueError(\n f\"Got invalid task {task}, \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_pipeline.html"}
+{"id": "4e4b2eb83482-2", "text": "else:\n raise ValueError(\n f\"Got invalid task {task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n except ImportError as e:\n raise ValueError(\n f\"Could not load the {task} model due to missing dependencies.\"\n ) from e\n if importlib.util.find_spec(\"torch\") is not None:\n import torch\n cuda_device_count = torch.cuda.device_count()\n if device < -1 or (device >= cuda_device_count):\n raise ValueError(\n f\"Got device=={device}, \"\n f\"device is required to be within [-1, {cuda_device_count})\"\n )\n if device < 0 and cuda_device_count > 0:\n logger.warning(\n \"Device has %d GPUs available. \"\n \"Provide device={deviceId} to `from_model_id` to use available\"\n \"GPUs for execution. deviceId is -1 (default) for CPU and \"\n \"can be a positive integer associated with CUDA device id.\",\n cuda_device_count,\n )\n if \"trust_remote_code\" in _model_kwargs:\n _model_kwargs = {\n k: v for k, v in _model_kwargs.items() if k != \"trust_remote_code\"\n }\n _pipeline_kwargs = pipeline_kwargs or {}\n pipeline = hf_pipeline(\n task=task,\n model=model,\n tokenizer=tokenizer,\n device=device,\n model_kwargs=_model_kwargs,\n **_pipeline_kwargs,\n )\n if pipeline.task not in VALID_TASKS:\n raise ValueError(\n f\"Got invalid task {pipeline.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n return cls(", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_pipeline.html"}
+{"id": "4e4b2eb83482-3", "text": ")\n return cls(\n pipeline=pipeline,\n model_id=model_id,\n model_kwargs=_model_kwargs,\n pipeline_kwargs=_pipeline_kwargs,\n **kwargs,\n )\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model_id\": self.model_id,\n \"model_kwargs\": self.model_kwargs,\n \"pipeline_kwargs\": self.pipeline_kwargs,\n }\n @property\n def _llm_type(self) -> str:\n return \"huggingface_pipeline\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n response = self.pipeline(prompt)\n if self.pipeline.task == \"text-generation\":\n # Text generation return includes the starter text.\n text = response[0][\"generated_text\"][len(prompt) :]\n elif self.pipeline.task == \"text2text-generation\":\n text = response[0][\"generated_text\"]\n elif self.pipeline.task == \"summarization\":\n text = response[0][\"summary_text\"]\n else:\n raise ValueError(\n f\"Got invalid task {self.pipeline.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n if stop is not None:\n # This is a bit hacky, but I can't figure out a better way to enforce\n # stop tokens when making calls to huggingface_hub.\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_pipeline.html"}
+{"id": "218d51ba3501-0", "text": "Source code for langchain.llms.sagemaker_endpoint\n\"\"\"Wrapper around Sagemaker InvokeEndpoint API.\"\"\"\nfrom abc import abstractmethod\nfrom typing import Any, Dict, Generic, List, Mapping, Optional, TypeVar, Union\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nINPUT_TYPE = TypeVar(\"INPUT_TYPE\", bound=Union[str, List[str]])\nOUTPUT_TYPE = TypeVar(\"OUTPUT_TYPE\", bound=Union[str, List[List[float]]])\nclass ContentHandlerBase(Generic[INPUT_TYPE, OUTPUT_TYPE]):\n \"\"\"A handler class to transform input from LLM to a\n format that SageMaker endpoint expects. Similarly,\n the class also handles transforming output from the\n SageMaker endpoint to a format that LLM class expects.\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n class ContentHandler(ContentHandlerBase):\n content_type = \"application/json\"\n accepts = \"application/json\"\n def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:\n input_str = json.dumps({\"prompt\": prompt, **model_kwargs})\n return input_str.encode('utf-8')\n \n def transform_output(self, output: bytes) -> str:\n response_json = json.loads(output.read().decode(\"utf-8\"))\n return response_json[0][\"generated_text\"]\n \"\"\"\n content_type: Optional[str] = \"text/plain\"\n \"\"\"The MIME type of the input data passed to endpoint\"\"\"\n accepts: Optional[str] = \"text/plain\"\n \"\"\"The MIME type of the response data returned from endpoint\"\"\"\n @abstractmethod", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html"}
+{"id": "218d51ba3501-1", "text": "\"\"\"The MIME type of the response data returned from endpoint\"\"\"\n @abstractmethod\n def transform_input(self, prompt: INPUT_TYPE, model_kwargs: Dict) -> bytes:\n \"\"\"Transforms the input to a format that the model can accept\n as the request Body. Should return bytes or a seekable file\n like object in the format specified in the content_type\n request header.\n \"\"\"\n @abstractmethod\n def transform_output(self, output: bytes) -> OUTPUT_TYPE:\n \"\"\"Transforms the output from the model to a string that\n the LLM class expects.\n \"\"\"\nclass LLMContentHandler(ContentHandlerBase[str, str]):\n \"\"\"Content handler for LLM class.\"\"\"\n[docs]class SagemakerEndpoint(LLM):\n \"\"\"Wrapper around custom Sagemaker Inference Endpoints.\n To use, you must supply the endpoint name from your deployed\n Sagemaker model & the region where it is deployed.\n To authenticate, the AWS client uses the following methods to\n automatically load credentials:\n https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n If a specific credential profile should be used, you must pass\n the name of the profile from the ~/.aws/credentials file that is to be used.\n Make sure the credentials / roles used have the required policies to\n access the Sagemaker endpoint.\n See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n from langchain import SagemakerEndpoint\n endpoint_name = (\n \"my-endpoint-name\"\n )\n region_name = (\n \"us-west-2\"\n )\n credentials_profile_name = (\n \"default\"\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html"}
+{"id": "218d51ba3501-2", "text": ")\n credentials_profile_name = (\n \"default\"\n )\n se = SagemakerEndpoint(\n endpoint_name=endpoint_name,\n region_name=region_name,\n credentials_profile_name=credentials_profile_name\n )\n \"\"\"\n client: Any #: :meta private:\n endpoint_name: str = \"\"\n \"\"\"The name of the endpoint from the deployed Sagemaker model.\n Must be unique within an AWS Region.\"\"\"\n region_name: str = \"\"\n \"\"\"The AWS region where the Sagemaker model is deployed, e.g. `us-west-2`.\"\"\"\n credentials_profile_name: Optional[str] = None\n \"\"\"The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\n has either access keys or role information specified.\n If not specified, the default credential profile or, if on an EC2 instance,\n credentials from IMDS will be used.\n See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n \"\"\"\n content_handler: LLMContentHandler\n \"\"\"The content handler class that provides the input and\n output transform functions to handle formats between LLM\n and the endpoint.\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n from langchain.llms.sagemaker_endpoint import LLMContentHandler\n class ContentHandler(LLMContentHandler):\n content_type = \"application/json\"\n accepts = \"application/json\"\n def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:\n input_str = json.dumps({\"prompt\": prompt, **model_kwargs})\n return input_str.encode('utf-8')\n \n def transform_output(self, output: bytes) -> str:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html"}
+{"id": "218d51ba3501-3", "text": "def transform_output(self, output: bytes) -> str:\n response_json = json.loads(output.read().decode(\"utf-8\"))\n return response_json[0][\"generated_text\"]\n \"\"\"\n model_kwargs: Optional[Dict] = None\n \"\"\"Keyword arguments to pass to the model.\"\"\"\n endpoint_kwargs: Optional[Dict] = None\n \"\"\"Optional attributes passed to the invoke_endpoint\n function. See boto3 docs for more info.\n \"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that AWS credentials and the boto3 python package exist in the environment.\"\"\"\n try:\n import boto3\n try:\n if values[\"credentials_profile_name\"] is not None:\n session = boto3.Session(\n profile_name=values[\"credentials_profile_name\"]\n )\n else:\n # use default credentials\n session = boto3.Session()\n values[\"client\"] = session.client(\n \"sagemaker-runtime\", region_name=values[\"region_name\"]\n )\n except Exception as e:\n raise ValueError(\n \"Could not load credentials to authenticate with AWS client. \"\n \"Please check that credentials in the specified \"\n \"profile name are valid.\"\n ) from e\n except ImportError:\n raise ImportError(\n \"Could not import boto3 python package. \"\n \"Please install it with `pip install boto3`.\"\n )\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html"}
+{"id": "218d51ba3501-4", "text": "@property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"endpoint_name\": self.endpoint_name},\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"sagemaker_endpoint\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call out to Sagemaker inference endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = se(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n _endpoint_kwargs = self.endpoint_kwargs or {}\n body = self.content_handler.transform_input(prompt, _model_kwargs)\n content_type = self.content_handler.content_type\n accepts = self.content_handler.accepts\n # send request\n try:\n response = self.client.invoke_endpoint(\n EndpointName=self.endpoint_name,\n Body=body,\n ContentType=content_type,\n Accept=accepts,\n **_endpoint_kwargs,\n )\n except Exception as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n text = self.content_handler.transform_output(response[\"Body\"])\n if stop is not None:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html"}
+{"id": "218d51ba3501-5", "text": "if stop is not None:\n # This is a bit hacky, but I can't figure out a better way to enforce\n # stop tokens when making calls to the sagemaker endpoint.\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html"}
+{"id": "8453e6974cb9-0", "text": "Source code for langchain.llms.google_palm\n\"\"\"Wrapper around Google's PaLM Text APIs.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Callable, Dict, List, Optional\nfrom pydantic import BaseModel, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms import BaseLLM\nfrom langchain.schema import Generation, LLMResult\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef _create_retry_decorator() -> Callable[[Any], Any]:\n \"\"\"Returns a tenacity retry decorator, preconfigured to handle PaLM exceptions\"\"\"\n try:\n import google.api_core.exceptions\n except ImportError:\n raise ImportError(\n \"Could not import google-api-core python package. \"\n \"Please install it with `pip install google-api-core`.\"\n )\n multiplier = 2\n min_seconds = 1\n max_seconds = 60\n max_retries = 10\n return retry(\n reraise=True,\n stop=stop_after_attempt(max_retries),\n wait=wait_exponential(multiplier=multiplier, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(google.api_core.exceptions.ResourceExhausted)\n | retry_if_exception_type(google.api_core.exceptions.ServiceUnavailable)\n | retry_if_exception_type(google.api_core.exceptions.GoogleAPIError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/google_palm.html"}
+{"id": "8453e6974cb9-1", "text": "),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\ndef generate_with_retry(llm: GooglePalm, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the completion call.\"\"\"\n retry_decorator = _create_retry_decorator()\n @retry_decorator\n def _generate_with_retry(**kwargs: Any) -> Any:\n return llm.client.generate_text(**kwargs)\n return _generate_with_retry(**kwargs)\ndef _strip_erroneous_leading_spaces(text: str) -> str:\n \"\"\"Strip erroneous leading spaces from text.\n The PaLM API will sometimes erroneously return a single leading space in all\n lines > 1. This function strips that space.\n \"\"\"\n has_leading_space = all(not line or line[0] == \" \" for line in text.split(\"\\n\")[1:])\n if has_leading_space:\n return text.replace(\"\\n \", \"\\n\")\n else:\n return text\n[docs]class GooglePalm(BaseLLM, BaseModel):\n client: Any #: :meta private:\n google_api_key: Optional[str]\n model_name: str = \"models/text-bison-001\"\n \"\"\"Model name to use.\"\"\"\n temperature: float = 0.7\n \"\"\"Run inference with this temperature. Must be in the closed interval\n [0.0, 1.0].\"\"\"\n top_p: Optional[float] = None\n \"\"\"Decode using nucleus sampling: consider the smallest set of tokens whose\n probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].\"\"\"\n top_k: Optional[int] = None\n \"\"\"Decode using top-k sampling: consider the set of top_k most probable tokens.\n Must be positive.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/google_palm.html"}
+{"id": "8453e6974cb9-2", "text": "Must be positive.\"\"\"\n max_output_tokens: Optional[int] = None\n \"\"\"Maximum number of tokens to include in a candidate. Must be greater than zero.\n If unset, will default to 64.\"\"\"\n n: int = 1\n \"\"\"Number of chat completions to generate for each prompt. Note that the API may\n not return the full n completions if duplicates are generated.\"\"\"\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exist.\"\"\"\n google_api_key = get_from_dict_or_env(\n values, \"google_api_key\", \"GOOGLE_API_KEY\"\n )\n try:\n import google.generativeai as genai\n genai.configure(api_key=google_api_key)\n except ImportError:\n raise ImportError(\n \"Could not import google-generativeai python package. \"\n \"Please install it with `pip install google-generativeai`.\"\n )\n values[\"client\"] = genai\n if values[\"temperature\"] is not None and not 0 <= values[\"temperature\"] <= 1:\n raise ValueError(\"temperature must be in the range [0.0, 1.0]\")\n if values[\"top_p\"] is not None and not 0 <= values[\"top_p\"] <= 1:\n raise ValueError(\"top_p must be in the range [0.0, 1.0]\")\n if values[\"top_k\"] is not None and values[\"top_k\"] <= 0:\n raise ValueError(\"top_k must be positive\")\n if values[\"max_output_tokens\"] is not None and values[\"max_output_tokens\"] <= 0:\n raise ValueError(\"max_output_tokens must be greater than zero\")\n return values\n def _generate(", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/google_palm.html"}
+{"id": "8453e6974cb9-3", "text": "return values\n def _generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> LLMResult:\n generations = []\n for prompt in prompts:\n completion = generate_with_retry(\n self,\n model=self.model_name,\n prompt=prompt,\n stop_sequences=stop,\n temperature=self.temperature,\n top_p=self.top_p,\n top_k=self.top_k,\n max_output_tokens=self.max_output_tokens,\n candidate_count=self.n,\n )\n prompt_generations = []\n for candidate in completion.candidates:\n raw_text = candidate[\"output\"]\n stripped_text = _strip_erroneous_leading_spaces(raw_text)\n prompt_generations.append(Generation(text=stripped_text))\n generations.append(prompt_generations)\n return LLMResult(generations=generations)\n async def _agenerate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n ) -> LLMResult:\n raise NotImplementedError()\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"google_palm\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/google_palm.html"}
+{"id": "00f41dc33a86-0", "text": "Source code for langchain.llms.human\nfrom typing import Any, Callable, List, Mapping, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\ndef _display_prompt(prompt: str) -> None:\n \"\"\"Displays the given prompt to the user.\"\"\"\n print(f\"\\n{prompt}\")\ndef _collect_user_input(\n separator: Optional[str] = None, stop: Optional[List[str]] = None\n) -> str:\n \"\"\"Collects and returns user input as a single string.\"\"\"\n separator = separator or \"\\n\"\n lines = []\n while True:\n line = input()\n if not line:\n break\n lines.append(line)\n if stop and any(seq in line for seq in stop):\n break\n # Combine all lines into a single string\n multi_line_input = separator.join(lines)\n return multi_line_input\n[docs]class HumanInputLLM(LLM):\n \"\"\"\n An LLM wrapper which returns user input as the response.\n \"\"\"\n input_func: Callable = Field(default_factory=lambda: _collect_user_input)\n prompt_func: Callable[[str], None] = Field(default_factory=lambda: _display_prompt)\n separator: str = \"\\n\"\n input_kwargs: Mapping[str, Any] = {}\n prompt_kwargs: Mapping[str, Any] = {}\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"\n Returns an empty dictionary as there are no identifying parameters.\n \"\"\"\n return {}\n @property\n def _llm_type(self) -> str:\n \"\"\"Returns the type of LLM.\"\"\"\n return \"human-input\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/human.html"}
+{"id": "00f41dc33a86-1", "text": "\"\"\"Returns the type of LLM.\"\"\"\n return \"human-input\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"\n Displays the prompt to the user and returns their input as a response.\n Args:\n prompt (str): The prompt to be displayed to the user.\n stop (Optional[List[str]]): A list of stop strings.\n run_manager (Optional[CallbackManagerForLLMRun]): Currently not used.\n Returns:\n str: The user's input as a response.\n \"\"\"\n self.prompt_func(prompt, **self.prompt_kwargs)\n user_input = self.input_func(\n separator=self.separator, stop=stop, **self.input_kwargs\n )\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the human themselves\n user_input = enforce_stop_tokens(user_input, stop)\n return user_input", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/human.html"}
+{"id": "2bd4ae40bfd3-0", "text": "Source code for langchain.llms.anyscale\n\"\"\"Wrapper around Anyscale.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\n[docs]class Anyscale(LLM):\n \"\"\"Wrapper around Anyscale Services.\n To use, you should have the environment variables ``ANYSCALE_SERVICE_URL``,\n ``ANYSCALE_SERVICE_ROUTE`` and ``ANYSCALE_SERVICE_TOKEN`` set with your Anyscale\n Service, or pass them as named parameters to the constructor.\n Example:\n .. code-block:: python\n from langchain.llms import Anyscale\n anyscale = Anyscale(anyscale_service_url=\"SERVICE_URL\",\n anyscale_service_route=\"SERVICE_ROUTE\",\n anyscale_service_token=\"SERVICE_TOKEN\")\n # Use Ray for distributed processing\n import ray\n prompt_list=[]\n @ray.remote\n def send_query(llm, prompt):\n resp = llm(prompt)\n return resp\n futures = [send_query.remote(anyscale, prompt) for prompt in prompt_list]\n results = ray.get(futures)\n \"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Keyword arguments to pass to the model. Reserved for future use.\"\"\"\n anyscale_service_url: Optional[str] = None\n anyscale_service_route: Optional[str] = None\n anyscale_service_token: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/anyscale.html"}
+{"id": "2bd4ae40bfd3-1", "text": "@root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exist in environment.\"\"\"\n anyscale_service_url = get_from_dict_or_env(\n values, \"anyscale_service_url\", \"ANYSCALE_SERVICE_URL\"\n )\n anyscale_service_route = get_from_dict_or_env(\n values, \"anyscale_service_route\", \"ANYSCALE_SERVICE_ROUTE\"\n )\n anyscale_service_token = get_from_dict_or_env(\n values, \"anyscale_service_token\", \"ANYSCALE_SERVICE_TOKEN\"\n )\n try:\n anyscale_service_endpoint = f\"{anyscale_service_url}/-/route\"\n headers = {\"Authorization\": f\"Bearer {anyscale_service_token}\"}\n requests.get(anyscale_service_endpoint, headers=headers)\n except requests.exceptions.RequestException as e:\n raise ValueError(e)\n values[\"anyscale_service_url\"] = anyscale_service_url\n values[\"anyscale_service_route\"] = anyscale_service_route\n values[\"anyscale_service_token\"] = anyscale_service_token\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"anyscale_service_url\": self.anyscale_service_url,\n \"anyscale_service_route\": self.anyscale_service_route,\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"anyscale\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call out to Anyscale Service endpoint.", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/anyscale.html"}
+{"id": "2bd4ae40bfd3-2", "text": ") -> str:\n \"\"\"Call out to Anyscale Service endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = anyscale(\"Tell me a joke.\")\n \"\"\"\n anyscale_service_endpoint = (\n f\"{self.anyscale_service_url}/{self.anyscale_service_route}\"\n )\n headers = {\"Authorization\": f\"Bearer {self.anyscale_service_token}\"}\n body = {\"prompt\": prompt}\n resp = requests.post(anyscale_service_endpoint, headers=headers, json=body)\n if resp.status_code != 200:\n raise ValueError(\n f\"Error returned by service, status code {resp.status_code}\"\n )\n text = resp.text\n if stop is not None:\n # This is a bit hacky, but I can't figure out a better way to enforce\n # stop tokens when making calls to the Anyscale service.\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/anyscale.html"}
+{"id": "b61218194d0e-0", "text": "Source code for langchain.llms.nlpcloud\n\"\"\"Wrapper around NLPCloud APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.utils import get_from_dict_or_env\n[docs]class NLPCloud(LLM):\n \"\"\"Wrapper around NLPCloud large language models.\n To use, you should have the ``nlpcloud`` python package installed, and the\n environment variable ``NLPCLOUD_API_KEY`` set with your API key.\n Example:\n .. code-block:: python\n from langchain.llms import NLPCloud\n nlpcloud = NLPCloud(model=\"gpt-neox-20b\")\n \"\"\"\n client: Any #: :meta private:\n model_name: str = \"finetuned-gpt-neox-20b\"\n \"\"\"Model name to use.\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use.\"\"\"\n min_length: int = 1\n \"\"\"The minimum number of tokens to generate in the completion.\"\"\"\n max_length: int = 256\n \"\"\"The maximum number of tokens to generate in the completion.\"\"\"\n length_no_input: bool = True\n \"\"\"Whether min_length and max_length should include the length of the input.\"\"\"\n remove_input: bool = True\n \"\"\"Remove input text from API response\"\"\"\n remove_end_sequence: bool = True\n \"\"\"Whether or not to remove the end sequence token.\"\"\"\n bad_words: List[str] = []\n \"\"\"List of tokens not allowed to be generated.\"\"\"\n top_p: int = 1\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/nlpcloud.html"}
+{"id": "b61218194d0e-1", "text": "\"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n top_k: int = 50\n \"\"\"The number of highest probability tokens to keep for top-k filtering.\"\"\"\n repetition_penalty: float = 1.0\n \"\"\"Penalizes repeated tokens. 1.0 means no penalty.\"\"\"\n length_penalty: float = 1.0\n \"\"\"Exponential penalty to the length.\"\"\"\n do_sample: bool = True\n \"\"\"Whether to use sampling (True) or greedy decoding.\"\"\"\n num_beams: int = 1\n \"\"\"Number of beams for beam search.\"\"\"\n early_stopping: bool = False\n \"\"\"Whether to stop beam search at num_beams sentences.\"\"\"\n num_return_sequences: int = 1\n \"\"\"How many completions to generate for each prompt.\"\"\"\n nlpcloud_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exist in environment.\"\"\"\n nlpcloud_api_key = get_from_dict_or_env(\n values, \"nlpcloud_api_key\", \"NLPCLOUD_API_KEY\"\n )\n try:\n import nlpcloud\n values[\"client\"] = nlpcloud.Client(\n values[\"model_name\"], nlpcloud_api_key, gpu=True, lang=\"en\"\n )\n except ImportError:\n raise ImportError(\n \"Could not import nlpcloud python package. \"\n \"Please install it with `pip install nlpcloud`.\"\n )\n return values\n @property\n def _default_params(self) -> Mapping[str, Any]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/nlpcloud.html"}
+{"id": "b61218194d0e-2", "text": "@property\n def _default_params(self) -> Mapping[str, Any]:\n \"\"\"Get the default parameters for calling NLPCloud API.\"\"\"\n return {\n \"temperature\": self.temperature,\n \"min_length\": self.min_length,\n \"max_length\": self.max_length,\n \"length_no_input\": self.length_no_input,\n \"remove_input\": self.remove_input,\n \"remove_end_sequence\": self.remove_end_sequence,\n \"bad_words\": self.bad_words,\n \"top_p\": self.top_p,\n \"top_k\": self.top_k,\n \"repetition_penalty\": self.repetition_penalty,\n \"length_penalty\": self.length_penalty,\n \"do_sample\": self.do_sample,\n \"num_beams\": self.num_beams,\n \"early_stopping\": self.early_stopping,\n \"num_return_sequences\": self.num_return_sequences,\n }\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_name\": self.model_name}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"nlpcloud\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call out to NLPCloud's create endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Not supported by this interface (pass in init method)\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/nlpcloud.html"}
+{"id": "b61218194d0e-3", "text": "The string generated by the model.\n Example:\n .. code-block:: python\n response = nlpcloud(\"Tell me a joke.\")\n \"\"\"\n if stop and len(stop) > 1:\n raise ValueError(\n \"NLPCloud only supports a single stop sequence per generation. \"\n \"Pass in a list of length 1.\"\n )\n elif stop and len(stop) == 1:\n end_sequence = stop[0]\n else:\n end_sequence = None\n response = self.client.generation(\n prompt, end_sequence=end_sequence, **self._default_params\n )\n return response[\"generated_text\"]", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/nlpcloud.html"}
+{"id": "0c3a20467387-0", "text": "Source code for langchain.llms.ctransformers\n\"\"\"Wrapper around the C Transformers library.\"\"\"\nfrom typing import Any, Dict, Optional, Sequence\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\n[docs]class CTransformers(LLM):\n \"\"\"Wrapper around the C Transformers LLM interface.\n To use, you should have the ``ctransformers`` python package installed.\n See https://github.com/marella/ctransformers\n Example:\n .. code-block:: python\n from langchain.llms import CTransformers\n llm = CTransformers(model=\"/path/to/ggml-gpt-2.bin\", model_type=\"gpt2\")\n \"\"\"\n client: Any #: :meta private:\n model: str\n \"\"\"The path to a model file or directory or the name of a Hugging Face Hub\n model repo.\"\"\"\n model_type: Optional[str] = None\n \"\"\"The model type.\"\"\"\n model_file: Optional[str] = None\n \"\"\"The name of the model file in repo or directory.\"\"\"\n config: Optional[Dict[str, Any]] = None\n \"\"\"The config parameters.\n See https://github.com/marella/ctransformers#config\"\"\"\n lib: Optional[str] = None\n \"\"\"The path to a shared library or one of `avx2`, `avx`, `basic`.\"\"\"\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model\": self.model,\n \"model_type\": self.model_type,\n \"model_file\": self.model_file,\n \"config\": self.config,\n }\n @property", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/ctransformers.html"}
+{"id": "0c3a20467387-1", "text": "\"config\": self.config,\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"ctransformers\"\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that ``ctransformers`` package is installed.\"\"\"\n try:\n from ctransformers import AutoModelForCausalLM\n except ImportError:\n raise ImportError(\n \"Could not import `ctransformers` package. \"\n \"Please install it with `pip install ctransformers`\"\n )\n config = values[\"config\"] or {}\n values[\"client\"] = AutoModelForCausalLM.from_pretrained(\n values[\"model\"],\n model_type=values[\"model_type\"],\n model_file=values[\"model_file\"],\n lib=values[\"lib\"],\n **config,\n )\n return values\n def _call(\n self,\n prompt: str,\n stop: Optional[Sequence[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Generate text from a prompt.\n Args:\n prompt: The prompt to generate text from.\n stop: A list of sequences to stop generation when encountered.\n Returns:\n The generated text.\n Example:\n .. code-block:: python\n response = llm(\"Tell me a joke.\")\n \"\"\"\n text = []\n _run_manager = run_manager or CallbackManagerForLLMRun.get_noop_manager()\n for chunk in self.client(prompt, stop=stop, stream=True):\n text.append(chunk)", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/ctransformers.html"}
+{"id": "0c3a20467387-2", "text": "text.append(chunk)\n _run_manager.on_llm_new_token(chunk, verbose=self.verbose)\n return \"\".join(text)", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/ctransformers.html"}
+{"id": "6cbaabe87e86-0", "text": "Source code for langchain.llms.huggingface_endpoint\n\"\"\"Wrapper around HuggingFace APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nVALID_TASKS = (\"text2text-generation\", \"text-generation\", \"summarization\")\n[docs]class HuggingFaceEndpoint(LLM):\n \"\"\"Wrapper around HuggingFaceHub Inference Endpoints.\n To use, you should have the ``huggingface_hub`` python package installed, and the\n environment variable ``HUGGINGFACEHUB_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Only supports `text-generation`, `text2text-generation` and `summarization` for now.\n Example:\n .. code-block:: python\n from langchain.llms import HuggingFaceEndpoint\n endpoint_url = (\n \"https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud\"\n )\n hf = HuggingFaceEndpoint(\n endpoint_url=endpoint_url,\n huggingfacehub_api_token=\"my-api-key\"\n )\n \"\"\"\n endpoint_url: str = \"\"\n \"\"\"Endpoint URL to use.\"\"\"\n task: Optional[str] = None\n \"\"\"Task to call the model with.\n Should be a task that returns `generated_text` or `summary_text`.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Keyword arguments to pass to the model.\"\"\"\n huggingfacehub_api_token: Optional[str] = None\n class Config:", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_endpoint.html"}
+{"id": "6cbaabe87e86-1", "text": "huggingfacehub_api_token: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exist in environment.\"\"\"\n huggingfacehub_api_token = get_from_dict_or_env(\n values, \"huggingfacehub_api_token\", \"HUGGINGFACEHUB_API_TOKEN\"\n )\n try:\n from huggingface_hub.hf_api import HfApi\n try:\n HfApi(\n endpoint=\"https://huggingface.co\", # Can be a Private Hub endpoint.\n token=huggingfacehub_api_token,\n ).whoami()\n except Exception as e:\n raise ValueError(\n \"Could not authenticate with huggingface_hub. \"\n \"Please check your API token.\"\n ) from e\n except ImportError:\n raise ValueError(\n \"Could not import huggingface_hub python package. \"\n \"Please install it with `pip install huggingface_hub`.\"\n )\n values[\"huggingfacehub_api_token\"] = huggingfacehub_api_token\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"endpoint_url\": self.endpoint_url, \"task\": self.task},\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"huggingface_endpoint\"\n def _call(\n self,", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_endpoint.html"}
+{"id": "6cbaabe87e86-2", "text": "return \"huggingface_endpoint\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call out to HuggingFace Hub's inference endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = hf(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n # payload samples\n parameter_payload = {\"inputs\": prompt, \"parameters\": _model_kwargs}\n # HTTP headers for authorization\n headers = {\n \"Authorization\": f\"Bearer {self.huggingfacehub_api_token}\",\n \"Content-Type\": \"application/json\",\n }\n # send request\n try:\n response = requests.post(\n self.endpoint_url, headers=headers, json=parameter_payload\n )\n except requests.exceptions.RequestException as e: # This is the correct syntax\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n generated_text = response.json()\n if \"error\" in generated_text:\n raise ValueError(\n f\"Error raised by inference API: {generated_text['error']}\"\n )\n if self.task == \"text-generation\":\n # Text generation return includes the starter text.\n text = generated_text[0][\"generated_text\"][len(prompt) :]\n elif self.task == \"text2text-generation\":\n text = generated_text[0][\"generated_text\"]\n elif self.task == \"summarization\":", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_endpoint.html"}
+{"id": "6cbaabe87e86-3", "text": "elif self.task == \"summarization\":\n text = generated_text[0][\"summary_text\"]\n else:\n raise ValueError(\n f\"Got invalid task {self.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n if stop is not None:\n # This is a bit hacky, but I can't figure out a better way to enforce\n # stop tokens when making calls to huggingface_hub.\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_endpoint.html"}
+{"id": "ec486957182e-0", "text": "Source code for langchain.llms.anthropic\n\"\"\"Wrapper around Anthropic APIs.\"\"\"\nimport re\nimport warnings\nfrom typing import Any, Callable, Dict, Generator, List, Mapping, Optional, Tuple, Union\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms.base import LLM\nfrom langchain.utils import get_from_dict_or_env\nclass _AnthropicCommon(BaseModel):\n client: Any = None #: :meta private:\n model: str = \"claude-v1\"\n \"\"\"Model name to use.\"\"\"\n max_tokens_to_sample: int = 256\n \"\"\"Denotes the number of tokens to predict per generation.\"\"\"\n temperature: Optional[float] = None\n \"\"\"A non-negative float that tunes the degree of randomness in generation.\"\"\"\n top_k: Optional[int] = None\n \"\"\"Number of most likely tokens to consider at each step.\"\"\"\n top_p: Optional[float] = None\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n streaming: bool = False\n \"\"\"Whether to stream the results.\"\"\"\n default_request_timeout: Optional[Union[float, Tuple[float, float]]] = None\n \"\"\"Timeout for requests to Anthropic Completion API. Default is 600 seconds.\"\"\"\n anthropic_api_key: Optional[str] = None\n HUMAN_PROMPT: Optional[str] = None\n AI_PROMPT: Optional[str] = None\n count_tokens: Optional[Callable[[str], int]] = None\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exist in environment.\"\"\"\n anthropic_api_key = get_from_dict_or_env(", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"}
+{"id": "ec486957182e-1", "text": "anthropic_api_key = get_from_dict_or_env(\n values, \"anthropic_api_key\", \"ANTHROPIC_API_KEY\"\n )\n try:\n import anthropic\n values[\"client\"] = anthropic.Client(\n api_key=anthropic_api_key,\n default_request_timeout=values[\"default_request_timeout\"],\n )\n values[\"HUMAN_PROMPT\"] = anthropic.HUMAN_PROMPT\n values[\"AI_PROMPT\"] = anthropic.AI_PROMPT\n values[\"count_tokens\"] = anthropic.count_tokens\n except ImportError:\n raise ImportError(\n \"Could not import anthropic python package. \"\n \"Please install it with `pip install anthropic`.\"\n )\n return values\n @property\n def _default_params(self) -> Mapping[str, Any]:\n \"\"\"Get the default parameters for calling Anthropic API.\"\"\"\n d = {\n \"max_tokens_to_sample\": self.max_tokens_to_sample,\n \"model\": self.model,\n }\n if self.temperature is not None:\n d[\"temperature\"] = self.temperature\n if self.top_k is not None:\n d[\"top_k\"] = self.top_k\n if self.top_p is not None:\n d[\"top_p\"] = self.top_p\n return d\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{}, **self._default_params}\n def _get_anthropic_stop(self, stop: Optional[List[str]] = None) -> List[str]:\n if not self.HUMAN_PROMPT or not self.AI_PROMPT:\n raise NameError(\"Please ensure the anthropic package is loaded\")\n if stop is None:\n stop = []", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"}
+{"id": "ec486957182e-2", "text": "if stop is None:\n stop = []\n # Never want model to invent new turns of Human / Assistant dialog.\n stop.extend([self.HUMAN_PROMPT])\n return stop\n[docs]class Anthropic(LLM, _AnthropicCommon):\n r\"\"\"Wrapper around Anthropic's large language models.\n To use, you should have the ``anthropic`` python package installed, and the\n environment variable ``ANTHROPIC_API_KEY`` set with your API key, or pass\n it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n import anthropic\n from langchain.llms import Anthropic\n model = Anthropic(model=\"\", anthropic_api_key=\"my-api-key\")\n # Simplest invocation, automatically wrapped with HUMAN_PROMPT\n # and AI_PROMPT.\n response = model(\"What are the biggest risks facing humanity?\")\n # Or if you want to use the chat mode, build a few-shot-prompt, or\n # put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT:\n raw_prompt = \"What are the biggest risks facing humanity?\"\n prompt = f\"{anthropic.HUMAN_PROMPT} {raw_prompt}{anthropic.AI_PROMPT}\"\n response = model(prompt)\n \"\"\"\n @root_validator()\n def raise_warning(cls, values: Dict) -> Dict:\n \"\"\"Raise warning that this class is deprecated.\"\"\"\n warnings.warn(\n \"This Anthropic LLM is deprecated. \"\n \"Please use `from langchain.chat_models import ChatAnthropic` instead\"\n )\n return values\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @property", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"}
+{"id": "ec486957182e-3", "text": "extra = Extra.forbid\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"anthropic-llm\"\n def _wrap_prompt(self, prompt: str) -> str:\n if not self.HUMAN_PROMPT or not self.AI_PROMPT:\n raise NameError(\"Please ensure the anthropic package is loaded\")\n if prompt.startswith(self.HUMAN_PROMPT):\n return prompt # Already wrapped.\n # Guard against common errors in specifying wrong number of newlines.\n corrected_prompt, n_subs = re.subn(r\"^\\n*Human:\", self.HUMAN_PROMPT, prompt)\n if n_subs == 1:\n return corrected_prompt\n # As a last resort, wrap the prompt ourselves to emulate instruct-style.\n return f\"{self.HUMAN_PROMPT} {prompt}{self.AI_PROMPT} Sure, here you go:\\n\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n r\"\"\"Call out to Anthropic's completion endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n prompt = \"What are the biggest risks facing humanity?\"\n prompt = f\"\\n\\nHuman: {prompt}\\n\\nAssistant:\"\n response = model(prompt)\n \"\"\"\n stop = self._get_anthropic_stop(stop)\n if self.streaming:\n stream_resp = self.client.completion_stream(", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"}
+{"id": "ec486957182e-4", "text": "if self.streaming:\n stream_resp = self.client.completion_stream(\n prompt=self._wrap_prompt(prompt),\n stop_sequences=stop,\n **self._default_params,\n )\n current_completion = \"\"\n for data in stream_resp:\n delta = data[\"completion\"][len(current_completion) :]\n current_completion = data[\"completion\"]\n if run_manager:\n run_manager.on_llm_new_token(delta, **data)\n return current_completion\n response = self.client.completion(\n prompt=self._wrap_prompt(prompt),\n stop_sequences=stop,\n **self._default_params,\n )\n return response[\"completion\"]\n async def _acall(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n ) -> str:\n \"\"\"Call out to Anthropic's completion endpoint asynchronously.\"\"\"\n stop = self._get_anthropic_stop(stop)\n if self.streaming:\n stream_resp = await self.client.acompletion_stream(\n prompt=self._wrap_prompt(prompt),\n stop_sequences=stop,\n **self._default_params,\n )\n current_completion = \"\"\n async for data in stream_resp:\n delta = data[\"completion\"][len(current_completion) :]\n current_completion = data[\"completion\"]\n if run_manager:\n await run_manager.on_llm_new_token(delta, **data)\n return current_completion\n response = await self.client.acompletion(\n prompt=self._wrap_prompt(prompt),\n stop_sequences=stop,\n **self._default_params,\n )\n return response[\"completion\"]", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"}
+{"id": "ec486957182e-5", "text": "**self._default_params,\n )\n return response[\"completion\"]\n[docs] def stream(self, prompt: str, stop: Optional[List[str]] = None) -> Generator:\n r\"\"\"Call Anthropic completion_stream and return the resulting generator.\n BETA: this is a beta feature while we figure out the right abstraction.\n Once that happens, this interface could change.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n A generator representing the stream of tokens from Anthropic.\n Example:\n .. code-block:: python\n prompt = \"Write a poem about a stream.\"\n prompt = f\"\\n\\nHuman: {prompt}\\n\\nAssistant:\"\n generator = anthropic.stream(prompt)\n for token in generator:\n yield token\n \"\"\"\n stop = self._get_anthropic_stop(stop)\n return self.client.completion_stream(\n prompt=self._wrap_prompt(prompt),\n stop_sequences=stop,\n **self._default_params,\n )\n[docs] def get_num_tokens(self, text: str) -> int:\n \"\"\"Calculate number of tokens.\"\"\"\n if not self.count_tokens:\n raise NameError(\"Please ensure the anthropic package is loaded\")\n return self.count_tokens(text)", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"}
+{"id": "3f9b725500cf-0", "text": "Source code for langchain.llms.rwkv\n\"\"\"Wrapper for the RWKV model.\nBased on https://github.com/saharNooby/rwkv.cpp/blob/master/rwkv/chat_with_bot.py\n https://github.com/BlinkDL/ChatRWKV/blob/main/v2/chat.py\n\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional, Set\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\n[docs]class RWKV(LLM, BaseModel):\n r\"\"\"Wrapper around RWKV language models.\n To use, you should have the ``rwkv`` python package installed, the\n pre-trained model file, and the model's config information.\n Example:\n .. code-block:: python\n from langchain.llms import RWKV\n model = RWKV(model=\"./models/rwkv-3b-fp16.bin\", strategy=\"cpu fp32\")\n # Simplest invocation\n response = model(\"Once upon a time, \")\n \"\"\"\n model: str\n \"\"\"Path to the pre-trained RWKV model file.\"\"\"\n tokens_path: str\n \"\"\"Path to the RWKV tokens file.\"\"\"\n strategy: str = \"cpu fp32\"\n \"\"\"The strategy to use for the model, e.g. cpu fp32.\"\"\"\n rwkv_verbose: bool = True\n \"\"\"Print debug information.\"\"\"\n temperature: float = 1.0\n \"\"\"The temperature to use for sampling.\"\"\"\n top_p: float = 0.5\n \"\"\"The top-p value to use for sampling.\"\"\"\n penalty_alpha_frequency: float = 0.4\n \"\"\"Positive values penalize new tokens based on their existing frequency", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/rwkv.html"}
+{"id": "3f9b725500cf-1", "text": "\"\"\"Positive values penalize new tokens based on their existing frequency\n in the text so far, decreasing the model's likelihood to repeat the same\n line verbatim.\"\"\"\n penalty_alpha_presence: float = 0.4\n \"\"\"Positive values penalize new tokens based on whether they appear\n in the text so far, increasing the model's likelihood to talk about\n new topics.\"\"\"\n CHUNK_LEN: int = 256\n \"\"\"Batch size for prompt processing.\"\"\"\n max_tokens_per_generation: int = 256\n \"\"\"Maximum number of tokens to generate.\"\"\"\n client: Any = None #: :meta private:\n tokenizer: Any = None #: :meta private:\n pipeline: Any = None #: :meta private:\n model_tokens: Any = None #: :meta private:\n model_state: Any = None #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"verbose\": self.verbose,\n \"top_p\": self.top_p,\n \"temperature\": self.temperature,\n \"penalty_alpha_frequency\": self.penalty_alpha_frequency,\n \"penalty_alpha_presence\": self.penalty_alpha_presence,\n \"CHUNK_LEN\": self.CHUNK_LEN,\n \"max_tokens_per_generation\": self.max_tokens_per_generation,\n }\n @staticmethod\n def _rwkv_param_names() -> Set[str]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"verbose\",\n }\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in the environment.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/rwkv.html"}
+{"id": "3f9b725500cf-2", "text": "\"\"\"Validate that the python package exists in the environment.\"\"\"\n try:\n import tokenizers\n except ImportError:\n raise ImportError(\n \"Could not import tokenizers python package. \"\n \"Please install it with `pip install tokenizers`.\"\n )\n try:\n from rwkv.model import RWKV as RWKVMODEL\n from rwkv.utils import PIPELINE\n values[\"tokenizer\"] = tokenizers.Tokenizer.from_file(values[\"tokens_path\"])\n rwkv_keys = cls._rwkv_param_names()\n model_kwargs = {k: v for k, v in values.items() if k in rwkv_keys}\n model_kwargs[\"verbose\"] = values[\"rwkv_verbose\"]\n values[\"client\"] = RWKVMODEL(\n values[\"model\"], strategy=values[\"strategy\"], **model_kwargs\n )\n values[\"pipeline\"] = PIPELINE(values[\"client\"], values[\"tokens_path\"])\n except ImportError:\n raise ValueError(\n \"Could not import rwkv python package. \"\n \"Please install it with `pip install rwkv`.\"\n )\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model\": self.model,\n **self._default_params,\n **{k: v for k, v in self.__dict__.items() if k in RWKV._rwkv_param_names()},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return the type of llm.\"\"\"\n return \"rwkv-4\"\n def run_rnn(self, _tokens: List[str], newline_adj: int = 0) -> Any:\n AVOID_REPEAT_TOKENS = []", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/rwkv.html"}
+{"id": "3f9b725500cf-3", "text": "AVOID_REPEAT_TOKENS = []\n AVOID_REPEAT = \"\uff0c\uff1a\uff1f\uff01\"\n for i in AVOID_REPEAT:\n dd = self.pipeline.encode(i)\n assert len(dd) == 1\n AVOID_REPEAT_TOKENS += dd\n tokens = [int(x) for x in _tokens]\n self.model_tokens += tokens\n out: Any = None\n while len(tokens) > 0:\n out, self.model_state = self.client.forward(\n tokens[: self.CHUNK_LEN], self.model_state\n )\n tokens = tokens[self.CHUNK_LEN :]\n END_OF_LINE = 187\n out[END_OF_LINE] += newline_adj # adjust \\n probability\n if self.model_tokens[-1] in AVOID_REPEAT_TOKENS:\n out[self.model_tokens[-1]] = -999999999\n return out\n def rwkv_generate(self, prompt: str) -> str:\n self.model_state = None\n self.model_tokens = []\n logits = self.run_rnn(self.tokenizer.encode(prompt).ids)\n begin = len(self.model_tokens)\n out_last = begin\n occurrence: Dict = {}\n decoded = \"\"\n for i in range(self.max_tokens_per_generation):\n for n in occurrence:\n logits[n] -= (\n self.penalty_alpha_presence\n + occurrence[n] * self.penalty_alpha_frequency\n )\n token = self.pipeline.sample_logits(\n logits, temperature=self.temperature, top_p=self.top_p\n )\n END_OF_TEXT = 0\n if token == END_OF_TEXT:\n break\n if token not in occurrence:\n occurrence[token] = 1\n else:\n occurrence[token] += 1\n logits = self.run_rnn([token])", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/rwkv.html"}
+{"id": "3f9b725500cf-4", "text": "occurrence[token] += 1\n logits = self.run_rnn([token])\n xxx = self.tokenizer.decode(self.model_tokens[out_last:])\n if \"\\ufffd\" not in xxx: # avoid utf-8 display issues\n decoded += xxx\n out_last = begin + i + 1\n if i >= self.max_tokens_per_generation - 100:\n break\n return decoded\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> str:\n r\"\"\"RWKV generation\n Args:\n prompt: The prompt to pass into the model.\n stop: A list of strings to stop generation when encountered.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n prompt = \"Once upon a time, \"\n response = model(prompt, n_predict=55)\n \"\"\"\n text = self.rwkv_generate(prompt)\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://python.langchain.com/en/latest/_modules/langchain/llms/rwkv.html"}
+{"id": "9a5feace4e4c-0", "text": "Source code for langchain.chat_models.azure_openai\n\"\"\"Azure OpenAI chat wrapper.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Dict, Mapping\nfrom pydantic import root_validator\nfrom langchain.chat_models.openai import ChatOpenAI\nfrom langchain.schema import ChatResult\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class AzureChatOpenAI(ChatOpenAI):\n \"\"\"Wrapper around Azure OpenAI Chat Completion API. To use this class you\n must have a deployed model on Azure OpenAI. Use `deployment_name` in the\n constructor to refer to the \"Model deployment name\" in the Azure portal.\n In addition, you should have the ``openai`` python package installed, and the\n following environment variables set or passed in constructor in lower case:\n - ``OPENAI_API_TYPE`` (default: ``azure``)\n - ``OPENAI_API_KEY``\n - ``OPENAI_API_BASE``\n - ``OPENAI_API_VERSION``\n - ``OPENAI_PROXY``\n For example, if you have `gpt-35-turbo` deployed, with the deployment name\n `35-turbo-dev`, the constructor should look like:\n .. code-block:: python\n AzureChatOpenAI(\n deployment_name=\"35-turbo-dev\",\n openai_api_version=\"2023-03-15-preview\",\n )\n Be aware the API version may change.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n \"\"\"\n deployment_name: str = \"\"\n openai_api_type: str = \"azure\"\n openai_api_base: str = \"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/azure_openai.html"}
+{"id": "9a5feace4e4c-1", "text": "openai_api_base: str = \"\"\n openai_api_version: str = \"\"\n openai_api_key: str = \"\"\n openai_organization: str = \"\"\n openai_proxy: str = \"\"\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exist in environment.\"\"\"\n values[\"openai_api_key\"] = get_from_dict_or_env(\n values,\n \"openai_api_key\",\n \"OPENAI_API_KEY\",\n )\n values[\"openai_api_base\"] = get_from_dict_or_env(\n values,\n \"openai_api_base\",\n \"OPENAI_API_BASE\",\n )\n values[\"openai_api_version\"] = get_from_dict_or_env(\n values,\n \"openai_api_version\",\n \"OPENAI_API_VERSION\",\n )\n values[\"openai_api_type\"] = get_from_dict_or_env(\n values,\n \"openai_api_type\",\n \"OPENAI_API_TYPE\",\n )\n values[\"openai_organization\"] = get_from_dict_or_env(\n values,\n \"openai_organization\",\n \"OPENAI_ORGANIZATION\",\n default=\"\",\n )\n values[\"openai_proxy\"] = get_from_dict_or_env(\n values,\n \"openai_proxy\",\n \"OPENAI_PROXY\",\n default=\"\",\n )\n try:\n import openai\n except ImportError:\n raise ImportError(\n \"Could not import openai python package. \"\n \"Please install it with `pip install openai`.\"\n )\n try:\n values[\"client\"] = openai.ChatCompletion\n except AttributeError:\n raise ValueError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/azure_openai.html"}
+{"id": "9a5feace4e4c-2", "text": "except AttributeError:\n raise ValueError(\n \"`openai` has no `ChatCompletion` attribute, this is likely \"\n \"due to an old version of the openai package. Try upgrading it \"\n \"with `pip install --upgrade openai`.\"\n )\n if values[\"n\"] < 1:\n raise ValueError(\"n must be at least 1.\")\n if values[\"n\"] > 1 and values[\"streaming\"]:\n raise ValueError(\"n must be 1 when streaming.\")\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling OpenAI API.\"\"\"\n return {\n **super()._default_params,\n \"engine\": self.deployment_name,\n }\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**self._default_params}\n @property\n def _invocation_params(self) -> Mapping[str, Any]:\n openai_creds = {\n \"api_type\": self.openai_api_type,\n \"api_version\": self.openai_api_version,\n }\n return {**openai_creds, **super()._invocation_params}\n @property\n def _llm_type(self) -> str:\n return \"azure-openai-chat\"\n def _create_chat_result(self, response: Mapping[str, Any]) -> ChatResult:\n for res in response[\"choices\"]:\n if res.get(\"finish_reason\", None) == \"content_filter\":\n raise ValueError(\n \"Azure has not provided the response due to a content\"\n \" filter being triggered\"\n )\n return super()._create_chat_result(response)\nBy Harrison Chase", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/azure_openai.html"}
+{"id": "9a5feace4e4c-3", "text": ")\n return super()._create_chat_result(response)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/azure_openai.html"}
+{"id": "ee32f1594022-0", "text": "Source code for langchain.chat_models.promptlayer_openai\n\"\"\"PromptLayer wrapper.\"\"\"\nimport datetime\nfrom typing import Any, List, Mapping, Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.schema import BaseMessage, ChatResult\n[docs]class PromptLayerChatOpenAI(ChatOpenAI):\n \"\"\"Wrapper around OpenAI Chat large language models and PromptLayer.\n To use, you should have the ``openai`` and ``promptlayer`` python\n package installed, and the environment variable ``OPENAI_API_KEY``\n and ``PROMPTLAYER_API_KEY`` set with your openAI API key and\n promptlayer key respectively.\n All parameters that can be passed to the OpenAI LLM can also\n be passed here. The PromptLayerChatOpenAI adds to optional\n parameters:\n ``pl_tags``: List of strings to tag the request with.\n ``return_pl_id``: If True, the PromptLayer request ID will be\n returned in the ``generation_info`` field of the\n ``Generation`` object.\n Example:\n .. code-block:: python\n from langchain.chat_models import PromptLayerChatOpenAI\n openai = PromptLayerChatOpenAI(model_name=\"gpt-3.5-turbo\")\n \"\"\"\n pl_tags: Optional[List[str]]\n return_pl_id: Optional[bool] = False\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> ChatResult:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/promptlayer_openai.html"}
+{"id": "ee32f1594022-1", "text": ") -> ChatResult:\n \"\"\"Call ChatOpenAI generate and then call PromptLayer API to log the request.\"\"\"\n from promptlayer.utils import get_api_key, promptlayer_api_request\n request_start_time = datetime.datetime.now().timestamp()\n generated_responses = super()._generate(messages, stop, run_manager)\n request_end_time = datetime.datetime.now().timestamp()\n message_dicts, params = super()._create_message_dicts(messages, stop)\n for i, generation in enumerate(generated_responses.generations):\n response_dict, params = super()._create_message_dicts(\n [generation.message], stop\n )\n pl_request_id = promptlayer_api_request(\n \"langchain.PromptLayerChatOpenAI\",\n \"langchain\",\n message_dicts,\n params,\n self.pl_tags,\n response_dict,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None or not isinstance(\n generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses\n async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n ) -> ChatResult:\n \"\"\"Call ChatOpenAI agenerate and then call PromptLayer to log.\"\"\"\n from promptlayer.utils import get_api_key, promptlayer_api_request_async\n request_start_time = datetime.datetime.now().timestamp()\n generated_responses = await super()._agenerate(messages, stop, run_manager)", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/promptlayer_openai.html"}
+{"id": "ee32f1594022-2", "text": "generated_responses = await super()._agenerate(messages, stop, run_manager)\n request_end_time = datetime.datetime.now().timestamp()\n message_dicts, params = super()._create_message_dicts(messages, stop)\n for i, generation in enumerate(generated_responses.generations):\n response_dict, params = super()._create_message_dicts(\n [generation.message], stop\n )\n pl_request_id = await promptlayer_api_request_async(\n \"langchain.PromptLayerChatOpenAI.async\",\n \"langchain\",\n message_dicts,\n params,\n self.pl_tags,\n response_dict,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None or not isinstance(\n generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses\n @property\n def _llm_type(self) -> str:\n return \"promptlayer-openai-chat\"\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n return {\n **super()._identifying_params,\n \"pl_tags\": self.pl_tags,\n \"return_pl_id\": self.return_pl_id,\n }\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/promptlayer_openai.html"}
+{"id": "0c1e4b78ffca-0", "text": "Source code for langchain.chat_models.vertexai\n\"\"\"Wrapper around Google VertexAI chat-based models.\"\"\"\nfrom dataclasses import dataclass, field\nfrom typing import Dict, List, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.llms.vertexai import _VertexAICommon\nfrom langchain.schema import (\n AIMessage,\n BaseMessage,\n ChatGeneration,\n ChatResult,\n HumanMessage,\n SystemMessage,\n)\nfrom langchain.utilities.vertexai import raise_vertex_import_error\n@dataclass\nclass _MessagePair:\n \"\"\"InputOutputTextPair represents a pair of input and output texts.\"\"\"\n question: HumanMessage\n answer: AIMessage\n@dataclass\nclass _ChatHistory:\n \"\"\"InputOutputTextPair represents a pair of input and output texts.\"\"\"\n history: List[_MessagePair] = field(default_factory=list)\n system_message: Optional[SystemMessage] = None\ndef _parse_chat_history(history: List[BaseMessage]) -> _ChatHistory:\n \"\"\"Parse a sequence of messages into history.\n A sequence should be either (SystemMessage, HumanMessage, AIMessage,\n HumanMessage, AIMessage, ...) or (HumanMessage, AIMessage, HumanMessage,\n AIMessage, ...).\n Args:\n history: The list of messages to re-create the history of the chat.\n Returns:\n A parsed chat history.\n Raises:\n ValueError: If a sequence of message is odd, or a human message is not followed\n by a message from AI (e.g., Human, Human, AI or AI, AI, Human).\n \"\"\"\n if not history:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/vertexai.html"}
+{"id": "0c1e4b78ffca-1", "text": "\"\"\"\n if not history:\n return _ChatHistory()\n first_message = history[0]\n system_message = first_message if isinstance(first_message, SystemMessage) else None\n chat_history = _ChatHistory(system_message=system_message)\n messages_left = history[1:] if system_message else history\n if len(messages_left) % 2 != 0:\n raise ValueError(\n f\"Amount of messages in history should be even, got {len(messages_left)}!\"\n )\n for question, answer in zip(messages_left[::2], messages_left[1::2]):\n if not isinstance(question, HumanMessage) or not isinstance(answer, AIMessage):\n raise ValueError(\n \"A human message should follow a bot one, \"\n f\"got {question.type}, {answer.type}.\"\n )\n chat_history.history.append(_MessagePair(question=question, answer=answer))\n return chat_history\n[docs]class ChatVertexAI(_VertexAICommon, BaseChatModel):\n \"\"\"Wrapper around Vertex AI large language models.\"\"\"\n model_name: str = \"chat-bison\"\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in environment.\"\"\"\n cls._try_init_vertexai(values)\n try:\n from vertexai.preview.language_models import ChatModel\n except ImportError:\n raise_vertex_import_error()\n values[\"client\"] = ChatModel.from_pretrained(values[\"model_name\"])\n return values\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> ChatResult:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/vertexai.html"}
+{"id": "0c1e4b78ffca-2", "text": ") -> ChatResult:\n \"\"\"Generate next turn in the conversation.\n Args:\n messages: The history of the conversation as a list of messages.\n stop: The list of stop words (optional).\n run_manager: The Callbackmanager for LLM run, it's not used at the moment.\n Returns:\n The ChatResult that contains outputs generated by the model.\n Raises:\n ValueError: if the last message in the list is not from human.\n \"\"\"\n if not messages:\n raise ValueError(\n \"You should provide at least one message to start the chat!\"\n )\n question = messages[-1]\n if not isinstance(question, HumanMessage):\n raise ValueError(\n f\"Last message in the list should be from human, got {question.type}.\"\n )\n history = _parse_chat_history(messages[:-1])\n context = history.system_message.content if history.system_message else None\n chat = self.client.start_chat(context=context, **self._default_params)\n for pair in history.history:\n chat._history.append((pair.question.content, pair.answer.content))\n response = chat.send_message(question.content, **self._default_params)\n text = self._enforce_stop_words(response.text, stop)\n return ChatResult(generations=[ChatGeneration(message=AIMessage(content=text))])\n async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n ) -> ChatResult:\n raise NotImplementedError(\n \"\"\"Vertex AI doesn't support async requests at the moment.\"\"\"\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/vertexai.html"}
+{"id": "0c1e4b78ffca-3", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/vertexai.html"}
+{"id": "ef05ca6fd5bd-0", "text": "Source code for langchain.chat_models.openai\n\"\"\"OpenAI chat wrapper.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport sys\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n List,\n Mapping,\n Optional,\n Tuple,\n Union,\n)\nfrom pydantic import Extra, Field, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.schema import (\n AIMessage,\n BaseMessage,\n ChatGeneration,\n ChatMessage,\n ChatResult,\n HumanMessage,\n SystemMessage,\n)\nfrom langchain.utils import get_from_dict_or_env\nif TYPE_CHECKING:\n import tiktoken\nlogger = logging.getLogger(__name__)\ndef _import_tiktoken() -> Any:\n try:\n import tiktoken\n except ImportError:\n raise ValueError(\n \"Could not import tiktoken python package. \"\n \"This is needed in order to calculate get_token_ids. \"\n \"Please install it with `pip install tiktoken`.\"\n )\n return tiktoken\ndef _create_retry_decorator(llm: ChatOpenAI) -> Callable[[Any], Any]:\n import openai\n min_seconds = 1\n max_seconds = 60\n # Wait 2^x * 1 second between each retry starting with\n # 4 seconds, then up to 10 seconds, then 10 seconds afterwards\n return retry(", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"}
+{"id": "ef05ca6fd5bd-1", "text": "return retry(\n reraise=True,\n stop=stop_after_attempt(llm.max_retries),\n wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(openai.error.Timeout)\n | retry_if_exception_type(openai.error.APIError)\n | retry_if_exception_type(openai.error.APIConnectionError)\n | retry_if_exception_type(openai.error.RateLimitError)\n | retry_if_exception_type(openai.error.ServiceUnavailableError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\nasync def acompletion_with_retry(llm: ChatOpenAI, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the async completion call.\"\"\"\n retry_decorator = _create_retry_decorator(llm)\n @retry_decorator\n async def _completion_with_retry(**kwargs: Any) -> Any:\n # Use OpenAI's async api https://github.com/openai/openai-python#async-api\n return await llm.client.acreate(**kwargs)\n return await _completion_with_retry(**kwargs)\ndef _convert_dict_to_message(_dict: dict) -> BaseMessage:\n role = _dict[\"role\"]\n if role == \"user\":\n return HumanMessage(content=_dict[\"content\"])\n elif role == \"assistant\":\n return AIMessage(content=_dict[\"content\"])\n elif role == \"system\":\n return SystemMessage(content=_dict[\"content\"])\n else:\n return ChatMessage(content=_dict[\"content\"], role=role)\ndef _convert_message_to_dict(message: BaseMessage) -> dict:\n if isinstance(message, ChatMessage):\n message_dict = {\"role\": message.role, \"content\": message.content}\n elif isinstance(message, HumanMessage):", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"}
+{"id": "ef05ca6fd5bd-2", "text": "elif isinstance(message, HumanMessage):\n message_dict = {\"role\": \"user\", \"content\": message.content}\n elif isinstance(message, AIMessage):\n message_dict = {\"role\": \"assistant\", \"content\": message.content}\n elif isinstance(message, SystemMessage):\n message_dict = {\"role\": \"system\", \"content\": message.content}\n else:\n raise ValueError(f\"Got unknown type {message}\")\n if \"name\" in message.additional_kwargs:\n message_dict[\"name\"] = message.additional_kwargs[\"name\"]\n return message_dict\n[docs]class ChatOpenAI(BaseChatModel):\n \"\"\"Wrapper around OpenAI Chat large language models.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``OPENAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.chat_models import ChatOpenAI\n openai = ChatOpenAI(model_name=\"gpt-3.5-turbo\")\n \"\"\"\n client: Any #: :meta private:\n model_name: str = Field(default=\"gpt-3.5-turbo\", alias=\"model\")\n \"\"\"Model name to use.\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not explicitly specified.\"\"\"\n openai_api_key: Optional[str] = None\n \"\"\"Base URL path for API requests, \n leave blank if not using a proxy or service emulator.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"}
+{"id": "ef05ca6fd5bd-3", "text": "leave blank if not using a proxy or service emulator.\"\"\"\n openai_api_base: Optional[str] = None\n openai_organization: Optional[str] = None\n # to support explicit proxy for OpenAI\n openai_proxy: Optional[str] = None\n request_timeout: Optional[Union[float, Tuple[float, float]]] = None\n \"\"\"Timeout for requests to OpenAI completion API. Default is 600 seconds.\"\"\"\n max_retries: int = 6\n \"\"\"Maximum number of retries to make when generating.\"\"\"\n streaming: bool = False\n \"\"\"Whether to stream the results or not.\"\"\"\n n: int = 1\n \"\"\"Number of chat completions to generate for each prompt.\"\"\"\n max_tokens: Optional[int] = None\n \"\"\"Maximum number of tokens to generate.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.ignore\n allow_population_by_field_name = True\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = cls.all_required_field_names()\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n if field_name not in all_required_field_names:\n logger.warning(\n f\"\"\"WARNING! {field_name} is not default parameter.\n {field_name} was transferred to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n invalid_model_kwargs = all_required_field_names.intersection(extra.keys())\n if invalid_model_kwargs:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"}
+{"id": "ef05ca6fd5bd-4", "text": "if invalid_model_kwargs:\n raise ValueError(\n f\"Parameters {invalid_model_kwargs} should be specified explicitly. \"\n f\"Instead they were passed in as part of `model_kwargs` parameter.\"\n )\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n values[\"openai_api_key\"] = get_from_dict_or_env(\n values, \"openai_api_key\", \"OPENAI_API_KEY\"\n )\n values[\"openai_organization\"] = get_from_dict_or_env(\n values,\n \"openai_organization\",\n \"OPENAI_ORGANIZATION\",\n default=\"\",\n )\n values[\"openai_api_base\"] = get_from_dict_or_env(\n values,\n \"openai_api_base\",\n \"OPENAI_API_BASE\",\n default=\"\",\n )\n values[\"openai_proxy\"] = get_from_dict_or_env(\n values,\n \"openai_proxy\",\n \"OPENAI_PROXY\",\n default=\"\",\n )\n try:\n import openai\n except ImportError:\n raise ValueError(\n \"Could not import openai python package. \"\n \"Please install it with `pip install openai`.\"\n )\n try:\n values[\"client\"] = openai.ChatCompletion\n except AttributeError:\n raise ValueError(\n \"`openai` has no `ChatCompletion` attribute, this is likely \"\n \"due to an old version of the openai package. Try upgrading it \"\n \"with `pip install --upgrade openai`.\"\n )\n if values[\"n\"] < 1:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"}
+{"id": "ef05ca6fd5bd-5", "text": ")\n if values[\"n\"] < 1:\n raise ValueError(\"n must be at least 1.\")\n if values[\"n\"] > 1 and values[\"streaming\"]:\n raise ValueError(\"n must be 1 when streaming.\")\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling OpenAI API.\"\"\"\n return {\n \"model\": self.model_name,\n \"request_timeout\": self.request_timeout,\n \"max_tokens\": self.max_tokens,\n \"stream\": self.streaming,\n \"n\": self.n,\n \"temperature\": self.temperature,\n **self.model_kwargs,\n }\n def _create_retry_decorator(self) -> Callable[[Any], Any]:\n import openai\n min_seconds = 1\n max_seconds = 60\n # Wait 2^x * 1 second between each retry starting with\n # 4 seconds, then up to 10 seconds, then 10 seconds afterwards\n return retry(\n reraise=True,\n stop=stop_after_attempt(self.max_retries),\n wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(openai.error.Timeout)\n | retry_if_exception_type(openai.error.APIError)\n | retry_if_exception_type(openai.error.APIConnectionError)\n | retry_if_exception_type(openai.error.RateLimitError)\n | retry_if_exception_type(openai.error.ServiceUnavailableError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\n[docs] def completion_with_retry(self, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the completion call.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"}
+{"id": "ef05ca6fd5bd-6", "text": "\"\"\"Use tenacity to retry the completion call.\"\"\"\n retry_decorator = self._create_retry_decorator()\n @retry_decorator\n def _completion_with_retry(**kwargs: Any) -> Any:\n return self.client.create(**kwargs)\n return _completion_with_retry(**kwargs)\n def _combine_llm_outputs(self, llm_outputs: List[Optional[dict]]) -> dict:\n overall_token_usage: dict = {}\n for output in llm_outputs:\n if output is None:\n # Happens in streaming\n continue\n token_usage = output[\"token_usage\"]\n for k, v in token_usage.items():\n if k in overall_token_usage:\n overall_token_usage[k] += v\n else:\n overall_token_usage[k] = v\n return {\"token_usage\": overall_token_usage, \"model_name\": self.model_name}\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> ChatResult:\n message_dicts, params = self._create_message_dicts(messages, stop)\n if self.streaming:\n inner_completion = \"\"\n role = \"assistant\"\n params[\"stream\"] = True\n for stream_resp in self.completion_with_retry(\n messages=message_dicts, **params\n ):\n role = stream_resp[\"choices\"][0][\"delta\"].get(\"role\", role)\n token = stream_resp[\"choices\"][0][\"delta\"].get(\"content\", \"\")\n inner_completion += token\n if run_manager:\n run_manager.on_llm_new_token(token)\n message = _convert_dict_to_message(\n {\"content\": inner_completion, \"role\": role}\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"}
+{"id": "ef05ca6fd5bd-7", "text": "{\"content\": inner_completion, \"role\": role}\n )\n return ChatResult(generations=[ChatGeneration(message=message)])\n response = self.completion_with_retry(messages=message_dicts, **params)\n return self._create_chat_result(response)\n def _create_message_dicts(\n self, messages: List[BaseMessage], stop: Optional[List[str]]\n ) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:\n params = dict(self._invocation_params)\n if stop is not None:\n if \"stop\" in params:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params[\"stop\"] = stop\n message_dicts = [_convert_message_to_dict(m) for m in messages]\n return message_dicts, params\n def _create_chat_result(self, response: Mapping[str, Any]) -> ChatResult:\n generations = []\n for res in response[\"choices\"]:\n message = _convert_dict_to_message(res[\"message\"])\n gen = ChatGeneration(message=message)\n generations.append(gen)\n llm_output = {\"token_usage\": response[\"usage\"], \"model_name\": self.model_name}\n return ChatResult(generations=generations, llm_output=llm_output)\n async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n ) -> ChatResult:\n message_dicts, params = self._create_message_dicts(messages, stop)\n if self.streaming:\n inner_completion = \"\"\n role = \"assistant\"\n params[\"stream\"] = True\n async for stream_resp in await acompletion_with_retry(", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"}
+{"id": "ef05ca6fd5bd-8", "text": "async for stream_resp in await acompletion_with_retry(\n self, messages=message_dicts, **params\n ):\n role = stream_resp[\"choices\"][0][\"delta\"].get(\"role\", role)\n token = stream_resp[\"choices\"][0][\"delta\"].get(\"content\", \"\")\n inner_completion += token\n if run_manager:\n await run_manager.on_llm_new_token(token)\n message = _convert_dict_to_message(\n {\"content\": inner_completion, \"role\": role}\n )\n return ChatResult(generations=[ChatGeneration(message=message)])\n else:\n response = await acompletion_with_retry(\n self, messages=message_dicts, **params\n )\n return self._create_chat_result(response)\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_name\": self.model_name}, **self._default_params}\n @property\n def _invocation_params(self) -> Mapping[str, Any]:\n \"\"\"Get the parameters used to invoke the model.\"\"\"\n openai_creds: Dict[str, Any] = {\n \"api_key\": self.openai_api_key,\n \"api_base\": self.openai_api_base,\n \"organization\": self.openai_organization,\n \"model\": self.model_name,\n }\n if self.openai_proxy:\n import openai\n openai.proxy = {\"http\": self.openai_proxy, \"https\": self.openai_proxy} # type: ignore[assignment] # noqa: E501\n return {**openai_creds, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of chat model.\"\"\"\n return \"openai-chat\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"}
+{"id": "ef05ca6fd5bd-9", "text": "\"\"\"Return type of chat model.\"\"\"\n return \"openai-chat\"\n def _get_encoding_model(self) -> Tuple[str, tiktoken.Encoding]:\n tiktoken_ = _import_tiktoken()\n model = self.model_name\n if model == \"gpt-3.5-turbo\":\n # gpt-3.5-turbo may change over time.\n # Returning num tokens assuming gpt-3.5-turbo-0301.\n model = \"gpt-3.5-turbo-0301\"\n elif model == \"gpt-4\":\n # gpt-4 may change over time.\n # Returning num tokens assuming gpt-4-0314.\n model = \"gpt-4-0314\"\n # Returns the number of tokens used by a list of messages.\n try:\n encoding = tiktoken_.encoding_for_model(model)\n except KeyError:\n logger.warning(\"Warning: model not found. Using cl100k_base encoding.\")\n model = \"cl100k_base\"\n encoding = tiktoken_.get_encoding(model)\n return model, encoding\n[docs] def get_token_ids(self, text: str) -> List[int]:\n \"\"\"Get the tokens present in the text with tiktoken package.\"\"\"\n # tiktoken NOT supported for Python 3.7 or below\n if sys.version_info[1] <= 7:\n return super().get_token_ids(text)\n _, encoding_model = self._get_encoding_model()\n return encoding_model.encode(text)\n[docs] def get_num_tokens_from_messages(self, messages: List[BaseMessage]) -> int:\n \"\"\"Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"}
+{"id": "ef05ca6fd5bd-10", "text": "Official documentation: https://github.com/openai/openai-cookbook/blob/\n main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb\"\"\"\n if sys.version_info[1] <= 7:\n return super().get_num_tokens_from_messages(messages)\n model, encoding = self._get_encoding_model()\n if model == \"gpt-3.5-turbo-0301\":\n # every message follows {role/name}\\n{content}\\n\n tokens_per_message = 4\n # if there's a name, the role is omitted\n tokens_per_name = -1\n elif model == \"gpt-4-0314\":\n tokens_per_message = 3\n tokens_per_name = 1\n else:\n raise NotImplementedError(\n f\"get_num_tokens_from_messages() is not presently implemented \"\n f\"for model {model}.\"\n \"See https://github.com/openai/openai-python/blob/main/chatml.md for \"\n \"information on how messages are converted to tokens.\"\n )\n num_tokens = 0\n messages_dict = [_convert_message_to_dict(m) for m in messages]\n for message in messages_dict:\n num_tokens += tokens_per_message\n for key, value in message.items():\n num_tokens += len(encoding.encode(value))\n if key == \"name\":\n num_tokens += tokens_per_name\n # every reply is primed with assistant\n num_tokens += 3\n return num_tokens\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"}
+{"id": "9c94f72abc3f-0", "text": "Source code for langchain.chat_models.google_palm\n\"\"\"Wrapper around Google's PaLM Chat API.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import TYPE_CHECKING, Any, Callable, Dict, List, Mapping, Optional\nfrom pydantic import BaseModel, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.schema import (\n AIMessage,\n BaseMessage,\n ChatGeneration,\n ChatMessage,\n ChatResult,\n HumanMessage,\n SystemMessage,\n)\nfrom langchain.utils import get_from_dict_or_env\nif TYPE_CHECKING:\n import google.generativeai as genai\nlogger = logging.getLogger(__name__)\nclass ChatGooglePalmError(Exception):\n pass\ndef _truncate_at_stop_tokens(\n text: str,\n stop: Optional[List[str]],\n) -> str:\n \"\"\"Truncates text at the earliest stop token found.\"\"\"\n if stop is None:\n return text\n for stop_token in stop:\n stop_token_idx = text.find(stop_token)\n if stop_token_idx != -1:\n text = text[:stop_token_idx]\n return text\ndef _response_to_result(\n response: genai.types.ChatResponse,\n stop: Optional[List[str]],\n) -> ChatResult:\n \"\"\"Converts a PaLM API response into a LangChain ChatResult.\"\"\"\n if not response.candidates:\n raise ChatGooglePalmError(\"ChatResponse must have at least one candidate.\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"}
+{"id": "9c94f72abc3f-1", "text": "raise ChatGooglePalmError(\"ChatResponse must have at least one candidate.\")\n generations: List[ChatGeneration] = []\n for candidate in response.candidates:\n author = candidate.get(\"author\")\n if author is None:\n raise ChatGooglePalmError(f\"ChatResponse must have an author: {candidate}\")\n content = _truncate_at_stop_tokens(candidate.get(\"content\", \"\"), stop)\n if content is None:\n raise ChatGooglePalmError(f\"ChatResponse must have a content: {candidate}\")\n if author == \"ai\":\n generations.append(\n ChatGeneration(text=content, message=AIMessage(content=content))\n )\n elif author == \"human\":\n generations.append(\n ChatGeneration(\n text=content,\n message=HumanMessage(content=content),\n )\n )\n else:\n generations.append(\n ChatGeneration(\n text=content,\n message=ChatMessage(role=author, content=content),\n )\n )\n return ChatResult(generations=generations)\ndef _messages_to_prompt_dict(\n input_messages: List[BaseMessage],\n) -> genai.types.MessagePromptDict:\n \"\"\"Converts a list of LangChain messages into a PaLM API MessagePrompt structure.\"\"\"\n import google.generativeai as genai\n context: str = \"\"\n examples: List[genai.types.MessageDict] = []\n messages: List[genai.types.MessageDict] = []\n remaining = list(enumerate(input_messages))\n while remaining:\n index, input_message = remaining.pop(0)\n if isinstance(input_message, SystemMessage):\n if index != 0:\n raise ChatGooglePalmError(\"System message must be first input message.\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"}
+{"id": "9c94f72abc3f-2", "text": "raise ChatGooglePalmError(\"System message must be first input message.\")\n context = input_message.content\n elif isinstance(input_message, HumanMessage) and input_message.example:\n if messages:\n raise ChatGooglePalmError(\n \"Message examples must come before other messages.\"\n )\n _, next_input_message = remaining.pop(0)\n if isinstance(next_input_message, AIMessage) and next_input_message.example:\n examples.extend(\n [\n genai.types.MessageDict(\n author=\"human\", content=input_message.content\n ),\n genai.types.MessageDict(\n author=\"ai\", content=next_input_message.content\n ),\n ]\n )\n else:\n raise ChatGooglePalmError(\n \"Human example message must be immediately followed by an \"\n \" AI example response.\"\n )\n elif isinstance(input_message, AIMessage) and input_message.example:\n raise ChatGooglePalmError(\n \"AI example message must be immediately preceded by a Human \"\n \"example message.\"\n )\n elif isinstance(input_message, AIMessage):\n messages.append(\n genai.types.MessageDict(author=\"ai\", content=input_message.content)\n )\n elif isinstance(input_message, HumanMessage):\n messages.append(\n genai.types.MessageDict(author=\"human\", content=input_message.content)\n )\n elif isinstance(input_message, ChatMessage):\n messages.append(\n genai.types.MessageDict(\n author=input_message.role, content=input_message.content\n )\n )\n else:\n raise ChatGooglePalmError(\n \"Messages without an explicit role not supported by PaLM API.\"\n )\n return genai.types.MessagePromptDict(\n context=context,\n examples=examples,", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"}
+{"id": "9c94f72abc3f-3", "text": "context=context,\n examples=examples,\n messages=messages,\n )\ndef _create_retry_decorator() -> Callable[[Any], Any]:\n \"\"\"Returns a tenacity retry decorator, preconfigured to handle PaLM exceptions\"\"\"\n import google.api_core.exceptions\n multiplier = 2\n min_seconds = 1\n max_seconds = 60\n max_retries = 10\n return retry(\n reraise=True,\n stop=stop_after_attempt(max_retries),\n wait=wait_exponential(multiplier=multiplier, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(google.api_core.exceptions.ResourceExhausted)\n | retry_if_exception_type(google.api_core.exceptions.ServiceUnavailable)\n | retry_if_exception_type(google.api_core.exceptions.GoogleAPIError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\ndef chat_with_retry(llm: ChatGooglePalm, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the completion call.\"\"\"\n retry_decorator = _create_retry_decorator()\n @retry_decorator\n def _chat_with_retry(**kwargs: Any) -> Any:\n return llm.client.chat(**kwargs)\n return _chat_with_retry(**kwargs)\nasync def achat_with_retry(llm: ChatGooglePalm, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the async completion call.\"\"\"\n retry_decorator = _create_retry_decorator()\n @retry_decorator\n async def _achat_with_retry(**kwargs: Any) -> Any:\n # Use OpenAI's async api https://github.com/openai/openai-python#async-api\n return await llm.client.chat_async(**kwargs)\n return await _achat_with_retry(**kwargs)", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"}
+{"id": "9c94f72abc3f-4", "text": "return await _achat_with_retry(**kwargs)\n[docs]class ChatGooglePalm(BaseChatModel, BaseModel):\n \"\"\"Wrapper around Google's PaLM Chat API.\n To use you must have the google.generativeai Python package installed and\n either:\n 1. The ``GOOGLE_API_KEY``` environment varaible set with your API key, or\n 2. Pass your API key using the google_api_key kwarg to the ChatGoogle\n constructor.\n Example:\n .. code-block:: python\n from langchain.chat_models import ChatGooglePalm\n chat = ChatGooglePalm()\n \"\"\"\n client: Any #: :meta private:\n model_name: str = \"models/chat-bison-001\"\n \"\"\"Model name to use.\"\"\"\n google_api_key: Optional[str] = None\n temperature: Optional[float] = None\n \"\"\"Run inference with this temperature. Must by in the closed\n interval [0.0, 1.0].\"\"\"\n top_p: Optional[float] = None\n \"\"\"Decode using nucleus sampling: consider the smallest set of tokens whose\n probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].\"\"\"\n top_k: Optional[int] = None\n \"\"\"Decode using top-k sampling: consider the set of top_k most probable tokens.\n Must be positive.\"\"\"\n n: int = 1\n \"\"\"Number of chat completions to generate for each prompt. Note that the API may\n not return the full n completions if duplicates are generated.\"\"\"\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate api key, python package exists, temperature, top_p, and top_k.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"}
+{"id": "9c94f72abc3f-5", "text": "\"\"\"Validate api key, python package exists, temperature, top_p, and top_k.\"\"\"\n google_api_key = get_from_dict_or_env(\n values, \"google_api_key\", \"GOOGLE_API_KEY\"\n )\n try:\n import google.generativeai as genai\n genai.configure(api_key=google_api_key)\n except ImportError:\n raise ChatGooglePalmError(\n \"Could not import google.generativeai python package. \"\n \"Please install it with `pip install google-generativeai`\"\n )\n values[\"client\"] = genai\n if values[\"temperature\"] is not None and not 0 <= values[\"temperature\"] <= 1:\n raise ValueError(\"temperature must be in the range [0.0, 1.0]\")\n if values[\"top_p\"] is not None and not 0 <= values[\"top_p\"] <= 1:\n raise ValueError(\"top_p must be in the range [0.0, 1.0]\")\n if values[\"top_k\"] is not None and values[\"top_k\"] <= 0:\n raise ValueError(\"top_k must be positive\")\n return values\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> ChatResult:\n prompt = _messages_to_prompt_dict(messages)\n response: genai.types.ChatResponse = chat_with_retry(\n self,\n model=self.model_name,\n prompt=prompt,\n temperature=self.temperature,\n top_p=self.top_p,\n top_k=self.top_k,\n candidate_count=self.n,\n )\n return _response_to_result(response, stop)", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"}
+{"id": "9c94f72abc3f-6", "text": ")\n return _response_to_result(response, stop)\n async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n ) -> ChatResult:\n prompt = _messages_to_prompt_dict(messages)\n response: genai.types.ChatResponse = await achat_with_retry(\n self,\n model=self.model_name,\n prompt=prompt,\n temperature=self.temperature,\n top_p=self.top_p,\n top_k=self.top_k,\n candidate_count=self.n,\n )\n return _response_to_result(response, stop)\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model_name\": self.model_name,\n \"temperature\": self.temperature,\n \"top_p\": self.top_p,\n \"top_k\": self.top_k,\n \"n\": self.n,\n }\n @property\n def _llm_type(self) -> str:\n return \"google-palm-chat\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"}
+{"id": "e92bf4bdc192-0", "text": "Source code for langchain.chat_models.anthropic\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.llms.anthropic import _AnthropicCommon\nfrom langchain.schema import (\n AIMessage,\n BaseMessage,\n ChatGeneration,\n ChatMessage,\n ChatResult,\n HumanMessage,\n SystemMessage,\n)\n[docs]class ChatAnthropic(BaseChatModel, _AnthropicCommon):\n r\"\"\"Wrapper around Anthropic's large language model.\n To use, you should have the ``anthropic`` python package installed, and the\n environment variable ``ANTHROPIC_API_KEY`` set with your API key, or pass\n it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n import anthropic\n from langchain.llms import Anthropic\n model = ChatAnthropic(model=\"\", anthropic_api_key=\"my-api-key\")\n \"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of chat model.\"\"\"\n return \"anthropic-chat\"\n def _convert_one_message_to_text(self, message: BaseMessage) -> str:\n if isinstance(message, ChatMessage):\n message_text = f\"\\n\\n{message.role.capitalize()}: {message.content}\"\n elif isinstance(message, HumanMessage):\n message_text = f\"{self.HUMAN_PROMPT} {message.content}\"\n elif isinstance(message, AIMessage):", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/anthropic.html"}
+{"id": "e92bf4bdc192-1", "text": "elif isinstance(message, AIMessage):\n message_text = f\"{self.AI_PROMPT} {message.content}\"\n elif isinstance(message, SystemMessage):\n message_text = f\"{self.HUMAN_PROMPT} {message.content}\"\n else:\n raise ValueError(f\"Got unknown type {message}\")\n return message_text\n def _convert_messages_to_text(self, messages: List[BaseMessage]) -> str:\n \"\"\"Format a list of strings into a single string with necessary newlines.\n Args:\n messages (List[BaseMessage]): List of BaseMessage to combine.\n Returns:\n str: Combined string with necessary newlines.\n \"\"\"\n return \"\".join(\n self._convert_one_message_to_text(message) for message in messages\n )\n def _convert_messages_to_prompt(self, messages: List[BaseMessage]) -> str:\n \"\"\"Format a list of messages into a full prompt for the Anthropic model\n Args:\n messages (List[BaseMessage]): List of BaseMessage to combine.\n Returns:\n str: Combined string with necessary HUMAN_PROMPT and AI_PROMPT tags.\n \"\"\"\n if not self.AI_PROMPT:\n raise NameError(\"Please ensure the anthropic package is loaded\")\n if not isinstance(messages[-1], AIMessage):\n messages.append(AIMessage(content=\"\"))\n text = self._convert_messages_to_text(messages)\n return (\n text.rstrip()\n ) # trim off the trailing ' ' that might come from the \"Assistant: \"\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> ChatResult:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/anthropic.html"}
+{"id": "e92bf4bdc192-2", "text": ") -> ChatResult:\n prompt = self._convert_messages_to_prompt(messages)\n params: Dict[str, Any] = {\"prompt\": prompt, **self._default_params}\n if stop:\n params[\"stop_sequences\"] = stop\n if self.streaming:\n completion = \"\"\n stream_resp = self.client.completion_stream(**params)\n for data in stream_resp:\n delta = data[\"completion\"][len(completion) :]\n completion = data[\"completion\"]\n if run_manager:\n run_manager.on_llm_new_token(\n delta,\n )\n else:\n response = self.client.completion(**params)\n completion = response[\"completion\"]\n message = AIMessage(content=completion)\n return ChatResult(generations=[ChatGeneration(message=message)])\n async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n ) -> ChatResult:\n prompt = self._convert_messages_to_prompt(messages)\n params: Dict[str, Any] = {\"prompt\": prompt, **self._default_params}\n if stop:\n params[\"stop_sequences\"] = stop\n if self.streaming:\n completion = \"\"\n stream_resp = await self.client.acompletion_stream(**params)\n async for data in stream_resp:\n delta = data[\"completion\"][len(completion) :]\n completion = data[\"completion\"]\n if run_manager:\n await run_manager.on_llm_new_token(\n delta,\n )\n else:\n response = await self.client.acompletion(**params)\n completion = response[\"completion\"]\n message = AIMessage(content=completion)", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/anthropic.html"}
+{"id": "e92bf4bdc192-3", "text": "completion = response[\"completion\"]\n message = AIMessage(content=completion)\n return ChatResult(generations=[ChatGeneration(message=message)])\n[docs] def get_num_tokens(self, text: str) -> int:\n \"\"\"Calculate number of tokens.\"\"\"\n if not self.count_tokens:\n raise NameError(\"Please ensure the anthropic package is loaded\")\n return self.count_tokens(text)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chat_models/anthropic.html"}
+{"id": "fc6046c4dbfd-0", "text": "Source code for langchain.prompts.few_shot\n\"\"\"Prompt template that contains few shot examples.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.prompts.base import (\n DEFAULT_FORMATTER_MAPPING,\n StringPromptTemplate,\n check_valid_template,\n)\nfrom langchain.prompts.example_selector.base import BaseExampleSelector\nfrom langchain.prompts.prompt import PromptTemplate\n[docs]class FewShotPromptTemplate(StringPromptTemplate):\n \"\"\"Prompt template that contains few shot examples.\"\"\"\n examples: Optional[List[dict]] = None\n \"\"\"Examples to format into the prompt.\n Either this or example_selector should be provided.\"\"\"\n example_selector: Optional[BaseExampleSelector] = None\n \"\"\"ExampleSelector to choose the examples to format into the prompt.\n Either this or examples should be provided.\"\"\"\n example_prompt: PromptTemplate\n \"\"\"PromptTemplate used to format an individual example.\"\"\"\n suffix: str\n \"\"\"A prompt template string to put after the examples.\"\"\"\n input_variables: List[str]\n \"\"\"A list of the names of the variables the prompt template expects.\"\"\"\n example_separator: str = \"\\n\\n\"\n \"\"\"String separator used to join the prefix, the examples, and suffix.\"\"\"\n prefix: str = \"\"\n \"\"\"A prompt template string to put before the examples.\"\"\"\n template_format: str = \"f-string\"\n \"\"\"The format of the prompt template. Options are: 'f-string', 'jinja2'.\"\"\"\n validate_template: bool = True\n \"\"\"Whether or not to try validating the template.\"\"\"\n @root_validator(pre=True)\n def check_examples_and_selector(cls, values: Dict) -> Dict:\n \"\"\"Check that one and only one of examples/example_selector are provided.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/few_shot.html"}
+{"id": "fc6046c4dbfd-1", "text": "\"\"\"Check that one and only one of examples/example_selector are provided.\"\"\"\n examples = values.get(\"examples\", None)\n example_selector = values.get(\"example_selector\", None)\n if examples and example_selector:\n raise ValueError(\n \"Only one of 'examples' and 'example_selector' should be provided\"\n )\n if examples is None and example_selector is None:\n raise ValueError(\n \"One of 'examples' and 'example_selector' should be provided\"\n )\n return values\n @root_validator()\n def template_is_valid(cls, values: Dict) -> Dict:\n \"\"\"Check that prefix, suffix and input variables are consistent.\"\"\"\n if values[\"validate_template\"]:\n check_valid_template(\n values[\"prefix\"] + values[\"suffix\"],\n values[\"template_format\"],\n values[\"input_variables\"] + list(values[\"partial_variables\"]),\n )\n return values\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n def _get_examples(self, **kwargs: Any) -> List[dict]:\n if self.examples is not None:\n return self.examples\n elif self.example_selector is not None:\n return self.example_selector.select_examples(kwargs)\n else:\n raise ValueError\n[docs] def format(self, **kwargs: Any) -> str:\n \"\"\"Format the prompt with the inputs.\n Args:\n kwargs: Any arguments to be passed to the prompt template.\n Returns:\n A formatted string.\n Example:\n .. code-block:: python\n prompt.format(variable1=\"foo\")\n \"\"\"\n kwargs = self._merge_partial_and_user_variables(**kwargs)\n # Get the examples to use.", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/few_shot.html"}
+{"id": "fc6046c4dbfd-2", "text": "# Get the examples to use.\n examples = self._get_examples(**kwargs)\n examples = [\n {k: e[k] for k in self.example_prompt.input_variables} for e in examples\n ]\n # Format the examples.\n example_strings = [\n self.example_prompt.format(**example) for example in examples\n ]\n # Create the overall template.\n pieces = [self.prefix, *example_strings, self.suffix]\n template = self.example_separator.join([piece for piece in pieces if piece])\n # Format the template with the input variables.\n return DEFAULT_FORMATTER_MAPPING[self.template_format](template, **kwargs)\n @property\n def _prompt_type(self) -> str:\n \"\"\"Return the prompt type key.\"\"\"\n return \"few_shot\"\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return a dictionary of the prompt.\"\"\"\n if self.example_selector:\n raise ValueError(\"Saving an example selector is not currently supported\")\n return super().dict(**kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/few_shot.html"}
+{"id": "96fca020a6dd-0", "text": "Source code for langchain.prompts.chat\n\"\"\"Chat prompt template.\"\"\"\nfrom __future__ import annotations\nfrom abc import ABC, abstractmethod\nfrom pathlib import Path\nfrom typing import Any, Callable, List, Sequence, Tuple, Type, TypeVar, Union\nfrom pydantic import BaseModel, Field\nfrom langchain.memory.buffer import get_buffer_string\nfrom langchain.prompts.base import BasePromptTemplate, StringPromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import (\n AIMessage,\n BaseMessage,\n ChatMessage,\n HumanMessage,\n PromptValue,\n SystemMessage,\n)\nclass BaseMessagePromptTemplate(BaseModel, ABC):\n @abstractmethod\n def format_messages(self, **kwargs: Any) -> List[BaseMessage]:\n \"\"\"To messages.\"\"\"\n @property\n @abstractmethod\n def input_variables(self) -> List[str]:\n \"\"\"Input variables for this prompt template.\"\"\"\n[docs]class MessagesPlaceholder(BaseMessagePromptTemplate):\n \"\"\"Prompt template that assumes variable is already list of messages.\"\"\"\n variable_name: str\n[docs] def format_messages(self, **kwargs: Any) -> List[BaseMessage]:\n \"\"\"To a BaseMessage.\"\"\"\n value = kwargs[self.variable_name]\n if not isinstance(value, list):\n raise ValueError(\n f\"variable {self.variable_name} should be a list of base messages, \"\n f\"got {value}\"\n )\n for v in value:\n if not isinstance(v, BaseMessage):\n raise ValueError(\n f\"variable {self.variable_name} should be a list of base messages,\"\n f\" got {value}\"\n )\n return value\n @property\n def input_variables(self) -> List[str]:\n \"\"\"Input variables for this prompt template.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/chat.html"}
+{"id": "96fca020a6dd-1", "text": "\"\"\"Input variables for this prompt template.\"\"\"\n return [self.variable_name]\nMessagePromptTemplateT = TypeVar(\n \"MessagePromptTemplateT\", bound=\"BaseStringMessagePromptTemplate\"\n)\nclass BaseStringMessagePromptTemplate(BaseMessagePromptTemplate, ABC):\n prompt: StringPromptTemplate\n additional_kwargs: dict = Field(default_factory=dict)\n @classmethod\n def from_template(\n cls: Type[MessagePromptTemplateT],\n template: str,\n template_format: str = \"f-string\",\n **kwargs: Any,\n ) -> MessagePromptTemplateT:\n prompt = PromptTemplate.from_template(template, template_format=template_format)\n return cls(prompt=prompt, **kwargs)\n @classmethod\n def from_template_file(\n cls: Type[MessagePromptTemplateT],\n template_file: Union[str, Path],\n input_variables: List[str],\n **kwargs: Any,\n ) -> MessagePromptTemplateT:\n prompt = PromptTemplate.from_file(template_file, input_variables)\n return cls(prompt=prompt, **kwargs)\n @abstractmethod\n def format(self, **kwargs: Any) -> BaseMessage:\n \"\"\"To a BaseMessage.\"\"\"\n def format_messages(self, **kwargs: Any) -> List[BaseMessage]:\n return [self.format(**kwargs)]\n @property\n def input_variables(self) -> List[str]:\n return self.prompt.input_variables\nclass ChatMessagePromptTemplate(BaseStringMessagePromptTemplate):\n role: str\n def format(self, **kwargs: Any) -> BaseMessage:\n text = self.prompt.format(**kwargs)\n return ChatMessage(\n content=text, role=self.role, additional_kwargs=self.additional_kwargs\n )\nclass HumanMessagePromptTemplate(BaseStringMessagePromptTemplate):", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/chat.html"}
+{"id": "96fca020a6dd-2", "text": ")\nclass HumanMessagePromptTemplate(BaseStringMessagePromptTemplate):\n def format(self, **kwargs: Any) -> BaseMessage:\n text = self.prompt.format(**kwargs)\n return HumanMessage(content=text, additional_kwargs=self.additional_kwargs)\nclass AIMessagePromptTemplate(BaseStringMessagePromptTemplate):\n def format(self, **kwargs: Any) -> BaseMessage:\n text = self.prompt.format(**kwargs)\n return AIMessage(content=text, additional_kwargs=self.additional_kwargs)\nclass SystemMessagePromptTemplate(BaseStringMessagePromptTemplate):\n def format(self, **kwargs: Any) -> BaseMessage:\n text = self.prompt.format(**kwargs)\n return SystemMessage(content=text, additional_kwargs=self.additional_kwargs)\nclass ChatPromptValue(PromptValue):\n messages: List[BaseMessage]\n def to_string(self) -> str:\n \"\"\"Return prompt as string.\"\"\"\n return get_buffer_string(self.messages)\n def to_messages(self) -> List[BaseMessage]:\n \"\"\"Return prompt as messages.\"\"\"\n return self.messages\n[docs]class BaseChatPromptTemplate(BasePromptTemplate, ABC):\n[docs] def format(self, **kwargs: Any) -> str:\n return self.format_prompt(**kwargs).to_string()\n[docs] def format_prompt(self, **kwargs: Any) -> PromptValue:\n messages = self.format_messages(**kwargs)\n return ChatPromptValue(messages=messages)\n[docs] @abstractmethod\n def format_messages(self, **kwargs: Any) -> List[BaseMessage]:\n \"\"\"Format kwargs into a list of messages.\"\"\"\n[docs]class ChatPromptTemplate(BaseChatPromptTemplate, ABC):\n input_variables: List[str]\n messages: List[Union[BaseMessagePromptTemplate, BaseMessage]]\n @classmethod", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/chat.html"}
+{"id": "96fca020a6dd-3", "text": "messages: List[Union[BaseMessagePromptTemplate, BaseMessage]]\n @classmethod\n def from_template(cls, template: str, **kwargs: Any) -> ChatPromptTemplate:\n prompt_template = PromptTemplate.from_template(template, **kwargs)\n message = HumanMessagePromptTemplate(prompt=prompt_template)\n return cls.from_messages([message])\n @classmethod\n def from_role_strings(\n cls, string_messages: List[Tuple[str, str]]\n ) -> ChatPromptTemplate:\n messages = [\n ChatMessagePromptTemplate(\n prompt=PromptTemplate.from_template(template), role=role\n )\n for role, template in string_messages\n ]\n return cls.from_messages(messages)\n @classmethod\n def from_strings(\n cls, string_messages: List[Tuple[Type[BaseMessagePromptTemplate], str]]\n ) -> ChatPromptTemplate:\n messages = [\n role(prompt=PromptTemplate.from_template(template))\n for role, template in string_messages\n ]\n return cls.from_messages(messages)\n @classmethod\n def from_messages(\n cls, messages: Sequence[Union[BaseMessagePromptTemplate, BaseMessage]]\n ) -> ChatPromptTemplate:\n input_vars = set()\n for message in messages:\n if isinstance(message, BaseMessagePromptTemplate):\n input_vars.update(message.input_variables)\n return cls(input_variables=list(input_vars), messages=messages)\n[docs] def format(self, **kwargs: Any) -> str:\n return self.format_prompt(**kwargs).to_string()\n[docs] def format_messages(self, **kwargs: Any) -> List[BaseMessage]:\n kwargs = self._merge_partial_and_user_variables(**kwargs)\n result = []\n for message_template in self.messages:", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/chat.html"}
+{"id": "96fca020a6dd-4", "text": "result = []\n for message_template in self.messages:\n if isinstance(message_template, BaseMessage):\n result.extend([message_template])\n elif isinstance(message_template, BaseMessagePromptTemplate):\n rel_params = {\n k: v\n for k, v in kwargs.items()\n if k in message_template.input_variables\n }\n message = message_template.format_messages(**rel_params)\n result.extend(message)\n else:\n raise ValueError(f\"Unexpected input: {message_template}\")\n return result\n[docs] def partial(self, **kwargs: Union[str, Callable[[], str]]) -> BasePromptTemplate:\n raise NotImplementedError\n @property\n def _prompt_type(self) -> str:\n raise NotImplementedError\n[docs] def save(self, file_path: Union[Path, str]) -> None:\n raise NotImplementedError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/chat.html"}
+{"id": "9d63fb169c0b-0", "text": "Source code for langchain.prompts.base\n\"\"\"BasePrompt schema definition.\"\"\"\nfrom __future__ import annotations\nimport json\nfrom abc import ABC, abstractmethod\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, List, Mapping, Optional, Set, Union\nimport yaml\nfrom pydantic import BaseModel, Extra, Field, root_validator\nfrom langchain.formatting import formatter\nfrom langchain.schema import BaseMessage, BaseOutputParser, HumanMessage, PromptValue\ndef jinja2_formatter(template: str, **kwargs: Any) -> str:\n \"\"\"Format a template using jinja2.\"\"\"\n try:\n from jinja2 import Template\n except ImportError:\n raise ImportError(\n \"jinja2 not installed, which is needed to use the jinja2_formatter. \"\n \"Please install it with `pip install jinja2`.\"\n )\n return Template(template).render(**kwargs)\ndef validate_jinja2(template: str, input_variables: List[str]) -> None:\n input_variables_set = set(input_variables)\n valid_variables = _get_jinja2_variables_from_template(template)\n missing_variables = valid_variables - input_variables_set\n extra_variables = input_variables_set - valid_variables\n error_message = \"\"\n if missing_variables:\n error_message += f\"Missing variables: {missing_variables} \"\n if extra_variables:\n error_message += f\"Extra variables: {extra_variables}\"\n if error_message:\n raise KeyError(error_message.strip())\ndef _get_jinja2_variables_from_template(template: str) -> Set[str]:\n try:\n from jinja2 import Environment, meta\n except ImportError:\n raise ImportError(\n \"jinja2 not installed, which is needed to use the jinja2_formatter. \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/base.html"}
+{"id": "9d63fb169c0b-1", "text": "\"Please install it with `pip install jinja2`.\"\n )\n env = Environment()\n ast = env.parse(template)\n variables = meta.find_undeclared_variables(ast)\n return variables\nDEFAULT_FORMATTER_MAPPING: Dict[str, Callable] = {\n \"f-string\": formatter.format,\n \"jinja2\": jinja2_formatter,\n}\nDEFAULT_VALIDATOR_MAPPING: Dict[str, Callable] = {\n \"f-string\": formatter.validate_input_variables,\n \"jinja2\": validate_jinja2,\n}\ndef check_valid_template(\n template: str, template_format: str, input_variables: List[str]\n) -> None:\n \"\"\"Check that template string is valid.\"\"\"\n if template_format not in DEFAULT_FORMATTER_MAPPING:\n valid_formats = list(DEFAULT_FORMATTER_MAPPING)\n raise ValueError(\n f\"Invalid template format. Got `{template_format}`;\"\n f\" should be one of {valid_formats}\"\n )\n try:\n validator_func = DEFAULT_VALIDATOR_MAPPING[template_format]\n validator_func(template, input_variables)\n except KeyError as e:\n raise ValueError(\n \"Invalid prompt schema; check for mismatched or missing input parameters. \"\n + str(e)\n )\nclass StringPromptValue(PromptValue):\n text: str\n def to_string(self) -> str:\n \"\"\"Return prompt as string.\"\"\"\n return self.text\n def to_messages(self) -> List[BaseMessage]:\n \"\"\"Return prompt as messages.\"\"\"\n return [HumanMessage(content=self.text)]\n[docs]class BasePromptTemplate(BaseModel, ABC):\n \"\"\"Base class for all prompt templates, returning a prompt.\"\"\"\n input_variables: List[str]\n \"\"\"A list of the names of the variables the prompt template expects.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/base.html"}
+{"id": "9d63fb169c0b-2", "text": "\"\"\"A list of the names of the variables the prompt template expects.\"\"\"\n output_parser: Optional[BaseOutputParser] = None\n \"\"\"How to parse the output of calling an LLM on this formatted prompt.\"\"\"\n partial_variables: Mapping[str, Union[str, Callable[[], str]]] = Field(\n default_factory=dict\n )\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] @abstractmethod\n def format_prompt(self, **kwargs: Any) -> PromptValue:\n \"\"\"Create Chat Messages.\"\"\"\n @root_validator()\n def validate_variable_names(cls, values: Dict) -> Dict:\n \"\"\"Validate variable names do not include restricted names.\"\"\"\n if \"stop\" in values[\"input_variables\"]:\n raise ValueError(\n \"Cannot have an input variable named 'stop', as it is used internally,\"\n \" please rename.\"\n )\n if \"stop\" in values[\"partial_variables\"]:\n raise ValueError(\n \"Cannot have an partial variable named 'stop', as it is used \"\n \"internally, please rename.\"\n )\n overall = set(values[\"input_variables\"]).intersection(\n values[\"partial_variables\"]\n )\n if overall:\n raise ValueError(\n f\"Found overlapping input and partial variables: {overall}\"\n )\n return values\n[docs] def partial(self, **kwargs: Union[str, Callable[[], str]]) -> BasePromptTemplate:\n \"\"\"Return a partial of the prompt template.\"\"\"\n prompt_dict = self.__dict__.copy()\n prompt_dict[\"input_variables\"] = list(\n set(self.input_variables).difference(kwargs)\n )\n prompt_dict[\"partial_variables\"] = {**self.partial_variables, **kwargs}", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/base.html"}
+{"id": "9d63fb169c0b-3", "text": "prompt_dict[\"partial_variables\"] = {**self.partial_variables, **kwargs}\n return type(self)(**prompt_dict)\n def _merge_partial_and_user_variables(self, **kwargs: Any) -> Dict[str, Any]:\n # Get partial params:\n partial_kwargs = {\n k: v if isinstance(v, str) else v()\n for k, v in self.partial_variables.items()\n }\n return {**partial_kwargs, **kwargs}\n[docs] @abstractmethod\n def format(self, **kwargs: Any) -> str:\n \"\"\"Format the prompt with the inputs.\n Args:\n kwargs: Any arguments to be passed to the prompt template.\n Returns:\n A formatted string.\n Example:\n .. code-block:: python\n prompt.format(variable1=\"foo\")\n \"\"\"\n @property\n def _prompt_type(self) -> str:\n \"\"\"Return the prompt type key.\"\"\"\n raise NotImplementedError\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return dictionary representation of prompt.\"\"\"\n prompt_dict = super().dict(**kwargs)\n prompt_dict[\"_type\"] = self._prompt_type\n return prompt_dict\n[docs] def save(self, file_path: Union[Path, str]) -> None:\n \"\"\"Save the prompt.\n Args:\n file_path: Path to directory to save prompt to.\n Example:\n .. code-block:: python\n prompt.save(file_path=\"path/prompt.yaml\")\n \"\"\"\n if self.partial_variables:\n raise ValueError(\"Cannot save prompt with partial variables.\")\n # Convert file to Path object.\n if isinstance(file_path, str):\n save_path = Path(file_path)\n else:\n save_path = file_path", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/base.html"}
+{"id": "9d63fb169c0b-4", "text": "save_path = Path(file_path)\n else:\n save_path = file_path\n directory_path = save_path.parent\n directory_path.mkdir(parents=True, exist_ok=True)\n # Fetch dictionary to save\n prompt_dict = self.dict()\n if save_path.suffix == \".json\":\n with open(file_path, \"w\") as f:\n json.dump(prompt_dict, f, indent=4)\n elif save_path.suffix == \".yaml\":\n with open(file_path, \"w\") as f:\n yaml.dump(prompt_dict, f, default_flow_style=False)\n else:\n raise ValueError(f\"{save_path} must be json or yaml\")\n[docs]class StringPromptTemplate(BasePromptTemplate, ABC):\n \"\"\"String prompt should expose the format method, returning a prompt.\"\"\"\n[docs] def format_prompt(self, **kwargs: Any) -> PromptValue:\n \"\"\"Create Chat Messages.\"\"\"\n return StringPromptValue(text=self.format(**kwargs))\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/base.html"}
+{"id": "b8ee990c8f3f-0", "text": "Source code for langchain.prompts.prompt\n\"\"\"Prompt schema definition.\"\"\"\nfrom __future__ import annotations\nfrom pathlib import Path\nfrom string import Formatter\nfrom typing import Any, Dict, List, Union\nfrom pydantic import Extra, root_validator\nfrom langchain.prompts.base import (\n DEFAULT_FORMATTER_MAPPING,\n StringPromptTemplate,\n _get_jinja2_variables_from_template,\n check_valid_template,\n)\n[docs]class PromptTemplate(StringPromptTemplate):\n \"\"\"Schema to represent a prompt for an LLM.\n Example:\n .. code-block:: python\n from langchain import PromptTemplate\n prompt = PromptTemplate(input_variables=[\"foo\"], template=\"Say {foo}\")\n \"\"\"\n input_variables: List[str]\n \"\"\"A list of the names of the variables the prompt template expects.\"\"\"\n template: str\n \"\"\"The prompt template.\"\"\"\n template_format: str = \"f-string\"\n \"\"\"The format of the prompt template. Options are: 'f-string', 'jinja2'.\"\"\"\n validate_template: bool = True\n \"\"\"Whether or not to try validating the template.\"\"\"\n @property\n def _prompt_type(self) -> str:\n \"\"\"Return the prompt type key.\"\"\"\n return \"prompt\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def format(self, **kwargs: Any) -> str:\n \"\"\"Format the prompt with the inputs.\n Args:\n kwargs: Any arguments to be passed to the prompt template.\n Returns:\n A formatted string.\n Example:\n .. code-block:: python\n prompt.format(variable1=\"foo\")\n \"\"\"\n kwargs = self._merge_partial_and_user_variables(**kwargs)", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/prompt.html"}
+{"id": "b8ee990c8f3f-1", "text": "\"\"\"\n kwargs = self._merge_partial_and_user_variables(**kwargs)\n return DEFAULT_FORMATTER_MAPPING[self.template_format](self.template, **kwargs)\n @root_validator()\n def template_is_valid(cls, values: Dict) -> Dict:\n \"\"\"Check that template and input variables are consistent.\"\"\"\n if values[\"validate_template\"]:\n all_inputs = values[\"input_variables\"] + list(values[\"partial_variables\"])\n check_valid_template(\n values[\"template\"], values[\"template_format\"], all_inputs\n )\n return values\n[docs] @classmethod\n def from_examples(\n cls,\n examples: List[str],\n suffix: str,\n input_variables: List[str],\n example_separator: str = \"\\n\\n\",\n prefix: str = \"\",\n **kwargs: Any,\n ) -> PromptTemplate:\n \"\"\"Take examples in list format with prefix and suffix to create a prompt.\n Intended to be used as a way to dynamically create a prompt from examples.\n Args:\n examples: List of examples to use in the prompt.\n suffix: String to go after the list of examples. Should generally\n set up the user's input.\n input_variables: A list of variable names the final prompt template\n will expect.\n example_separator: The separator to use in between examples. Defaults\n to two new line characters.\n prefix: String that should go before any examples. Generally includes\n examples. Default to an empty string.\n Returns:\n The final prompt generated.\n \"\"\"\n template = example_separator.join([prefix, *examples, suffix])\n return cls(input_variables=input_variables, template=template, **kwargs)\n[docs] @classmethod\n def from_file(", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/prompt.html"}
+{"id": "b8ee990c8f3f-2", "text": "[docs] @classmethod\n def from_file(\n cls, template_file: Union[str, Path], input_variables: List[str], **kwargs: Any\n ) -> PromptTemplate:\n \"\"\"Load a prompt from a file.\n Args:\n template_file: The path to the file containing the prompt template.\n input_variables: A list of variable names the final prompt template\n will expect.\n Returns:\n The prompt loaded from the file.\n \"\"\"\n with open(str(template_file), \"r\") as f:\n template = f.read()\n return cls(input_variables=input_variables, template=template, **kwargs)\n[docs] @classmethod\n def from_template(cls, template: str, **kwargs: Any) -> PromptTemplate:\n \"\"\"Load a prompt template from a template.\"\"\"\n if \"template_format\" in kwargs and kwargs[\"template_format\"] == \"jinja2\":\n # Get the variables for the template\n input_variables = _get_jinja2_variables_from_template(template)\n else:\n input_variables = {\n v for _, v, _, _ in Formatter().parse(template) if v is not None\n }\n if \"partial_variables\" in kwargs:\n partial_variables = kwargs[\"partial_variables\"]\n input_variables = {\n var for var in input_variables if var not in partial_variables\n }\n return cls(\n input_variables=list(sorted(input_variables)), template=template, **kwargs\n )\n# For backwards compatibility.\nPrompt = PromptTemplate\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/prompt.html"}
+{"id": "8eccc9cc9134-0", "text": "Source code for langchain.prompts.few_shot_with_templates\n\"\"\"Prompt template that contains few shot examples.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.prompts.base import DEFAULT_FORMATTER_MAPPING, StringPromptTemplate\nfrom langchain.prompts.example_selector.base import BaseExampleSelector\nfrom langchain.prompts.prompt import PromptTemplate\n[docs]class FewShotPromptWithTemplates(StringPromptTemplate):\n \"\"\"Prompt template that contains few shot examples.\"\"\"\n examples: Optional[List[dict]] = None\n \"\"\"Examples to format into the prompt.\n Either this or example_selector should be provided.\"\"\"\n example_selector: Optional[BaseExampleSelector] = None\n \"\"\"ExampleSelector to choose the examples to format into the prompt.\n Either this or examples should be provided.\"\"\"\n example_prompt: PromptTemplate\n \"\"\"PromptTemplate used to format an individual example.\"\"\"\n suffix: StringPromptTemplate\n \"\"\"A PromptTemplate to put after the examples.\"\"\"\n input_variables: List[str]\n \"\"\"A list of the names of the variables the prompt template expects.\"\"\"\n example_separator: str = \"\\n\\n\"\n \"\"\"String separator used to join the prefix, the examples, and suffix.\"\"\"\n prefix: Optional[StringPromptTemplate] = None\n \"\"\"A PromptTemplate to put before the examples.\"\"\"\n template_format: str = \"f-string\"\n \"\"\"The format of the prompt template. Options are: 'f-string', 'jinja2'.\"\"\"\n validate_template: bool = True\n \"\"\"Whether or not to try validating the template.\"\"\"\n @root_validator(pre=True)\n def check_examples_and_selector(cls, values: Dict) -> Dict:\n \"\"\"Check that one and only one of examples/example_selector are provided.\"\"\"\n examples = values.get(\"examples\", None)", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/few_shot_with_templates.html"}
+{"id": "8eccc9cc9134-1", "text": "examples = values.get(\"examples\", None)\n example_selector = values.get(\"example_selector\", None)\n if examples and example_selector:\n raise ValueError(\n \"Only one of 'examples' and 'example_selector' should be provided\"\n )\n if examples is None and example_selector is None:\n raise ValueError(\n \"One of 'examples' and 'example_selector' should be provided\"\n )\n return values\n @root_validator()\n def template_is_valid(cls, values: Dict) -> Dict:\n \"\"\"Check that prefix, suffix and input variables are consistent.\"\"\"\n if values[\"validate_template\"]:\n input_variables = values[\"input_variables\"]\n expected_input_variables = set(values[\"suffix\"].input_variables)\n expected_input_variables |= set(values[\"partial_variables\"])\n if values[\"prefix\"] is not None:\n expected_input_variables |= set(values[\"prefix\"].input_variables)\n missing_vars = expected_input_variables.difference(input_variables)\n if missing_vars:\n raise ValueError(\n f\"Got input_variables={input_variables}, but based on \"\n f\"prefix/suffix expected {expected_input_variables}\"\n )\n return values\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n def _get_examples(self, **kwargs: Any) -> List[dict]:\n if self.examples is not None:\n return self.examples\n elif self.example_selector is not None:\n return self.example_selector.select_examples(kwargs)\n else:\n raise ValueError\n[docs] def format(self, **kwargs: Any) -> str:\n \"\"\"Format the prompt with the inputs.\n Args:\n kwargs: Any arguments to be passed to the prompt template.", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/few_shot_with_templates.html"}
+{"id": "8eccc9cc9134-2", "text": "Args:\n kwargs: Any arguments to be passed to the prompt template.\n Returns:\n A formatted string.\n Example:\n .. code-block:: python\n prompt.format(variable1=\"foo\")\n \"\"\"\n kwargs = self._merge_partial_and_user_variables(**kwargs)\n # Get the examples to use.\n examples = self._get_examples(**kwargs)\n # Format the examples.\n example_strings = [\n self.example_prompt.format(**example) for example in examples\n ]\n # Create the overall prefix.\n if self.prefix is None:\n prefix = \"\"\n else:\n prefix_kwargs = {\n k: v for k, v in kwargs.items() if k in self.prefix.input_variables\n }\n for k in prefix_kwargs.keys():\n kwargs.pop(k)\n prefix = self.prefix.format(**prefix_kwargs)\n # Create the overall suffix\n suffix_kwargs = {\n k: v for k, v in kwargs.items() if k in self.suffix.input_variables\n }\n for k in suffix_kwargs.keys():\n kwargs.pop(k)\n suffix = self.suffix.format(\n **suffix_kwargs,\n )\n pieces = [prefix, *example_strings, suffix]\n template = self.example_separator.join([piece for piece in pieces if piece])\n # Format the template with the input variables.\n return DEFAULT_FORMATTER_MAPPING[self.template_format](template, **kwargs)\n @property\n def _prompt_type(self) -> str:\n \"\"\"Return the prompt type key.\"\"\"\n return \"few_shot_with_templates\"\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return a dictionary of the prompt.\"\"\"\n if self.example_selector:", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/few_shot_with_templates.html"}
+{"id": "8eccc9cc9134-3", "text": "\"\"\"Return a dictionary of the prompt.\"\"\"\n if self.example_selector:\n raise ValueError(\"Saving an example selector is not currently supported\")\n return super().dict(**kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/few_shot_with_templates.html"}
+{"id": "3f574573f4a2-0", "text": "Source code for langchain.prompts.loading\n\"\"\"Load prompts from disk.\"\"\"\nimport importlib\nimport json\nimport logging\nfrom pathlib import Path\nfrom typing import Union\nimport yaml\nfrom langchain.output_parsers.regex import RegexParser\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.few_shot import FewShotPromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.utilities.loading import try_load_from_hub\nURL_BASE = \"https://raw.githubusercontent.com/hwchase17/langchain-hub/master/prompts/\"\nlogger = logging.getLogger(__name__)\ndef load_prompt_from_config(config: dict) -> BasePromptTemplate:\n \"\"\"Load prompt from Config Dict.\"\"\"\n if \"_type\" not in config:\n logger.warning(\"No `_type` key found, defaulting to `prompt`.\")\n config_type = config.pop(\"_type\", \"prompt\")\n if config_type not in type_to_loader_dict:\n raise ValueError(f\"Loading {config_type} prompt not supported\")\n prompt_loader = type_to_loader_dict[config_type]\n return prompt_loader(config)\ndef _load_template(var_name: str, config: dict) -> dict:\n \"\"\"Load template from disk if applicable.\"\"\"\n # Check if template_path exists in config.\n if f\"{var_name}_path\" in config:\n # If it does, make sure template variable doesn't also exist.\n if var_name in config:\n raise ValueError(\n f\"Both `{var_name}_path` and `{var_name}` cannot be provided.\"\n )\n # Pop the template path from the config.\n template_path = Path(config.pop(f\"{var_name}_path\"))\n # Load the template.\n if template_path.suffix == \".txt\":\n with open(template_path) as f:", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/loading.html"}
+{"id": "3f574573f4a2-1", "text": "with open(template_path) as f:\n template = f.read()\n else:\n raise ValueError\n # Set the template variable to the extracted variable.\n config[var_name] = template\n return config\ndef _load_examples(config: dict) -> dict:\n \"\"\"Load examples if necessary.\"\"\"\n if isinstance(config[\"examples\"], list):\n pass\n elif isinstance(config[\"examples\"], str):\n with open(config[\"examples\"]) as f:\n if config[\"examples\"].endswith(\".json\"):\n examples = json.load(f)\n elif config[\"examples\"].endswith((\".yaml\", \".yml\")):\n examples = yaml.safe_load(f)\n else:\n raise ValueError(\n \"Invalid file format. Only json or yaml formats are supported.\"\n )\n config[\"examples\"] = examples\n else:\n raise ValueError(\"Invalid examples format. Only list or string are supported.\")\n return config\ndef _load_output_parser(config: dict) -> dict:\n \"\"\"Load output parser.\"\"\"\n if \"output_parser\" in config and config[\"output_parser\"]:\n _config = config.pop(\"output_parser\")\n output_parser_type = _config.pop(\"_type\")\n if output_parser_type == \"regex_parser\":\n output_parser = RegexParser(**_config)\n else:\n raise ValueError(f\"Unsupported output parser {output_parser_type}\")\n config[\"output_parser\"] = output_parser\n return config\ndef _load_few_shot_prompt(config: dict) -> FewShotPromptTemplate:\n \"\"\"Load the few shot prompt from the config.\"\"\"\n # Load the suffix and prefix templates.\n config = _load_template(\"suffix\", config)\n config = _load_template(\"prefix\", config)\n # Load the example prompt.", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/loading.html"}
+{"id": "3f574573f4a2-2", "text": "config = _load_template(\"prefix\", config)\n # Load the example prompt.\n if \"example_prompt_path\" in config:\n if \"example_prompt\" in config:\n raise ValueError(\n \"Only one of example_prompt and example_prompt_path should \"\n \"be specified.\"\n )\n config[\"example_prompt\"] = load_prompt(config.pop(\"example_prompt_path\"))\n else:\n config[\"example_prompt\"] = load_prompt_from_config(config[\"example_prompt\"])\n # Load the examples.\n config = _load_examples(config)\n config = _load_output_parser(config)\n return FewShotPromptTemplate(**config)\ndef _load_prompt(config: dict) -> PromptTemplate:\n \"\"\"Load the prompt template from config.\"\"\"\n # Load the template from disk if necessary.\n config = _load_template(\"template\", config)\n config = _load_output_parser(config)\n return PromptTemplate(**config)\n[docs]def load_prompt(path: Union[str, Path]) -> BasePromptTemplate:\n \"\"\"Unified method for loading a prompt from LangChainHub or local fs.\"\"\"\n if hub_result := try_load_from_hub(\n path, _load_prompt_from_file, \"prompts\", {\"py\", \"json\", \"yaml\"}\n ):\n return hub_result\n else:\n return _load_prompt_from_file(path)\ndef _load_prompt_from_file(file: Union[str, Path]) -> BasePromptTemplate:\n \"\"\"Load prompt from file.\"\"\"\n # Convert file to Path object.\n if isinstance(file, str):\n file_path = Path(file)\n else:\n file_path = file\n # Load from either json or yaml.\n if file_path.suffix == \".json\":\n with open(file_path) as f:\n config = json.load(f)", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/loading.html"}
+{"id": "3f574573f4a2-3", "text": "with open(file_path) as f:\n config = json.load(f)\n elif file_path.suffix == \".yaml\":\n with open(file_path, \"r\") as f:\n config = yaml.safe_load(f)\n elif file_path.suffix == \".py\":\n spec = importlib.util.spec_from_loader(\n \"prompt\", loader=None, origin=str(file_path)\n )\n if spec is None:\n raise ValueError(\"could not load spec\")\n helper = importlib.util.module_from_spec(spec)\n with open(file_path, \"rb\") as f:\n exec(f.read(), helper.__dict__)\n if not isinstance(helper.PROMPT, BasePromptTemplate):\n raise ValueError(\"Did not get object of type BasePromptTemplate.\")\n return helper.PROMPT\n else:\n raise ValueError(f\"Got unsupported file type {file_path.suffix}\")\n # Load the prompt from the config now.\n return load_prompt_from_config(config)\ntype_to_loader_dict = {\n \"prompt\": _load_prompt,\n \"few_shot\": _load_few_shot_prompt,\n # \"few_shot_with_templates\": _load_few_shot_with_templates_prompt,\n}\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/loading.html"}
+{"id": "aed627abb0c0-0", "text": "Source code for langchain.prompts.example_selector.length_based\n\"\"\"Select examples based on length.\"\"\"\nimport re\nfrom typing import Callable, Dict, List\nfrom pydantic import BaseModel, validator\nfrom langchain.prompts.example_selector.base import BaseExampleSelector\nfrom langchain.prompts.prompt import PromptTemplate\ndef _get_length_based(text: str) -> int:\n return len(re.split(\"\\n| \", text))\n[docs]class LengthBasedExampleSelector(BaseExampleSelector, BaseModel):\n \"\"\"Select examples based on length.\"\"\"\n examples: List[dict]\n \"\"\"A list of the examples that the prompt template expects.\"\"\"\n example_prompt: PromptTemplate\n \"\"\"Prompt template used to format the examples.\"\"\"\n get_text_length: Callable[[str], int] = _get_length_based\n \"\"\"Function to measure prompt length. Defaults to word count.\"\"\"\n max_length: int = 2048\n \"\"\"Max length for the prompt, beyond which examples are cut.\"\"\"\n example_text_lengths: List[int] = [] #: :meta private:\n[docs] def add_example(self, example: Dict[str, str]) -> None:\n \"\"\"Add new example to list.\"\"\"\n self.examples.append(example)\n string_example = self.example_prompt.format(**example)\n self.example_text_lengths.append(self.get_text_length(string_example))\n @validator(\"example_text_lengths\", always=True)\n def calculate_example_text_lengths(cls, v: List[int], values: Dict) -> List[int]:\n \"\"\"Calculate text lengths if they don't exist.\"\"\"\n # Check if text lengths were passed in\n if v:\n return v\n # If they were not, calculate them\n example_prompt = values[\"example_prompt\"]\n get_text_length = values[\"get_text_length\"]", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/length_based.html"}
+{"id": "aed627abb0c0-1", "text": "get_text_length = values[\"get_text_length\"]\n string_examples = [example_prompt.format(**eg) for eg in values[\"examples\"]]\n return [get_text_length(eg) for eg in string_examples]\n[docs] def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n \"\"\"Select which examples to use based on the input lengths.\"\"\"\n inputs = \" \".join(input_variables.values())\n remaining_length = self.max_length - self.get_text_length(inputs)\n i = 0\n examples = []\n while remaining_length > 0 and i < len(self.examples):\n new_length = remaining_length - self.example_text_lengths[i]\n if new_length < 0:\n break\n else:\n examples.append(self.examples[i])\n remaining_length = new_length\n i += 1\n return examples\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/length_based.html"}
+{"id": "68742d912109-0", "text": "Source code for langchain.prompts.example_selector.semantic_similarity\n\"\"\"Example selector that selects examples based on SemanticSimilarity.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional, Type\nfrom pydantic import BaseModel, Extra\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.prompts.example_selector.base import BaseExampleSelector\nfrom langchain.vectorstores.base import VectorStore\ndef sorted_values(values: Dict[str, str]) -> List[Any]:\n \"\"\"Return a list of values in dict sorted by key.\"\"\"\n return [values[val] for val in sorted(values)]\n[docs]class SemanticSimilarityExampleSelector(BaseExampleSelector, BaseModel):\n \"\"\"Example selector that selects examples based on SemanticSimilarity.\"\"\"\n vectorstore: VectorStore\n \"\"\"VectorStore than contains information about examples.\"\"\"\n k: int = 4\n \"\"\"Number of examples to select.\"\"\"\n example_keys: Optional[List[str]] = None\n \"\"\"Optional keys to filter examples to.\"\"\"\n input_keys: Optional[List[str]] = None\n \"\"\"Optional keys to filter input to. If provided, the search is based on\n the input variables instead of all variables.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] def add_example(self, example: Dict[str, str]) -> str:\n \"\"\"Add new example to vectorstore.\"\"\"\n if self.input_keys:\n string_example = \" \".join(\n sorted_values({key: example[key] for key in self.input_keys})\n )\n else:\n string_example = \" \".join(sorted_values(example))\n ids = self.vectorstore.add_texts([string_example], metadatas=[example])\n return ids[0]", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/semantic_similarity.html"}
+{"id": "68742d912109-1", "text": "return ids[0]\n[docs] def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n \"\"\"Select which examples to use based on semantic similarity.\"\"\"\n # Get the docs with the highest similarity.\n if self.input_keys:\n input_variables = {key: input_variables[key] for key in self.input_keys}\n query = \" \".join(sorted_values(input_variables))\n example_docs = self.vectorstore.similarity_search(query, k=self.k)\n # Get the examples from the metadata.\n # This assumes that examples are stored in metadata.\n examples = [dict(e.metadata) for e in example_docs]\n # If example keys are provided, filter examples to those keys.\n if self.example_keys:\n examples = [{k: eg[k] for k in self.example_keys} for eg in examples]\n return examples\n[docs] @classmethod\n def from_examples(\n cls,\n examples: List[dict],\n embeddings: Embeddings,\n vectorstore_cls: Type[VectorStore],\n k: int = 4,\n input_keys: Optional[List[str]] = None,\n **vectorstore_cls_kwargs: Any,\n ) -> SemanticSimilarityExampleSelector:\n \"\"\"Create k-shot example selector using example list and embeddings.\n Reshuffles examples dynamically based on query similarity.\n Args:\n examples: List of examples to use in the prompt.\n embeddings: An initialized embedding API interface, e.g. OpenAIEmbeddings().\n vectorstore_cls: A vector store DB interface class, e.g. FAISS.\n k: Number of examples to select\n input_keys: If provided, the search is based on the input variables\n instead of all variables.", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/semantic_similarity.html"}
+{"id": "68742d912109-2", "text": "instead of all variables.\n vectorstore_cls_kwargs: optional kwargs containing url for vector store\n Returns:\n The ExampleSelector instantiated, backed by a vector store.\n \"\"\"\n if input_keys:\n string_examples = [\n \" \".join(sorted_values({k: eg[k] for k in input_keys}))\n for eg in examples\n ]\n else:\n string_examples = [\" \".join(sorted_values(eg)) for eg in examples]\n vectorstore = vectorstore_cls.from_texts(\n string_examples, embeddings, metadatas=examples, **vectorstore_cls_kwargs\n )\n return cls(vectorstore=vectorstore, k=k, input_keys=input_keys)\n[docs]class MaxMarginalRelevanceExampleSelector(SemanticSimilarityExampleSelector):\n \"\"\"ExampleSelector that selects examples based on Max Marginal Relevance.\n This was shown to improve performance in this paper:\n https://arxiv.org/pdf/2211.13892.pdf\n \"\"\"\n fetch_k: int = 20\n \"\"\"Number of examples to fetch to rerank.\"\"\"\n[docs] def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n \"\"\"Select which examples to use based on semantic similarity.\"\"\"\n # Get the docs with the highest similarity.\n if self.input_keys:\n input_variables = {key: input_variables[key] for key in self.input_keys}\n query = \" \".join(sorted_values(input_variables))\n example_docs = self.vectorstore.max_marginal_relevance_search(\n query, k=self.k, fetch_k=self.fetch_k\n )\n # Get the examples from the metadata.\n # This assumes that examples are stored in metadata.\n examples = [dict(e.metadata) for e in example_docs]", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/semantic_similarity.html"}
+{"id": "68742d912109-3", "text": "examples = [dict(e.metadata) for e in example_docs]\n # If example keys are provided, filter examples to those keys.\n if self.example_keys:\n examples = [{k: eg[k] for k in self.example_keys} for eg in examples]\n return examples\n[docs] @classmethod\n def from_examples(\n cls,\n examples: List[dict],\n embeddings: Embeddings,\n vectorstore_cls: Type[VectorStore],\n k: int = 4,\n input_keys: Optional[List[str]] = None,\n fetch_k: int = 20,\n **vectorstore_cls_kwargs: Any,\n ) -> MaxMarginalRelevanceExampleSelector:\n \"\"\"Create k-shot example selector using example list and embeddings.\n Reshuffles examples dynamically based on query similarity.\n Args:\n examples: List of examples to use in the prompt.\n embeddings: An iniialized embedding API interface, e.g. OpenAIEmbeddings().\n vectorstore_cls: A vector store DB interface class, e.g. FAISS.\n k: Number of examples to select\n input_keys: If provided, the search is based on the input variables\n instead of all variables.\n vectorstore_cls_kwargs: optional kwargs containing url for vector store\n Returns:\n The ExampleSelector instantiated, backed by a vector store.\n \"\"\"\n if input_keys:\n string_examples = [\n \" \".join(sorted_values({k: eg[k] for k in input_keys}))\n for eg in examples\n ]\n else:\n string_examples = [\" \".join(sorted_values(eg)) for eg in examples]\n vectorstore = vectorstore_cls.from_texts(\n string_examples, embeddings, metadatas=examples, **vectorstore_cls_kwargs\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/semantic_similarity.html"}
+{"id": "68742d912109-4", "text": ")\n return cls(vectorstore=vectorstore, k=k, fetch_k=fetch_k, input_keys=input_keys)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/semantic_similarity.html"}
+{"id": "b1c978110697-0", "text": "Source code for langchain.agents.initialize\n\"\"\"Load agent.\"\"\"\nfrom typing import Any, Optional, Sequence\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.loading import AGENT_TO_CLASS, load_agent\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.tools.base import BaseTool\n[docs]def initialize_agent(\n tools: Sequence[BaseTool],\n llm: BaseLanguageModel,\n agent: Optional[AgentType] = None,\n callback_manager: Optional[BaseCallbackManager] = None,\n agent_path: Optional[str] = None,\n agent_kwargs: Optional[dict] = None,\n **kwargs: Any,\n) -> AgentExecutor:\n \"\"\"Load an agent executor given tools and LLM.\n Args:\n tools: List of tools this agent has access to.\n llm: Language model to use as the agent.\n agent: Agent type to use. If None and agent_path is also None, will default to\n AgentType.ZERO_SHOT_REACT_DESCRIPTION.\n callback_manager: CallbackManager to use. Global callback manager is used if\n not provided. Defaults to None.\n agent_path: Path to serialized agent to use.\n agent_kwargs: Additional key word arguments to pass to the underlying agent\n **kwargs: Additional key word arguments passed to the agent executor\n Returns:\n An agent executor\n \"\"\"\n if agent is None and agent_path is None:\n agent = AgentType.ZERO_SHOT_REACT_DESCRIPTION\n if agent is not None and agent_path is not None:\n raise ValueError(\n \"Both `agent` and `agent_path` are specified, \"\n \"but at most only one should be.\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/initialize.html"}
+{"id": "b1c978110697-1", "text": "\"but at most only one should be.\"\n )\n if agent is not None:\n if agent not in AGENT_TO_CLASS:\n raise ValueError(\n f\"Got unknown agent type: {agent}. \"\n f\"Valid types are: {AGENT_TO_CLASS.keys()}.\"\n )\n agent_cls = AGENT_TO_CLASS[agent]\n agent_kwargs = agent_kwargs or {}\n agent_obj = agent_cls.from_llm_and_tools(\n llm, tools, callback_manager=callback_manager, **agent_kwargs\n )\n elif agent_path is not None:\n agent_obj = load_agent(\n agent_path, llm=llm, tools=tools, callback_manager=callback_manager\n )\n else:\n raise ValueError(\n \"Somehow both `agent` and `agent_path` are None, \"\n \"this should never happen.\"\n )\n return AgentExecutor.from_agent_and_tools(\n agent=agent_obj,\n tools=tools,\n callback_manager=callback_manager,\n **kwargs,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/initialize.html"}
+{"id": "c9638ff90a75-0", "text": "Source code for langchain.agents.agent\n\"\"\"Chain that takes in an input and produces an action and action input.\"\"\"\nfrom __future__ import annotations\nimport asyncio\nimport json\nimport logging\nimport time\nfrom abc import abstractmethod\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union\nimport yaml\nfrom pydantic import BaseModel, root_validator\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.tools import InvalidTool\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n AsyncCallbackManagerForToolRun,\n CallbackManagerForChainRun,\n CallbackManagerForToolRun,\n Callbacks,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.input import get_color_mapping\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.few_shot import FewShotPromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import (\n AgentAction,\n AgentFinish,\n BaseMessage,\n BaseOutputParser,\n OutputParserException,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.asyncio import asyncio_timeout\nlogger = logging.getLogger(__name__)\n[docs]class BaseSingleActionAgent(BaseModel):\n \"\"\"Base Agent class.\"\"\"\n @property\n def return_values(self) -> List[str]:\n \"\"\"Return values of the agent.\"\"\"\n return [\"output\"]\n[docs] def get_allowed_tools(self) -> Optional[List[str]]:\n return None\n[docs] @abstractmethod\n def plan(\n self,", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-1", "text": "return None\n[docs] @abstractmethod\n def plan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decided what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n[docs] @abstractmethod\n async def aplan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decided what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n @property\n @abstractmethod\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n[docs] def return_stopped_response(\n self,\n early_stopping_method: str,\n intermediate_steps: List[Tuple[AgentAction, str]],\n **kwargs: Any,\n ) -> AgentFinish:\n \"\"\"Return response when agent has been stopped due to max iterations.\"\"\"\n if early_stopping_method == \"force\":\n # `force` just returns a constant string\n return AgentFinish(", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-2", "text": "# `force` just returns a constant string\n return AgentFinish(\n {\"output\": \"Agent stopped due to iteration limit or time limit.\"}, \"\"\n )\n else:\n raise ValueError(\n f\"Got unsupported early_stopping_method `{early_stopping_method}`\"\n )\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n **kwargs: Any,\n ) -> BaseSingleActionAgent:\n raise NotImplementedError\n @property\n def _agent_type(self) -> str:\n \"\"\"Return Identifier of agent type.\"\"\"\n raise NotImplementedError\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return dictionary representation of agent.\"\"\"\n _dict = super().dict()\n _type = self._agent_type\n if isinstance(_type, AgentType):\n _dict[\"_type\"] = str(_type.value)\n else:\n _dict[\"_type\"] = _type\n return _dict\n[docs] def save(self, file_path: Union[Path, str]) -> None:\n \"\"\"Save the agent.\n Args:\n file_path: Path to file to save the agent to.\n Example:\n .. code-block:: python\n # If working with agent executor\n agent.agent.save(file_path=\"path/agent.yaml\")\n \"\"\"\n # Convert file to Path object.\n if isinstance(file_path, str):\n save_path = Path(file_path)\n else:\n save_path = file_path\n directory_path = save_path.parent\n directory_path.mkdir(parents=True, exist_ok=True)\n # Fetch dictionary to save", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-3", "text": "directory_path.mkdir(parents=True, exist_ok=True)\n # Fetch dictionary to save\n agent_dict = self.dict()\n if save_path.suffix == \".json\":\n with open(file_path, \"w\") as f:\n json.dump(agent_dict, f, indent=4)\n elif save_path.suffix == \".yaml\":\n with open(file_path, \"w\") as f:\n yaml.dump(agent_dict, f, default_flow_style=False)\n else:\n raise ValueError(f\"{save_path} must be json or yaml\")\n[docs] def tool_run_logging_kwargs(self) -> Dict:\n return {}\n[docs]class BaseMultiActionAgent(BaseModel):\n \"\"\"Base Agent class.\"\"\"\n @property\n def return_values(self) -> List[str]:\n \"\"\"Return values of the agent.\"\"\"\n return [\"output\"]\n[docs] def get_allowed_tools(self) -> Optional[List[str]]:\n return None\n[docs] @abstractmethod\n def plan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[List[AgentAction], AgentFinish]:\n \"\"\"Given input, decided what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Actions specifying what tool to use.\n \"\"\"\n[docs] @abstractmethod\n async def aplan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[List[AgentAction], AgentFinish]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-4", "text": "**kwargs: Any,\n ) -> Union[List[AgentAction], AgentFinish]:\n \"\"\"Given input, decided what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Actions specifying what tool to use.\n \"\"\"\n @property\n @abstractmethod\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n[docs] def return_stopped_response(\n self,\n early_stopping_method: str,\n intermediate_steps: List[Tuple[AgentAction, str]],\n **kwargs: Any,\n ) -> AgentFinish:\n \"\"\"Return response when agent has been stopped due to max iterations.\"\"\"\n if early_stopping_method == \"force\":\n # `force` just returns a constant string\n return AgentFinish({\"output\": \"Agent stopped due to max iterations.\"}, \"\")\n else:\n raise ValueError(\n f\"Got unsupported early_stopping_method `{early_stopping_method}`\"\n )\n @property\n def _agent_type(self) -> str:\n \"\"\"Return Identifier of agent type.\"\"\"\n raise NotImplementedError\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return dictionary representation of agent.\"\"\"\n _dict = super().dict()\n _dict[\"_type\"] = str(self._agent_type)\n return _dict\n[docs] def save(self, file_path: Union[Path, str]) -> None:\n \"\"\"Save the agent.\n Args:\n file_path: Path to file to save the agent to.\n Example:\n .. code-block:: python", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-5", "text": "Example:\n .. code-block:: python\n # If working with agent executor\n agent.agent.save(file_path=\"path/agent.yaml\")\n \"\"\"\n # Convert file to Path object.\n if isinstance(file_path, str):\n save_path = Path(file_path)\n else:\n save_path = file_path\n directory_path = save_path.parent\n directory_path.mkdir(parents=True, exist_ok=True)\n # Fetch dictionary to save\n agent_dict = self.dict()\n if save_path.suffix == \".json\":\n with open(file_path, \"w\") as f:\n json.dump(agent_dict, f, indent=4)\n elif save_path.suffix == \".yaml\":\n with open(file_path, \"w\") as f:\n yaml.dump(agent_dict, f, default_flow_style=False)\n else:\n raise ValueError(f\"{save_path} must be json or yaml\")\n[docs] def tool_run_logging_kwargs(self) -> Dict:\n return {}\n[docs]class AgentOutputParser(BaseOutputParser):\n[docs] @abstractmethod\n def parse(self, text: str) -> Union[AgentAction, AgentFinish]:\n \"\"\"Parse text into agent action/finish.\"\"\"\n[docs]class LLMSingleActionAgent(BaseSingleActionAgent):\n llm_chain: LLMChain\n output_parser: AgentOutputParser\n stop: List[str]\n @property\n def input_keys(self) -> List[str]:\n return list(set(self.llm_chain.input_keys) - {\"intermediate_steps\"})\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return dictionary representation of agent.\"\"\"\n _dict = super().dict()\n del _dict[\"output_parser\"]\n return _dict\n[docs] def plan(", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-6", "text": "return _dict\n[docs] def plan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decided what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n output = self.llm_chain.run(\n intermediate_steps=intermediate_steps,\n stop=self.stop,\n callbacks=callbacks,\n **kwargs,\n )\n return self.output_parser.parse(output)\n[docs] async def aplan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decided what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n output = await self.llm_chain.arun(\n intermediate_steps=intermediate_steps,\n stop=self.stop,\n callbacks=callbacks,\n **kwargs,\n )\n return self.output_parser.parse(output)\n[docs] def tool_run_logging_kwargs(self) -> Dict:\n return {\n \"llm_prefix\": \"\",\n \"observation_prefix\": \"\" if len(self.stop) == 0 else self.stop[0],\n }", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-7", "text": "}\n[docs]class Agent(BaseSingleActionAgent):\n \"\"\"Class responsible for calling the language model and deciding the action.\n This is driven by an LLMChain. The prompt in the LLMChain MUST include\n a variable called \"agent_scratchpad\" where the agent can put its\n intermediary work.\n \"\"\"\n llm_chain: LLMChain\n output_parser: AgentOutputParser\n allowed_tools: Optional[List[str]] = None\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return dictionary representation of agent.\"\"\"\n _dict = super().dict()\n del _dict[\"output_parser\"]\n return _dict\n[docs] def get_allowed_tools(self) -> Optional[List[str]]:\n return self.allowed_tools\n @property\n def return_values(self) -> List[str]:\n return [\"output\"]\n def _fix_text(self, text: str) -> str:\n \"\"\"Fix the text.\"\"\"\n raise ValueError(\"fix_text not implemented for this agent.\")\n @property\n def _stop(self) -> List[str]:\n return [\n f\"\\n{self.observation_prefix.rstrip()}\",\n f\"\\n\\t{self.observation_prefix.rstrip()}\",\n ]\n def _construct_scratchpad(\n self, intermediate_steps: List[Tuple[AgentAction, str]]\n ) -> Union[str, List[BaseMessage]]:\n \"\"\"Construct the scratchpad that lets the agent continue its thought process.\"\"\"\n thoughts = \"\"\n for action, observation in intermediate_steps:\n thoughts += action.log\n thoughts += f\"\\n{self.observation_prefix}{observation}\\n{self.llm_prefix}\"\n return thoughts\n[docs] def plan(\n self,", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-8", "text": "return thoughts\n[docs] def plan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decided what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)\n full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs)\n return self.output_parser.parse(full_output)\n[docs] async def aplan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decided what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)\n full_output = await self.llm_chain.apredict(callbacks=callbacks, **full_inputs)\n return self.output_parser.parse(full_output)\n[docs] def get_full_inputs(\n self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any\n ) -> Dict[str, Any]:\n \"\"\"Create the full inputs for the LLMChain from intermediate steps.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-9", "text": "\"\"\"Create the full inputs for the LLMChain from intermediate steps.\"\"\"\n thoughts = self._construct_scratchpad(intermediate_steps)\n new_inputs = {\"agent_scratchpad\": thoughts, \"stop\": self._stop}\n full_inputs = {**kwargs, **new_inputs}\n return full_inputs\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return list(set(self.llm_chain.input_keys) - {\"agent_scratchpad\"})\n @root_validator()\n def validate_prompt(cls, values: Dict) -> Dict:\n \"\"\"Validate that prompt matches format.\"\"\"\n prompt = values[\"llm_chain\"].prompt\n if \"agent_scratchpad\" not in prompt.input_variables:\n logger.warning(\n \"`agent_scratchpad` should be a variable in prompt.input_variables.\"\n \" Did not find it, so adding it at the end.\"\n )\n prompt.input_variables.append(\"agent_scratchpad\")\n if isinstance(prompt, PromptTemplate):\n prompt.template += \"\\n{agent_scratchpad}\"\n elif isinstance(prompt, FewShotPromptTemplate):\n prompt.suffix += \"\\n{agent_scratchpad}\"\n else:\n raise ValueError(f\"Got unexpected prompt type {type(prompt)}\")\n return values\n @property\n @abstractmethod\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n @property\n @abstractmethod\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the LLM call with.\"\"\"\n[docs] @classmethod\n @abstractmethod\n def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:\n \"\"\"Create a prompt for this class.\"\"\"\n @classmethod", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-10", "text": "\"\"\"Create a prompt for this class.\"\"\"\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n \"\"\"Validate that appropriate tools are passed in.\"\"\"\n pass\n @classmethod\n @abstractmethod\n def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:\n \"\"\"Get default output parser for this class.\"\"\"\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n **kwargs: Any,\n ) -> Agent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n cls._validate_tools(tools)\n llm_chain = LLMChain(\n llm=llm,\n prompt=cls.create_prompt(tools),\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n _output_parser = output_parser or cls._get_default_output_parser()\n return cls(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n output_parser=_output_parser,\n **kwargs,\n )\n[docs] def return_stopped_response(\n self,\n early_stopping_method: str,\n intermediate_steps: List[Tuple[AgentAction, str]],\n **kwargs: Any,\n ) -> AgentFinish:\n \"\"\"Return response when agent has been stopped due to max iterations.\"\"\"\n if early_stopping_method == \"force\":\n # `force` just returns a constant string\n return AgentFinish(", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-11", "text": "# `force` just returns a constant string\n return AgentFinish(\n {\"output\": \"Agent stopped due to iteration limit or time limit.\"}, \"\"\n )\n elif early_stopping_method == \"generate\":\n # Generate does one final forward pass\n thoughts = \"\"\n for action, observation in intermediate_steps:\n thoughts += action.log\n thoughts += (\n f\"\\n{self.observation_prefix}{observation}\\n{self.llm_prefix}\"\n )\n # Adding to the previous steps, we now tell the LLM to make a final pred\n thoughts += (\n \"\\n\\nI now need to return a final answer based on the previous steps:\"\n )\n new_inputs = {\"agent_scratchpad\": thoughts, \"stop\": self._stop}\n full_inputs = {**kwargs, **new_inputs}\n full_output = self.llm_chain.predict(**full_inputs)\n # We try to extract a final answer\n parsed_output = self.output_parser.parse(full_output)\n if isinstance(parsed_output, AgentFinish):\n # If we can extract, we send the correct stuff\n return parsed_output\n else:\n # If we can extract, but the tool is not the final tool,\n # we just return the full output\n return AgentFinish({\"output\": full_output}, full_output)\n else:\n raise ValueError(\n \"early_stopping_method should be one of `force` or `generate`, \"\n f\"got {early_stopping_method}\"\n )\n[docs] def tool_run_logging_kwargs(self) -> Dict:\n return {\n \"llm_prefix\": self.llm_prefix,\n \"observation_prefix\": self.observation_prefix,\n }\nclass ExceptionTool(BaseTool):\n name = \"_Exception\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-12", "text": "}\nclass ExceptionTool(BaseTool):\n name = \"_Exception\"\n description = \"Exception tool\"\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n return query\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n return query\n[docs]class AgentExecutor(Chain):\n \"\"\"Consists of an agent using tools.\"\"\"\n agent: Union[BaseSingleActionAgent, BaseMultiActionAgent]\n tools: Sequence[BaseTool]\n return_intermediate_steps: bool = False\n max_iterations: Optional[int] = 15\n max_execution_time: Optional[float] = None\n early_stopping_method: str = \"force\"\n handle_parsing_errors: Union[\n bool, str, Callable[[OutputParserException], str]\n ] = False\n[docs] @classmethod\n def from_agent_and_tools(\n cls,\n agent: Union[BaseSingleActionAgent, BaseMultiActionAgent],\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n **kwargs: Any,\n ) -> AgentExecutor:\n \"\"\"Create from agent and tools.\"\"\"\n return cls(\n agent=agent, tools=tools, callback_manager=callback_manager, **kwargs\n )\n @root_validator()\n def validate_tools(cls, values: Dict) -> Dict:\n \"\"\"Validate that tools are compatible with agent.\"\"\"\n agent = values[\"agent\"]\n tools = values[\"tools\"]\n allowed_tools = agent.get_allowed_tools()", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-13", "text": "tools = values[\"tools\"]\n allowed_tools = agent.get_allowed_tools()\n if allowed_tools is not None:\n if set(allowed_tools) != set([tool.name for tool in tools]):\n raise ValueError(\n f\"Allowed tools ({allowed_tools}) different than \"\n f\"provided tools ({[tool.name for tool in tools]})\"\n )\n return values\n @root_validator()\n def validate_return_direct_tool(cls, values: Dict) -> Dict:\n \"\"\"Validate that tools are compatible with agent.\"\"\"\n agent = values[\"agent\"]\n tools = values[\"tools\"]\n if isinstance(agent, BaseMultiActionAgent):\n for tool in tools:\n if tool.return_direct:\n raise ValueError(\n \"Tools that have `return_direct=True` are not allowed \"\n \"in multi-action agents\"\n )\n return values\n[docs] def save(self, file_path: Union[Path, str]) -> None:\n \"\"\"Raise error - saving not supported for Agent Executors.\"\"\"\n raise ValueError(\n \"Saving not supported for agent executors. \"\n \"If you are trying to save the agent, please use the \"\n \"`.save_agent(...)`\"\n )\n[docs] def save_agent(self, file_path: Union[Path, str]) -> None:\n \"\"\"Save the underlying agent.\"\"\"\n return self.agent.save(file_path)\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return self.agent.input_keys\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the singular output key.\n :meta private:\n \"\"\"\n if self.return_intermediate_steps:", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-14", "text": ":meta private:\n \"\"\"\n if self.return_intermediate_steps:\n return self.agent.return_values + [\"intermediate_steps\"]\n else:\n return self.agent.return_values\n[docs] def lookup_tool(self, name: str) -> BaseTool:\n \"\"\"Lookup tool by name.\"\"\"\n return {tool.name: tool for tool in self.tools}[name]\n def _should_continue(self, iterations: int, time_elapsed: float) -> bool:\n if self.max_iterations is not None and iterations >= self.max_iterations:\n return False\n if (\n self.max_execution_time is not None\n and time_elapsed >= self.max_execution_time\n ):\n return False\n return True\n def _return(\n self,\n output: AgentFinish,\n intermediate_steps: list,\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n if run_manager:\n run_manager.on_agent_finish(output, color=\"green\", verbose=self.verbose)\n final_output = output.return_values\n if self.return_intermediate_steps:\n final_output[\"intermediate_steps\"] = intermediate_steps\n return final_output\n async def _areturn(\n self,\n output: AgentFinish,\n intermediate_steps: list,\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n if run_manager:\n await run_manager.on_agent_finish(\n output, color=\"green\", verbose=self.verbose\n )\n final_output = output.return_values\n if self.return_intermediate_steps:\n final_output[\"intermediate_steps\"] = intermediate_steps\n return final_output\n def _take_next_step(\n self,", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-15", "text": "return final_output\n def _take_next_step(\n self,\n name_to_tool_map: Dict[str, BaseTool],\n color_mapping: Dict[str, str],\n inputs: Dict[str, str],\n intermediate_steps: List[Tuple[AgentAction, str]],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:\n \"\"\"Take a single step in the thought-action-observation loop.\n Override this to take control of how the agent makes and acts on choices.\n \"\"\"\n try:\n # Call the LLM to see what to do.\n output = self.agent.plan(\n intermediate_steps,\n callbacks=run_manager.get_child() if run_manager else None,\n **inputs,\n )\n except OutputParserException as e:\n if isinstance(self.handle_parsing_errors, bool):\n raise_error = not self.handle_parsing_errors\n else:\n raise_error = False\n if raise_error:\n raise e\n text = str(e)\n if isinstance(self.handle_parsing_errors, bool):\n if e.send_to_llm:\n observation = str(e.observation)\n text = str(e.llm_output)\n else:\n observation = \"Invalid or incomplete response\"\n elif isinstance(self.handle_parsing_errors, str):\n observation = self.handle_parsing_errors\n elif callable(self.handle_parsing_errors):\n observation = self.handle_parsing_errors(e)\n else:\n raise ValueError(\"Got unexpected type of `handle_parsing_errors`\")\n output = AgentAction(\"_Exception\", observation, text)\n if run_manager:\n run_manager.on_agent_action(output, color=\"green\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-16", "text": "if run_manager:\n run_manager.on_agent_action(output, color=\"green\")\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n observation = ExceptionTool().run(\n output.tool_input,\n verbose=self.verbose,\n color=None,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )\n return [(output, observation)]\n # If the tool chosen is the finishing tool, then we end and return.\n if isinstance(output, AgentFinish):\n return output\n actions: List[AgentAction]\n if isinstance(output, AgentAction):\n actions = [output]\n else:\n actions = output\n result = []\n for agent_action in actions:\n if run_manager:\n run_manager.on_agent_action(agent_action, color=\"green\")\n # Otherwise we lookup the tool\n if agent_action.tool in name_to_tool_map:\n tool = name_to_tool_map[agent_action.tool]\n return_direct = tool.return_direct\n color = color_mapping[agent_action.tool]\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n if return_direct:\n tool_run_kwargs[\"llm_prefix\"] = \"\"\n # We then call the tool on the tool input to get an observation\n observation = tool.run(\n agent_action.tool_input,\n verbose=self.verbose,\n color=color,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )\n else:\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n observation = InvalidTool().run(\n agent_action.tool,\n verbose=self.verbose,\n color=None,\n callbacks=run_manager.get_child() if run_manager else None,", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-17", "text": "color=None,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )\n result.append((agent_action, observation))\n return result\n async def _atake_next_step(\n self,\n name_to_tool_map: Dict[str, BaseTool],\n color_mapping: Dict[str, str],\n inputs: Dict[str, str],\n intermediate_steps: List[Tuple[AgentAction, str]],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:\n \"\"\"Take a single step in the thought-action-observation loop.\n Override this to take control of how the agent makes and acts on choices.\n \"\"\"\n try:\n # Call the LLM to see what to do.\n output = await self.agent.aplan(\n intermediate_steps,\n callbacks=run_manager.get_child() if run_manager else None,\n **inputs,\n )\n except OutputParserException as e:\n if isinstance(self.handle_parsing_errors, bool):\n raise_error = not self.handle_parsing_errors\n else:\n raise_error = False\n if raise_error:\n raise e\n text = str(e)\n if isinstance(self.handle_parsing_errors, bool):\n if e.send_to_llm:\n observation = str(e.observation)\n text = str(e.llm_output)\n else:\n observation = \"Invalid or incomplete response\"\n elif isinstance(self.handle_parsing_errors, str):\n observation = self.handle_parsing_errors\n elif callable(self.handle_parsing_errors):\n observation = self.handle_parsing_errors(e)\n else:", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-18", "text": "observation = self.handle_parsing_errors(e)\n else:\n raise ValueError(\"Got unexpected type of `handle_parsing_errors`\")\n output = AgentAction(\"_Exception\", observation, text)\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n observation = await ExceptionTool().arun(\n output.tool_input,\n verbose=self.verbose,\n color=None,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )\n return [(output, observation)]\n # If the tool chosen is the finishing tool, then we end and return.\n if isinstance(output, AgentFinish):\n return output\n actions: List[AgentAction]\n if isinstance(output, AgentAction):\n actions = [output]\n else:\n actions = output\n async def _aperform_agent_action(\n agent_action: AgentAction,\n ) -> Tuple[AgentAction, str]:\n if run_manager:\n await run_manager.on_agent_action(\n agent_action, verbose=self.verbose, color=\"green\"\n )\n # Otherwise we lookup the tool\n if agent_action.tool in name_to_tool_map:\n tool = name_to_tool_map[agent_action.tool]\n return_direct = tool.return_direct\n color = color_mapping[agent_action.tool]\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n if return_direct:\n tool_run_kwargs[\"llm_prefix\"] = \"\"\n # We then call the tool on the tool input to get an observation\n observation = await tool.arun(\n agent_action.tool_input,\n verbose=self.verbose,\n color=color,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )\n else:", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-19", "text": "**tool_run_kwargs,\n )\n else:\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n observation = await InvalidTool().arun(\n agent_action.tool,\n verbose=self.verbose,\n color=None,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )\n return agent_action, observation\n # Use asyncio.gather to run multiple tool.arun() calls concurrently\n result = await asyncio.gather(\n *[_aperform_agent_action(agent_action) for agent_action in actions]\n )\n return list(result)\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Run text through and get agent response.\"\"\"\n # Construct a mapping of tool name to tool for easy lookup\n name_to_tool_map = {tool.name: tool for tool in self.tools}\n # We construct a mapping from each tool to a color, used for logging.\n color_mapping = get_color_mapping(\n [tool.name for tool in self.tools], excluded_colors=[\"green\", \"red\"]\n )\n intermediate_steps: List[Tuple[AgentAction, str]] = []\n # Let's start tracking the number of iterations and time elapsed\n iterations = 0\n time_elapsed = 0.0\n start_time = time.time()\n # We now enter the agent loop (until it returns something).\n while self._should_continue(iterations, time_elapsed):\n next_step_output = self._take_next_step(\n name_to_tool_map,\n color_mapping,\n inputs,\n intermediate_steps,\n run_manager=run_manager,\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-20", "text": "inputs,\n intermediate_steps,\n run_manager=run_manager,\n )\n if isinstance(next_step_output, AgentFinish):\n return self._return(\n next_step_output, intermediate_steps, run_manager=run_manager\n )\n intermediate_steps.extend(next_step_output)\n if len(next_step_output) == 1:\n next_step_action = next_step_output[0]\n # See if tool should return directly\n tool_return = self._get_tool_return(next_step_action)\n if tool_return is not None:\n return self._return(\n tool_return, intermediate_steps, run_manager=run_manager\n )\n iterations += 1\n time_elapsed = time.time() - start_time\n output = self.agent.return_stopped_response(\n self.early_stopping_method, intermediate_steps, **inputs\n )\n return self._return(output, intermediate_steps, run_manager=run_manager)\n async def _acall(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"Run text through and get agent response.\"\"\"\n # Construct a mapping of tool name to tool for easy lookup\n name_to_tool_map = {tool.name: tool for tool in self.tools}\n # We construct a mapping from each tool to a color, used for logging.\n color_mapping = get_color_mapping(\n [tool.name for tool in self.tools], excluded_colors=[\"green\"]\n )\n intermediate_steps: List[Tuple[AgentAction, str]] = []\n # Let's start tracking the number of iterations and time elapsed\n iterations = 0\n time_elapsed = 0.0\n start_time = time.time()", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-21", "text": "time_elapsed = 0.0\n start_time = time.time()\n # We now enter the agent loop (until it returns something).\n async with asyncio_timeout(self.max_execution_time):\n try:\n while self._should_continue(iterations, time_elapsed):\n next_step_output = await self._atake_next_step(\n name_to_tool_map,\n color_mapping,\n inputs,\n intermediate_steps,\n run_manager=run_manager,\n )\n if isinstance(next_step_output, AgentFinish):\n return await self._areturn(\n next_step_output,\n intermediate_steps,\n run_manager=run_manager,\n )\n intermediate_steps.extend(next_step_output)\n if len(next_step_output) == 1:\n next_step_action = next_step_output[0]\n # See if tool should return directly\n tool_return = self._get_tool_return(next_step_action)\n if tool_return is not None:\n return await self._areturn(\n tool_return, intermediate_steps, run_manager=run_manager\n )\n iterations += 1\n time_elapsed = time.time() - start_time\n output = self.agent.return_stopped_response(\n self.early_stopping_method, intermediate_steps, **inputs\n )\n return await self._areturn(\n output, intermediate_steps, run_manager=run_manager\n )\n except TimeoutError:\n # stop early when interrupted by the async timeout\n output = self.agent.return_stopped_response(\n self.early_stopping_method, intermediate_steps, **inputs\n )\n return await self._areturn(\n output, intermediate_steps, run_manager=run_manager\n )\n def _get_tool_return(\n self, next_step_output: Tuple[AgentAction, str]", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "c9638ff90a75-22", "text": "self, next_step_output: Tuple[AgentAction, str]\n ) -> Optional[AgentFinish]:\n \"\"\"Check if the tool is a returning tool.\"\"\"\n agent_action, observation = next_step_output\n name_to_tool_map = {tool.name: tool for tool in self.tools}\n # Invalid tools won't be in the map, so we return False.\n if agent_action.tool in name_to_tool_map:\n if name_to_tool_map[agent_action.tool].return_direct:\n return AgentFinish(\n {self.agent.return_values[0]: observation},\n \"\",\n )\n return None\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent.html"}
+{"id": "a913293a28bc-0", "text": "Source code for langchain.agents.loading\n\"\"\"Functionality for loading agents.\"\"\"\nimport json\nimport logging\nfrom pathlib import Path\nfrom typing import Any, List, Optional, Union\nimport yaml\nfrom langchain.agents.agent import BaseSingleActionAgent\nfrom langchain.agents.tools import Tool\nfrom langchain.agents.types import AGENT_TO_CLASS\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains.loading import load_chain, load_chain_from_config\nfrom langchain.utilities.loading import try_load_from_hub\nlogger = logging.getLogger(__file__)\nURL_BASE = \"https://raw.githubusercontent.com/hwchase17/langchain-hub/master/agents/\"\ndef _load_agent_from_tools(\n config: dict, llm: BaseLanguageModel, tools: List[Tool], **kwargs: Any\n) -> BaseSingleActionAgent:\n config_type = config.pop(\"_type\")\n if config_type not in AGENT_TO_CLASS:\n raise ValueError(f\"Loading {config_type} agent not supported\")\n agent_cls = AGENT_TO_CLASS[config_type]\n combined_config = {**config, **kwargs}\n return agent_cls.from_llm_and_tools(llm, tools, **combined_config)\ndef load_agent_from_config(\n config: dict,\n llm: Optional[BaseLanguageModel] = None,\n tools: Optional[List[Tool]] = None,\n **kwargs: Any,\n) -> BaseSingleActionAgent:\n \"\"\"Load agent from Config Dict.\"\"\"\n if \"_type\" not in config:\n raise ValueError(\"Must specify an agent Type in config\")\n load_from_tools = config.pop(\"load_from_llm_and_tools\", False)\n if load_from_tools:\n if llm is None:\n raise ValueError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/loading.html"}
+{"id": "a913293a28bc-1", "text": "if load_from_tools:\n if llm is None:\n raise ValueError(\n \"If `load_from_llm_and_tools` is set to True, \"\n \"then LLM must be provided\"\n )\n if tools is None:\n raise ValueError(\n \"If `load_from_llm_and_tools` is set to True, \"\n \"then tools must be provided\"\n )\n return _load_agent_from_tools(config, llm, tools, **kwargs)\n config_type = config.pop(\"_type\")\n if config_type not in AGENT_TO_CLASS:\n raise ValueError(f\"Loading {config_type} agent not supported\")\n agent_cls = AGENT_TO_CLASS[config_type]\n if \"llm_chain\" in config:\n config[\"llm_chain\"] = load_chain_from_config(config.pop(\"llm_chain\"))\n elif \"llm_chain_path\" in config:\n config[\"llm_chain\"] = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` and `llm_chain_path` should be specified.\")\n if \"output_parser\" in config:\n logger.warning(\n \"Currently loading output parsers on agent is not supported, \"\n \"will just use the default one.\"\n )\n del config[\"output_parser\"]\n combined_config = {**config, **kwargs}\n return agent_cls(**combined_config) # type: ignore\n[docs]def load_agent(path: Union[str, Path], **kwargs: Any) -> BaseSingleActionAgent:\n \"\"\"Unified method for loading a agent from LangChainHub or local fs.\"\"\"\n if hub_result := try_load_from_hub(\n path, _load_agent_from_file, \"agents\", {\"json\", \"yaml\"}", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/loading.html"}
+{"id": "a913293a28bc-2", "text": "path, _load_agent_from_file, \"agents\", {\"json\", \"yaml\"}\n ):\n return hub_result\n else:\n return _load_agent_from_file(path, **kwargs)\ndef _load_agent_from_file(\n file: Union[str, Path], **kwargs: Any\n) -> BaseSingleActionAgent:\n \"\"\"Load agent from file.\"\"\"\n # Convert file to Path object.\n if isinstance(file, str):\n file_path = Path(file)\n else:\n file_path = file\n # Load from either json or yaml.\n if file_path.suffix == \".json\":\n with open(file_path) as f:\n config = json.load(f)\n elif file_path.suffix == \".yaml\":\n with open(file_path, \"r\") as f:\n config = yaml.safe_load(f)\n else:\n raise ValueError(\"File type must be json or yaml\")\n # Load the agent from the config now.\n return load_agent_from_config(config, **kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/loading.html"}
+{"id": "9291600aa6ee-0", "text": "Source code for langchain.agents.agent_types\nfrom enum import Enum\n[docs]class AgentType(str, Enum):\n ZERO_SHOT_REACT_DESCRIPTION = \"zero-shot-react-description\"\n REACT_DOCSTORE = \"react-docstore\"\n SELF_ASK_WITH_SEARCH = \"self-ask-with-search\"\n CONVERSATIONAL_REACT_DESCRIPTION = \"conversational-react-description\"\n CHAT_ZERO_SHOT_REACT_DESCRIPTION = \"chat-zero-shot-react-description\"\n CHAT_CONVERSATIONAL_REACT_DESCRIPTION = \"chat-conversational-react-description\"\n STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION = (\n \"structured-chat-zero-shot-react-description\"\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_types.html"}
+{"id": "fde3fa009cfe-0", "text": "Source code for langchain.agents.load_tools\n# flake8: noqa\n\"\"\"Load tools.\"\"\"\nimport warnings\nfrom typing import Any, Dict, List, Optional, Callable, Tuple\nfrom mypy_extensions import Arg, KwArg\nfrom langchain.agents.tools import Tool\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chains.api import news_docs, open_meteo_docs, podcast_docs, tmdb_docs\nfrom langchain.chains.api.base import APIChain\nfrom langchain.chains.llm_math.base import LLMMathChain\nfrom langchain.chains.pal.base import PALChain\nfrom langchain.requests import TextRequestsWrapper\nfrom langchain.tools.arxiv.tool import ArxivQueryRun\nfrom langchain.tools.pubmed.tool import PubmedQueryRun\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.bing_search.tool import BingSearchRun\nfrom langchain.tools.ddg_search.tool import DuckDuckGoSearchRun\nfrom langchain.tools.google_search.tool import GoogleSearchResults, GoogleSearchRun\nfrom langchain.tools.metaphor_search.tool import MetaphorSearchResults\nfrom langchain.tools.google_serper.tool import GoogleSerperResults, GoogleSerperRun\nfrom langchain.tools.graphql.tool import BaseGraphQLTool\nfrom langchain.tools.human.tool import HumanInputRun\nfrom langchain.tools.python.tool import PythonREPLTool\nfrom langchain.tools.requests.tool import (\n RequestsDeleteTool,\n RequestsGetTool,\n RequestsPatchTool,\n RequestsPostTool,\n RequestsPutTool,\n)\nfrom langchain.tools.scenexplain.tool import SceneXplainTool\nfrom langchain.tools.searx_search.tool import SearxSearchResults, SearxSearchRun\nfrom langchain.tools.shell.tool import ShellTool", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"}
+{"id": "fde3fa009cfe-1", "text": "from langchain.tools.shell.tool import ShellTool\nfrom langchain.tools.sleep.tool import SleepTool\nfrom langchain.tools.wikipedia.tool import WikipediaQueryRun\nfrom langchain.tools.wolfram_alpha.tool import WolframAlphaQueryRun\nfrom langchain.tools.openweathermap.tool import OpenWeatherMapQueryRun\nfrom langchain.utilities import ArxivAPIWrapper\nfrom langchain.utilities import PubMedAPIWrapper\nfrom langchain.utilities.bing_search import BingSearchAPIWrapper\nfrom langchain.utilities.duckduckgo_search import DuckDuckGoSearchAPIWrapper\nfrom langchain.utilities.google_search import GoogleSearchAPIWrapper\nfrom langchain.utilities.google_serper import GoogleSerperAPIWrapper\nfrom langchain.utilities.metaphor_search import MetaphorSearchAPIWrapper\nfrom langchain.utilities.awslambda import LambdaWrapper\nfrom langchain.utilities.graphql import GraphQLAPIWrapper\nfrom langchain.utilities.searx_search import SearxSearchWrapper\nfrom langchain.utilities.serpapi import SerpAPIWrapper\nfrom langchain.utilities.twilio import TwilioAPIWrapper\nfrom langchain.utilities.wikipedia import WikipediaAPIWrapper\nfrom langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper\nfrom langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper\ndef _get_python_repl() -> BaseTool:\n return PythonREPLTool()\ndef _get_tools_requests_get() -> BaseTool:\n return RequestsGetTool(requests_wrapper=TextRequestsWrapper())\ndef _get_tools_requests_post() -> BaseTool:\n return RequestsPostTool(requests_wrapper=TextRequestsWrapper())\ndef _get_tools_requests_patch() -> BaseTool:\n return RequestsPatchTool(requests_wrapper=TextRequestsWrapper())\ndef _get_tools_requests_put() -> BaseTool:\n return RequestsPutTool(requests_wrapper=TextRequestsWrapper())\ndef _get_tools_requests_delete() -> BaseTool:", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"}
+{"id": "fde3fa009cfe-2", "text": "def _get_tools_requests_delete() -> BaseTool:\n return RequestsDeleteTool(requests_wrapper=TextRequestsWrapper())\ndef _get_terminal() -> BaseTool:\n return ShellTool()\ndef _get_sleep() -> BaseTool:\n return SleepTool()\n_BASE_TOOLS: Dict[str, Callable[[], BaseTool]] = {\n \"python_repl\": _get_python_repl,\n \"requests\": _get_tools_requests_get, # preserved for backwards compatability\n \"requests_get\": _get_tools_requests_get,\n \"requests_post\": _get_tools_requests_post,\n \"requests_patch\": _get_tools_requests_patch,\n \"requests_put\": _get_tools_requests_put,\n \"requests_delete\": _get_tools_requests_delete,\n \"terminal\": _get_terminal,\n \"sleep\": _get_sleep,\n}\ndef _get_pal_math(llm: BaseLanguageModel) -> BaseTool:\n return Tool(\n name=\"PAL-MATH\",\n description=\"A language model that is really good at solving complex word math problems. Input should be a fully worded hard word math problem.\",\n func=PALChain.from_math_prompt(llm).run,\n )\ndef _get_pal_colored_objects(llm: BaseLanguageModel) -> BaseTool:\n return Tool(\n name=\"PAL-COLOR-OBJ\",\n description=\"A language model that is really good at reasoning about position and the color attributes of objects. Input should be a fully worded hard reasoning problem. Make sure to include all information about the objects AND the final question you want to answer.\",\n func=PALChain.from_colored_object_prompt(llm).run,\n )\ndef _get_llm_math(llm: BaseLanguageModel) -> BaseTool:\n return Tool(\n name=\"Calculator\",", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"}
+{"id": "fde3fa009cfe-3", "text": "return Tool(\n name=\"Calculator\",\n description=\"Useful for when you need to answer questions about math.\",\n func=LLMMathChain.from_llm(llm=llm).run,\n coroutine=LLMMathChain.from_llm(llm=llm).arun,\n )\ndef _get_open_meteo_api(llm: BaseLanguageModel) -> BaseTool:\n chain = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS)\n return Tool(\n name=\"Open Meteo API\",\n description=\"Useful for when you want to get weather information from the OpenMeteo API. The input should be a question in natural language that this API can answer.\",\n func=chain.run,\n )\n_LLM_TOOLS: Dict[str, Callable[[BaseLanguageModel], BaseTool]] = {\n \"pal-math\": _get_pal_math,\n \"pal-colored-objects\": _get_pal_colored_objects,\n \"llm-math\": _get_llm_math,\n \"open-meteo-api\": _get_open_meteo_api,\n}\ndef _get_news_api(llm: BaseLanguageModel, **kwargs: Any) -> BaseTool:\n news_api_key = kwargs[\"news_api_key\"]\n chain = APIChain.from_llm_and_api_docs(\n llm, news_docs.NEWS_DOCS, headers={\"X-Api-Key\": news_api_key}\n )\n return Tool(\n name=\"News API\",\n description=\"Use this when you want to get information about the top headlines of current news stories. The input should be a question in natural language that this API can answer.\",\n func=chain.run,\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"}
+{"id": "fde3fa009cfe-4", "text": "func=chain.run,\n )\ndef _get_tmdb_api(llm: BaseLanguageModel, **kwargs: Any) -> BaseTool:\n tmdb_bearer_token = kwargs[\"tmdb_bearer_token\"]\n chain = APIChain.from_llm_and_api_docs(\n llm,\n tmdb_docs.TMDB_DOCS,\n headers={\"Authorization\": f\"Bearer {tmdb_bearer_token}\"},\n )\n return Tool(\n name=\"TMDB API\",\n description=\"Useful for when you want to get information from The Movie Database. The input should be a question in natural language that this API can answer.\",\n func=chain.run,\n )\ndef _get_podcast_api(llm: BaseLanguageModel, **kwargs: Any) -> BaseTool:\n listen_api_key = kwargs[\"listen_api_key\"]\n chain = APIChain.from_llm_and_api_docs(\n llm,\n podcast_docs.PODCAST_DOCS,\n headers={\"X-ListenAPI-Key\": listen_api_key},\n )\n return Tool(\n name=\"Podcast API\",\n description=\"Use the Listen Notes Podcast API to search all podcasts or episodes. The input should be a question in natural language that this API can answer.\",\n func=chain.run,\n )\ndef _get_lambda_api(**kwargs: Any) -> BaseTool:\n return Tool(\n name=kwargs[\"awslambda_tool_name\"],\n description=kwargs[\"awslambda_tool_description\"],\n func=LambdaWrapper(**kwargs).run,\n )\ndef _get_wolfram_alpha(**kwargs: Any) -> BaseTool:\n return WolframAlphaQueryRun(api_wrapper=WolframAlphaAPIWrapper(**kwargs))\ndef _get_google_search(**kwargs: Any) -> BaseTool:", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"}
+{"id": "fde3fa009cfe-5", "text": "def _get_google_search(**kwargs: Any) -> BaseTool:\n return GoogleSearchRun(api_wrapper=GoogleSearchAPIWrapper(**kwargs))\ndef _get_wikipedia(**kwargs: Any) -> BaseTool:\n return WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper(**kwargs))\ndef _get_arxiv(**kwargs: Any) -> BaseTool:\n return ArxivQueryRun(api_wrapper=ArxivAPIWrapper(**kwargs))\ndef _get_pupmed(**kwargs: Any) -> BaseTool:\n return PubmedQueryRun(api_wrapper=PubMedAPIWrapper(**kwargs))\ndef _get_google_serper(**kwargs: Any) -> BaseTool:\n return GoogleSerperRun(api_wrapper=GoogleSerperAPIWrapper(**kwargs))\ndef _get_google_serper_results_json(**kwargs: Any) -> BaseTool:\n return GoogleSerperResults(api_wrapper=GoogleSerperAPIWrapper(**kwargs))\ndef _get_google_search_results_json(**kwargs: Any) -> BaseTool:\n return GoogleSearchResults(api_wrapper=GoogleSearchAPIWrapper(**kwargs))\ndef _get_serpapi(**kwargs: Any) -> BaseTool:\n return Tool(\n name=\"Search\",\n description=\"A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\",\n func=SerpAPIWrapper(**kwargs).run,\n coroutine=SerpAPIWrapper(**kwargs).arun,\n )\ndef _get_twilio(**kwargs: Any) -> BaseTool:\n return Tool(\n name=\"Text Message\",\n description=\"Useful for when you need to send a text message to a provided phone number.\",\n func=TwilioAPIWrapper(**kwargs).run,\n )\ndef _get_searx_search(**kwargs: Any) -> BaseTool:", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"}
+{"id": "fde3fa009cfe-6", "text": ")\ndef _get_searx_search(**kwargs: Any) -> BaseTool:\n return SearxSearchRun(wrapper=SearxSearchWrapper(**kwargs))\ndef _get_searx_search_results_json(**kwargs: Any) -> BaseTool:\n wrapper_kwargs = {k: v for k, v in kwargs.items() if k != \"num_results\"}\n return SearxSearchResults(wrapper=SearxSearchWrapper(**wrapper_kwargs), **kwargs)\ndef _get_bing_search(**kwargs: Any) -> BaseTool:\n return BingSearchRun(api_wrapper=BingSearchAPIWrapper(**kwargs))\ndef _get_metaphor_search(**kwargs: Any) -> BaseTool:\n return MetaphorSearchResults(api_wrapper=MetaphorSearchAPIWrapper(**kwargs))\ndef _get_ddg_search(**kwargs: Any) -> BaseTool:\n return DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper(**kwargs))\ndef _get_human_tool(**kwargs: Any) -> BaseTool:\n return HumanInputRun(**kwargs)\ndef _get_scenexplain(**kwargs: Any) -> BaseTool:\n return SceneXplainTool(**kwargs)\ndef _get_graphql_tool(**kwargs: Any) -> BaseTool:\n graphql_endpoint = kwargs[\"graphql_endpoint\"]\n wrapper = GraphQLAPIWrapper(graphql_endpoint=graphql_endpoint)\n return BaseGraphQLTool(graphql_wrapper=wrapper)\ndef _get_openweathermap(**kwargs: Any) -> BaseTool:\n return OpenWeatherMapQueryRun(api_wrapper=OpenWeatherMapAPIWrapper(**kwargs))\n_EXTRA_LLM_TOOLS: Dict[\n str,\n Tuple[Callable[[Arg(BaseLanguageModel, \"llm\"), KwArg(Any)], BaseTool], List[str]],\n] = {", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"}
+{"id": "fde3fa009cfe-7", "text": "] = {\n \"news-api\": (_get_news_api, [\"news_api_key\"]),\n \"tmdb-api\": (_get_tmdb_api, [\"tmdb_bearer_token\"]),\n \"podcast-api\": (_get_podcast_api, [\"listen_api_key\"]),\n}\n_EXTRA_OPTIONAL_TOOLS: Dict[str, Tuple[Callable[[KwArg(Any)], BaseTool], List[str]]] = {\n \"wolfram-alpha\": (_get_wolfram_alpha, [\"wolfram_alpha_appid\"]),\n \"google-search\": (_get_google_search, [\"google_api_key\", \"google_cse_id\"]),\n \"google-search-results-json\": (\n _get_google_search_results_json,\n [\"google_api_key\", \"google_cse_id\", \"num_results\"],\n ),\n \"searx-search-results-json\": (\n _get_searx_search_results_json,\n [\"searx_host\", \"engines\", \"num_results\", \"aiosession\"],\n ),\n \"bing-search\": (_get_bing_search, [\"bing_subscription_key\", \"bing_search_url\"]),\n \"metaphor-search\": (_get_metaphor_search, [\"metaphor_api_key\"]),\n \"ddg-search\": (_get_ddg_search, []),\n \"google-serper\": (_get_google_serper, [\"serper_api_key\", \"aiosession\"]),\n \"google-serper-results-json\": (\n _get_google_serper_results_json,\n [\"serper_api_key\", \"aiosession\"],\n ),\n \"serpapi\": (_get_serpapi, [\"serpapi_api_key\", \"aiosession\"]),\n \"twilio\": (_get_twilio, [\"account_sid\", \"auth_token\", \"from_number\"]),", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"}
+{"id": "fde3fa009cfe-8", "text": "\"searx-search\": (_get_searx_search, [\"searx_host\", \"engines\", \"aiosession\"]),\n \"wikipedia\": (_get_wikipedia, [\"top_k_results\", \"lang\"]),\n \"arxiv\": (\n _get_arxiv,\n [\"top_k_results\", \"load_max_docs\", \"load_all_available_meta\"],\n ),\n \"pupmed\": (\n _get_pupmed,\n [\"top_k_results\", \"load_max_docs\", \"load_all_available_meta\"],\n ),\n \"human\": (_get_human_tool, [\"prompt_func\", \"input_func\"]),\n \"awslambda\": (\n _get_lambda_api,\n [\"awslambda_tool_name\", \"awslambda_tool_description\", \"function_name\"],\n ),\n \"sceneXplain\": (_get_scenexplain, []),\n \"graphql\": (_get_graphql_tool, [\"graphql_endpoint\"]),\n \"openweathermap-api\": (_get_openweathermap, [\"openweathermap_api_key\"]),\n}\ndef _handle_callbacks(\n callback_manager: Optional[BaseCallbackManager], callbacks: Callbacks\n) -> Callbacks:\n if callback_manager is not None:\n warnings.warn(\n \"callback_manager is deprecated. Please use callbacks instead.\",\n DeprecationWarning,\n )\n if callbacks is not None:\n raise ValueError(\n \"Cannot specify both callback_manager and callbacks arguments.\"\n )\n return callback_manager\n return callbacks\n[docs]def load_huggingface_tool(\n task_or_repo_id: str,\n model_repo_id: Optional[str] = None,\n token: Optional[str] = None,\n remote: bool = False,\n **kwargs: Any,\n) -> BaseTool:\n try:", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"}
+{"id": "fde3fa009cfe-9", "text": "**kwargs: Any,\n) -> BaseTool:\n try:\n from transformers import load_tool\n except ImportError:\n raise ValueError(\n \"HuggingFace tools require the libraries `transformers>=4.29.0`\"\n \" and `huggingface_hub>=0.14.1` to be installed.\"\n \" Please install it with\"\n \" `pip install --upgrade transformers huggingface_hub`.\"\n )\n hf_tool = load_tool(\n task_or_repo_id,\n model_repo_id=model_repo_id,\n token=token,\n remote=remote,\n **kwargs,\n )\n outputs = hf_tool.outputs\n if set(outputs) != {\"text\"}:\n raise NotImplementedError(\"Multimodal outputs not supported yet.\")\n inputs = hf_tool.inputs\n if set(inputs) != {\"text\"}:\n raise NotImplementedError(\"Multimodal inputs not supported yet.\")\n return Tool.from_function(\n hf_tool.__call__, name=hf_tool.name, description=hf_tool.description\n )\n[docs]def load_tools(\n tool_names: List[str],\n llm: Optional[BaseLanguageModel] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n) -> List[BaseTool]:\n \"\"\"Load tools based on their name.\n Args:\n tool_names: name of tools to load.\n llm: Optional language model, may be needed to initialize certain tools.\n callbacks: Optional callback manager or list of callback handlers.\n If not provided, default global callback manager will be used.\n Returns:\n List of tools.\n \"\"\"\n tools = []\n callbacks = _handle_callbacks(\n callback_manager=kwargs.get(\"callback_manager\"), callbacks=callbacks", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"}
+{"id": "fde3fa009cfe-10", "text": "callback_manager=kwargs.get(\"callback_manager\"), callbacks=callbacks\n )\n for name in tool_names:\n if name == \"requests\":\n warnings.warn(\n \"tool name `requests` is deprecated - \"\n \"please use `requests_all` or specify the requests method\"\n )\n if name == \"requests_all\":\n # expand requests into various methods\n requests_method_tools = [\n _tool for _tool in _BASE_TOOLS if _tool.startswith(\"requests_\")\n ]\n tool_names.extend(requests_method_tools)\n elif name in _BASE_TOOLS:\n tools.append(_BASE_TOOLS[name]())\n elif name in _LLM_TOOLS:\n if llm is None:\n raise ValueError(f\"Tool {name} requires an LLM to be provided\")\n tool = _LLM_TOOLS[name](llm)\n tools.append(tool)\n elif name in _EXTRA_LLM_TOOLS:\n if llm is None:\n raise ValueError(f\"Tool {name} requires an LLM to be provided\")\n _get_llm_tool_func, extra_keys = _EXTRA_LLM_TOOLS[name]\n missing_keys = set(extra_keys).difference(kwargs)\n if missing_keys:\n raise ValueError(\n f\"Tool {name} requires some parameters that were not \"\n f\"provided: {missing_keys}\"\n )\n sub_kwargs = {k: kwargs[k] for k in extra_keys}\n tool = _get_llm_tool_func(llm=llm, **sub_kwargs)\n tools.append(tool)\n elif name in _EXTRA_OPTIONAL_TOOLS:\n _get_tool_func, extra_keys = _EXTRA_OPTIONAL_TOOLS[name]", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"}
+{"id": "fde3fa009cfe-11", "text": "_get_tool_func, extra_keys = _EXTRA_OPTIONAL_TOOLS[name]\n sub_kwargs = {k: kwargs[k] for k in extra_keys if k in kwargs}\n tool = _get_tool_func(**sub_kwargs)\n tools.append(tool)\n else:\n raise ValueError(f\"Got unknown tool {name}\")\n if callbacks is not None:\n for tool in tools:\n tool.callbacks = callbacks\n return tools\n[docs]def get_all_tool_names() -> List[str]:\n \"\"\"Get a list of all possible tool names.\"\"\"\n return (\n list(_BASE_TOOLS)\n + list(_EXTRA_OPTIONAL_TOOLS)\n + list(_EXTRA_LLM_TOOLS)\n + list(_LLM_TOOLS)\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"}
+{"id": "679d207e5db1-0", "text": "Source code for langchain.agents.conversational_chat.base\n\"\"\"An agent designed to hold a conversation in addition to using tools.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, List, Optional, Sequence, Tuple\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentOutputParser\nfrom langchain.agents.conversational_chat.output_parser import ConvoOutputParser\nfrom langchain.agents.conversational_chat.prompt import (\n PREFIX,\n SUFFIX,\n TEMPLATE_TOOL_RESPONSE,\n)\nfrom langchain.agents.utils import validate_tools_single_input\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains import LLMChain\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.chat import (\n ChatPromptTemplate,\n HumanMessagePromptTemplate,\n MessagesPlaceholder,\n SystemMessagePromptTemplate,\n)\nfrom langchain.schema import (\n AgentAction,\n AIMessage,\n BaseMessage,\n BaseOutputParser,\n HumanMessage,\n)\nfrom langchain.tools.base import BaseTool\n[docs]class ConversationalChatAgent(Agent):\n \"\"\"An agent designed to hold a conversation in addition to using tools.\"\"\"\n output_parser: AgentOutputParser = Field(default_factory=ConvoOutputParser)\n template_tool_response: str = TEMPLATE_TOOL_RESPONSE\n @classmethod\n def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:\n return ConvoOutputParser()\n @property\n def _agent_type(self) -> str:\n raise NotImplementedError\n @property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Observation: \"\n @property", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/conversational_chat/base.html"}
+{"id": "679d207e5db1-1", "text": "return \"Observation: \"\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the llm call with.\"\"\"\n return \"Thought:\"\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n super()._validate_tools(tools)\n validate_tools_single_input(cls.__name__, tools)\n[docs] @classmethod\n def create_prompt(\n cls,\n tools: Sequence[BaseTool],\n system_message: str = PREFIX,\n human_message: str = SUFFIX,\n input_variables: Optional[List[str]] = None,\n output_parser: Optional[BaseOutputParser] = None,\n ) -> BasePromptTemplate:\n tool_strings = \"\\n\".join(\n [f\"> {tool.name}: {tool.description}\" for tool in tools]\n )\n tool_names = \", \".join([tool.name for tool in tools])\n _output_parser = output_parser or cls._get_default_output_parser()\n format_instructions = human_message.format(\n format_instructions=_output_parser.get_format_instructions()\n )\n final_prompt = format_instructions.format(\n tool_names=tool_names, tools=tool_strings\n )\n if input_variables is None:\n input_variables = [\"input\", \"chat_history\", \"agent_scratchpad\"]\n messages = [\n SystemMessagePromptTemplate.from_template(system_message),\n MessagesPlaceholder(variable_name=\"chat_history\"),\n HumanMessagePromptTemplate.from_template(final_prompt),\n MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n ]\n return ChatPromptTemplate(input_variables=input_variables, messages=messages)\n def _construct_scratchpad(\n self, intermediate_steps: List[Tuple[AgentAction, str]]\n ) -> List[BaseMessage]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/conversational_chat/base.html"}
+{"id": "679d207e5db1-2", "text": ") -> List[BaseMessage]:\n \"\"\"Construct the scratchpad that lets the agent continue its thought process.\"\"\"\n thoughts: List[BaseMessage] = []\n for action, observation in intermediate_steps:\n thoughts.append(AIMessage(content=action.log))\n human_message = HumanMessage(\n content=self.template_tool_response.format(observation=observation)\n )\n thoughts.append(human_message)\n return thoughts\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n system_message: str = PREFIX,\n human_message: str = SUFFIX,\n input_variables: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> Agent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n cls._validate_tools(tools)\n _output_parser = output_parser or cls._get_default_output_parser()\n prompt = cls.create_prompt(\n tools,\n system_message=system_message,\n human_message=human_message,\n input_variables=input_variables,\n output_parser=_output_parser,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n return cls(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n output_parser=_output_parser,\n **kwargs,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/conversational_chat/base.html"}
+{"id": "679d207e5db1-3", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/conversational_chat/base.html"}
+{"id": "f715e1abf57c-0", "text": "Source code for langchain.agents.mrkl.base\n\"\"\"Attempt to implement MRKL systems as described in arxiv.org/pdf/2205.00445.pdf.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Callable, List, NamedTuple, Optional, Sequence\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentExecutor, AgentOutputParser\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.mrkl.output_parser import MRKLOutputParser\nfrom langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS, PREFIX, SUFFIX\nfrom langchain.agents.tools import Tool\nfrom langchain.agents.utils import validate_tools_single_input\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains import LLMChain\nfrom langchain.prompts import PromptTemplate\nfrom langchain.tools.base import BaseTool\nclass ChainConfig(NamedTuple):\n \"\"\"Configuration for chain to use in MRKL system.\n Args:\n action_name: Name of the action.\n action: Action function to call.\n action_description: Description of the action.\n \"\"\"\n action_name: str\n action: Callable\n action_description: str\n[docs]class ZeroShotAgent(Agent):\n \"\"\"Agent for the MRKL chain.\"\"\"\n output_parser: AgentOutputParser = Field(default_factory=MRKLOutputParser)\n @classmethod\n def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:\n return MRKLOutputParser()\n @property\n def _agent_type(self) -> str:\n \"\"\"Return Identifier of agent type.\"\"\"\n return AgentType.ZERO_SHOT_REACT_DESCRIPTION\n @property\n def observation_prefix(self) -> str:", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/mrkl/base.html"}
+{"id": "f715e1abf57c-1", "text": "@property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Observation: \"\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the llm call with.\"\"\"\n return \"Thought:\"\n[docs] @classmethod\n def create_prompt(\n cls,\n tools: Sequence[BaseTool],\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n ) -> PromptTemplate:\n \"\"\"Create prompt in the style of the zero shot agent.\n Args:\n tools: List of tools the agent will have access to, used to format the\n prompt.\n prefix: String to put before the list of tools.\n suffix: String to put after the list of tools.\n input_variables: List of input variables the final prompt will expect.\n Returns:\n A PromptTemplate with the template assembled from the pieces here.\n \"\"\"\n tool_strings = \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools])\n tool_names = \", \".join([tool.name for tool in tools])\n format_instructions = format_instructions.format(tool_names=tool_names)\n template = \"\\n\\n\".join([prefix, tool_strings, format_instructions, suffix])\n if input_variables is None:\n input_variables = [\"input\", \"agent_scratchpad\"]\n return PromptTemplate(template=template, input_variables=input_variables)\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/mrkl/base.html"}
+{"id": "f715e1abf57c-2", "text": "llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> Agent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n cls._validate_tools(tools)\n prompt = cls.create_prompt(\n tools,\n prefix=prefix,\n suffix=suffix,\n format_instructions=format_instructions,\n input_variables=input_variables,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n _output_parser = output_parser or cls._get_default_output_parser()\n return cls(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n output_parser=_output_parser,\n **kwargs,\n )\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n validate_tools_single_input(cls.__name__, tools)\n for tool in tools:\n if tool.description is None:\n raise ValueError(\n f\"Got a tool {tool.name} without a description. For this agent, \"\n f\"a description must always be provided.\"\n )\n super()._validate_tools(tools)\n[docs]class MRKLChain(AgentExecutor):\n \"\"\"Chain that implements the MRKL system.\n Example:\n .. code-block:: python", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/mrkl/base.html"}
+{"id": "f715e1abf57c-3", "text": "Example:\n .. code-block:: python\n from langchain import OpenAI, MRKLChain\n from langchain.chains.mrkl.base import ChainConfig\n llm = OpenAI(temperature=0)\n prompt = PromptTemplate(...)\n chains = [...]\n mrkl = MRKLChain.from_chains(llm=llm, prompt=prompt)\n \"\"\"\n[docs] @classmethod\n def from_chains(\n cls, llm: BaseLanguageModel, chains: List[ChainConfig], **kwargs: Any\n ) -> AgentExecutor:\n \"\"\"User friendly way to initialize the MRKL chain.\n This is intended to be an easy way to get up and running with the\n MRKL chain.\n Args:\n llm: The LLM to use as the agent LLM.\n chains: The chains the MRKL system has access to.\n **kwargs: parameters to be passed to initialization.\n Returns:\n An initialized MRKL chain.\n Example:\n .. code-block:: python\n from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, MRKLChain\n from langchain.chains.mrkl.base import ChainConfig\n llm = OpenAI(temperature=0)\n search = SerpAPIWrapper()\n llm_math_chain = LLMMathChain(llm=llm)\n chains = [\n ChainConfig(\n action_name = \"Search\",\n action=search.search,\n action_description=\"useful for searching\"\n ),\n ChainConfig(\n action_name=\"Calculator\",\n action=llm_math_chain.run,\n action_description=\"useful for doing math\"\n )\n ]\n mrkl = MRKLChain.from_chains(llm, chains)", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/mrkl/base.html"}
+{"id": "f715e1abf57c-4", "text": "]\n mrkl = MRKLChain.from_chains(llm, chains)\n \"\"\"\n tools = [\n Tool(\n name=c.action_name,\n func=c.action,\n description=c.action_description,\n )\n for c in chains\n ]\n agent = ZeroShotAgent.from_llm_and_tools(llm, tools)\n return cls(agent=agent, tools=tools, **kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/mrkl/base.html"}
+{"id": "2e6a41142cff-0", "text": "Source code for langchain.agents.agent_toolkits.playwright.toolkit\n\"\"\"Playwright web browser toolkit.\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, List, Optional, Type, cast\nfrom pydantic import Extra, root_validator\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.playwright.base import (\n BaseBrowserTool,\n lazy_import_playwright_browsers,\n)\nfrom langchain.tools.playwright.click import ClickTool\nfrom langchain.tools.playwright.current_page import CurrentWebPageTool\nfrom langchain.tools.playwright.extract_hyperlinks import ExtractHyperlinksTool\nfrom langchain.tools.playwright.extract_text import ExtractTextTool\nfrom langchain.tools.playwright.get_elements import GetElementsTool\nfrom langchain.tools.playwright.navigate import NavigateTool\nfrom langchain.tools.playwright.navigate_back import NavigateBackTool\nif TYPE_CHECKING:\n from playwright.async_api import Browser as AsyncBrowser\n from playwright.sync_api import Browser as SyncBrowser\nelse:\n try:\n # We do this so pydantic can resolve the types when instantiating\n from playwright.async_api import Browser as AsyncBrowser\n from playwright.sync_api import Browser as SyncBrowser\n except ImportError:\n pass\n[docs]class PlayWrightBrowserToolkit(BaseToolkit):\n \"\"\"Toolkit for web browser tools.\"\"\"\n sync_browser: Optional[\"SyncBrowser\"] = None\n async_browser: Optional[\"AsyncBrowser\"] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator\n def validate_imports_and_browser_provided(cls, values: dict) -> dict:\n \"\"\"Check that the arguments are valid.\"\"\"\n lazy_import_playwright_browsers()", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/playwright/toolkit.html"}
+{"id": "2e6a41142cff-1", "text": "\"\"\"Check that the arguments are valid.\"\"\"\n lazy_import_playwright_browsers()\n if values.get(\"async_browser\") is None and values.get(\"sync_browser\") is None:\n raise ValueError(\"Either async_browser or sync_browser must be specified.\")\n return values\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n tool_classes: List[Type[BaseBrowserTool]] = [\n ClickTool,\n NavigateTool,\n NavigateBackTool,\n ExtractTextTool,\n ExtractHyperlinksTool,\n GetElementsTool,\n CurrentWebPageTool,\n ]\n tools = [\n tool_cls.from_browser(\n sync_browser=self.sync_browser, async_browser=self.async_browser\n )\n for tool_cls in tool_classes\n ]\n return cast(List[BaseTool], tools)\n[docs] @classmethod\n def from_browser(\n cls,\n sync_browser: Optional[SyncBrowser] = None,\n async_browser: Optional[AsyncBrowser] = None,\n ) -> PlayWrightBrowserToolkit:\n \"\"\"Instantiate the toolkit.\"\"\"\n # This is to raise a better error than the forward ref ones Pydantic would have\n lazy_import_playwright_browsers()\n return cls(sync_browser=sync_browser, async_browser=async_browser)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/playwright/toolkit.html"}
+{"id": "eda5880f8561-0", "text": "Source code for langchain.agents.agent_toolkits.azure_cognitive_services.toolkit\nfrom __future__ import annotations\nimport sys\nfrom typing import List\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.tools.azure_cognitive_services import (\n AzureCogsFormRecognizerTool,\n AzureCogsImageAnalysisTool,\n AzureCogsSpeech2TextTool,\n AzureCogsText2SpeechTool,\n)\nfrom langchain.tools.base import BaseTool\n[docs]class AzureCognitiveServicesToolkit(BaseToolkit):\n \"\"\"Toolkit for Azure Cognitive Services.\"\"\"\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n tools = [\n AzureCogsFormRecognizerTool(),\n AzureCogsSpeech2TextTool(),\n AzureCogsText2SpeechTool(),\n ]\n # TODO: Remove check once azure-ai-vision supports MacOS.\n if sys.platform.startswith(\"linux\") or sys.platform.startswith(\"win\"):\n tools.append(AzureCogsImageAnalysisTool())\n return tools\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/azure_cognitive_services/toolkit.html"}
+{"id": "e2cf8219650c-0", "text": "Source code for langchain.agents.agent_toolkits.csv.base\n\"\"\"Agent for working with csvs.\"\"\"\nfrom typing import Any, List, Optional, Union\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.pandas.base import create_pandas_dataframe_agent\nfrom langchain.base_language import BaseLanguageModel\n[docs]def create_csv_agent(\n llm: BaseLanguageModel,\n path: Union[str, List[str]],\n pandas_kwargs: Optional[dict] = None,\n **kwargs: Any,\n) -> AgentExecutor:\n \"\"\"Create csv agent by loading to a dataframe and using pandas agent.\"\"\"\n try:\n import pandas as pd\n except ImportError:\n raise ValueError(\n \"pandas package not found, please install with `pip install pandas`\"\n )\n _kwargs = pandas_kwargs or {}\n if isinstance(path, str):\n df = pd.read_csv(path, **_kwargs)\n elif isinstance(path, list):\n df = []\n for item in path:\n if not isinstance(item, str):\n raise ValueError(f\"Expected str, got {type(path)}\")\n df.append(pd.read_csv(item, **_kwargs))\n else:\n raise ValueError(f\"Expected str or list, got {type(path)}\")\n return create_pandas_dataframe_agent(llm, df, **kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/csv/base.html"}
+{"id": "3a8b1efb69b9-0", "text": "Source code for langchain.agents.agent_toolkits.spark_sql.base\n\"\"\"Spark SQL agent.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.spark_sql.prompt import SQL_PREFIX, SQL_SUFFIX\nfrom langchain.agents.agent_toolkits.spark_sql.toolkit import SparkSQLToolkit\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\n[docs]def create_spark_sql_agent(\n llm: BaseLanguageModel,\n toolkit: SparkSQLToolkit,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = SQL_PREFIX,\n suffix: str = SQL_SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n top_k: int = 10,\n max_iterations: Optional[int] = 15,\n max_execution_time: Optional[float] = None,\n early_stopping_method: str = \"force\",\n verbose: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a sql agent from an LLM and tools.\"\"\"\n tools = toolkit.get_tools()\n prefix = prefix.format(top_k=top_k)\n prompt = ZeroShotAgent.create_prompt(\n tools,\n prefix=prefix,\n suffix=suffix,\n format_instructions=format_instructions,\n input_variables=input_variables,\n )\n llm_chain = LLMChain(\n llm=llm,", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/spark_sql/base.html"}
+{"id": "3a8b1efb69b9-1", "text": "llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n max_iterations=max_iterations,\n max_execution_time=max_execution_time,\n early_stopping_method=early_stopping_method,\n **(agent_executor_kwargs or {}),\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/spark_sql/base.html"}
+{"id": "c7bdd0b1e16f-0", "text": "Source code for langchain.agents.agent_toolkits.spark_sql.toolkit\n\"\"\"Toolkit for interacting with Spark SQL.\"\"\"\nfrom typing import List\nfrom pydantic import Field\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.tools import BaseTool\nfrom langchain.tools.spark_sql.tool import (\n InfoSparkSQLTool,\n ListSparkSQLTool,\n QueryCheckerTool,\n QuerySparkSQLTool,\n)\nfrom langchain.utilities.spark_sql import SparkSQL\n[docs]class SparkSQLToolkit(BaseToolkit):\n \"\"\"Toolkit for interacting with Spark SQL.\"\"\"\n db: SparkSQL = Field(exclude=True)\n llm: BaseLanguageModel = Field(exclude=True)\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n return [\n QuerySparkSQLTool(db=self.db),\n InfoSparkSQLTool(db=self.db),\n ListSparkSQLTool(db=self.db),\n QueryCheckerTool(db=self.db, llm=self.llm),\n ]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/spark_sql/toolkit.html"}
+{"id": "672ca160c7a3-0", "text": "Source code for langchain.agents.agent_toolkits.spark.base\n\"\"\"Agent for working with pandas objects.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.spark.prompt import PREFIX, SUFFIX\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.llms.base import BaseLLM\nfrom langchain.tools.python.tool import PythonAstREPLTool\ndef _validate_spark_df(df: Any) -> bool:\n try:\n from pyspark.sql import DataFrame as SparkLocalDataFrame\n return isinstance(df, SparkLocalDataFrame)\n except ImportError:\n return False\ndef _validate_spark_connect_df(df: Any) -> bool:\n try:\n from pyspark.sql.connect.dataframe import DataFrame as SparkConnectDataFrame\n return isinstance(df, SparkConnectDataFrame)\n except ImportError:\n return False\n[docs]def create_spark_dataframe_agent(\n llm: BaseLLM,\n df: Any,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n input_variables: Optional[List[str]] = None,\n verbose: bool = False,\n return_intermediate_steps: bool = False,\n max_iterations: Optional[int] = 15,\n max_execution_time: Optional[float] = None,\n early_stopping_method: str = \"force\",\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a spark agent from an LLM and dataframe.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/spark/base.html"}
+{"id": "672ca160c7a3-1", "text": ") -> AgentExecutor:\n \"\"\"Construct a spark agent from an LLM and dataframe.\"\"\"\n if not _validate_spark_df(df) and not _validate_spark_connect_df(df):\n raise ValueError(\"Spark is not installed. run `pip install pyspark`.\")\n if input_variables is None:\n input_variables = [\"df\", \"input\", \"agent_scratchpad\"]\n tools = [PythonAstREPLTool(locals={\"df\": df})]\n prompt = ZeroShotAgent.create_prompt(\n tools, prefix=prefix, suffix=suffix, input_variables=input_variables\n )\n partial_prompt = prompt.partial(df=str(df.first()))\n llm_chain = LLMChain(\n llm=llm,\n prompt=partial_prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n callback_manager=callback_manager,\n **kwargs,\n )\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n return_intermediate_steps=return_intermediate_steps,\n max_iterations=max_iterations,\n max_execution_time=max_execution_time,\n early_stopping_method=early_stopping_method,\n **(agent_executor_kwargs or {}),\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/spark/base.html"}
+{"id": "8a3c72e63e7d-0", "text": "Source code for langchain.agents.agent_toolkits.vectorstore.base\n\"\"\"VectorStore agent.\"\"\"\nfrom typing import Any, Dict, Optional\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.vectorstore.prompt import PREFIX, ROUTER_PREFIX\nfrom langchain.agents.agent_toolkits.vectorstore.toolkit import (\n VectorStoreRouterToolkit,\n VectorStoreToolkit,\n)\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\n[docs]def create_vectorstore_agent(\n llm: BaseLanguageModel,\n toolkit: VectorStoreToolkit,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = PREFIX,\n verbose: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a vectorstore agent from an LLM and tools.\"\"\"\n tools = toolkit.get_tools()\n prompt = ZeroShotAgent.create_prompt(tools, prefix=prefix)\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n **(agent_executor_kwargs or {}),\n )\n[docs]def create_vectorstore_router_agent(", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/vectorstore/base.html"}
+{"id": "8a3c72e63e7d-1", "text": ")\n[docs]def create_vectorstore_router_agent(\n llm: BaseLanguageModel,\n toolkit: VectorStoreRouterToolkit,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = ROUTER_PREFIX,\n verbose: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a vectorstore router agent from an LLM and tools.\"\"\"\n tools = toolkit.get_tools()\n prompt = ZeroShotAgent.create_prompt(tools, prefix=prefix)\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n **(agent_executor_kwargs or {}),\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/vectorstore/base.html"}
+{"id": "fca16cd97652-0", "text": "Source code for langchain.agents.agent_toolkits.vectorstore.toolkit\n\"\"\"Toolkit for interacting with a vector store.\"\"\"\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.llms.openai import OpenAI\nfrom langchain.tools import BaseTool\nfrom langchain.tools.vectorstore.tool import (\n VectorStoreQATool,\n VectorStoreQAWithSourcesTool,\n)\nfrom langchain.vectorstores.base import VectorStore\n[docs]class VectorStoreInfo(BaseModel):\n \"\"\"Information about a vectorstore.\"\"\"\n vectorstore: VectorStore = Field(exclude=True)\n name: str\n description: str\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs]class VectorStoreToolkit(BaseToolkit):\n \"\"\"Toolkit for interacting with a vector store.\"\"\"\n vectorstore_info: VectorStoreInfo = Field(exclude=True)\n llm: BaseLanguageModel = Field(default_factory=lambda: OpenAI(temperature=0))\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n description = VectorStoreQATool.get_description(\n self.vectorstore_info.name, self.vectorstore_info.description\n )\n qa_tool = VectorStoreQATool(\n name=self.vectorstore_info.name,\n description=description,\n vectorstore=self.vectorstore_info.vectorstore,\n llm=self.llm,\n )\n description = VectorStoreQAWithSourcesTool.get_description(\n self.vectorstore_info.name, self.vectorstore_info.description\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/vectorstore/toolkit.html"}
+{"id": "fca16cd97652-1", "text": "self.vectorstore_info.name, self.vectorstore_info.description\n )\n qa_with_sources_tool = VectorStoreQAWithSourcesTool(\n name=f\"{self.vectorstore_info.name}_with_sources\",\n description=description,\n vectorstore=self.vectorstore_info.vectorstore,\n llm=self.llm,\n )\n return [qa_tool, qa_with_sources_tool]\n[docs]class VectorStoreRouterToolkit(BaseToolkit):\n \"\"\"Toolkit for routing between vectorstores.\"\"\"\n vectorstores: List[VectorStoreInfo] = Field(exclude=True)\n llm: BaseLanguageModel = Field(default_factory=lambda: OpenAI(temperature=0))\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n tools: List[BaseTool] = []\n for vectorstore_info in self.vectorstores:\n description = VectorStoreQATool.get_description(\n vectorstore_info.name, vectorstore_info.description\n )\n qa_tool = VectorStoreQATool(\n name=vectorstore_info.name,\n description=description,\n vectorstore=vectorstore_info.vectorstore,\n llm=self.llm,\n )\n tools.append(qa_tool)\n return tools\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/vectorstore/toolkit.html"}
+{"id": "93be96bd29fd-0", "text": "Source code for langchain.agents.agent_toolkits.python.base\n\"\"\"Python agent.\"\"\"\nfrom typing import Any, Dict, Optional\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.python.prompt import PREFIX\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.tools.python.tool import PythonREPLTool\n[docs]def create_python_agent(\n llm: BaseLanguageModel,\n tool: PythonREPLTool,\n callback_manager: Optional[BaseCallbackManager] = None,\n verbose: bool = False,\n prefix: str = PREFIX,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a python agent from an LLM and tool.\"\"\"\n tools = [tool]\n prompt = ZeroShotAgent.create_prompt(tools, prefix=prefix)\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n **(agent_executor_kwargs or {}),\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/python/base.html"}
+{"id": "1ec09efa4e01-0", "text": "Source code for langchain.agents.agent_toolkits.json.base\n\"\"\"Json agent.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.json.prompt import JSON_PREFIX, JSON_SUFFIX\nfrom langchain.agents.agent_toolkits.json.toolkit import JsonToolkit\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\n[docs]def create_json_agent(\n llm: BaseLanguageModel,\n toolkit: JsonToolkit,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = JSON_PREFIX,\n suffix: str = JSON_SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n verbose: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a json agent from an LLM and tools.\"\"\"\n tools = toolkit.get_tools()\n prompt = ZeroShotAgent.create_prompt(\n tools,\n prefix=prefix,\n suffix=suffix,\n format_instructions=format_instructions,\n input_variables=input_variables,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)\n return AgentExecutor.from_agent_and_tools(", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/json/base.html"}
+{"id": "1ec09efa4e01-1", "text": "return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n **(agent_executor_kwargs or {}),\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/json/base.html"}
+{"id": "e124274d62b3-0", "text": "Source code for langchain.agents.agent_toolkits.json.toolkit\n\"\"\"Toolkit for interacting with a JSON spec.\"\"\"\nfrom __future__ import annotations\nfrom typing import List\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.tools import BaseTool\nfrom langchain.tools.json.tool import JsonGetValueTool, JsonListKeysTool, JsonSpec\n[docs]class JsonToolkit(BaseToolkit):\n \"\"\"Toolkit for interacting with a JSON spec.\"\"\"\n spec: JsonSpec\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n return [\n JsonListKeysTool(spec=self.spec),\n JsonGetValueTool(spec=self.spec),\n ]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/json/toolkit.html"}
+{"id": "97188a1c52ae-0", "text": "Source code for langchain.agents.agent_toolkits.zapier.toolkit\n\"\"\"Zapier Toolkit.\"\"\"\nfrom typing import List\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.tools import BaseTool\nfrom langchain.tools.zapier.tool import ZapierNLARunAction\nfrom langchain.utilities.zapier import ZapierNLAWrapper\n[docs]class ZapierToolkit(BaseToolkit):\n \"\"\"Zapier Toolkit.\"\"\"\n tools: List[BaseTool] = []\n[docs] @classmethod\n def from_zapier_nla_wrapper(\n cls, zapier_nla_wrapper: ZapierNLAWrapper\n ) -> \"ZapierToolkit\":\n \"\"\"Create a toolkit from a ZapierNLAWrapper.\"\"\"\n actions = zapier_nla_wrapper.list()\n tools = [\n ZapierNLARunAction(\n action_id=action[\"id\"],\n zapier_description=action[\"description\"],\n params_schema=action[\"params\"],\n api_wrapper=zapier_nla_wrapper,\n )\n for action in actions\n ]\n return cls(tools=tools)\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n return self.tools\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/zapier/toolkit.html"}
+{"id": "33f9de32e719-0", "text": "Source code for langchain.agents.agent_toolkits.pandas.base\n\"\"\"Agent for working with pandas objects.\"\"\"\nfrom typing import Any, Dict, List, Optional, Tuple\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.pandas.prompt import (\n MULTI_DF_PREFIX,\n PREFIX,\n SUFFIX_NO_DF,\n SUFFIX_WITH_DF,\n SUFFIX_WITH_MULTI_DF,\n)\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.tools.python.tool import PythonAstREPLTool\ndef _get_multi_prompt(\n dfs: List[Any],\n prefix: Optional[str] = None,\n suffix: Optional[str] = None,\n input_variables: Optional[List[str]] = None,\n include_df_in_prompt: Optional[bool] = True,\n) -> Tuple[BasePromptTemplate, List[PythonAstREPLTool]]:\n num_dfs = len(dfs)\n if suffix is not None:\n suffix_to_use = suffix\n include_dfs_head = True\n elif include_df_in_prompt:\n suffix_to_use = SUFFIX_WITH_MULTI_DF\n include_dfs_head = True\n else:\n suffix_to_use = SUFFIX_NO_DF\n include_dfs_head = False\n if input_variables is None:\n input_variables = [\"input\", \"agent_scratchpad\", \"num_dfs\"]\n if include_dfs_head:\n input_variables += [\"dfs_head\"]\n if prefix is None:\n prefix = MULTI_DF_PREFIX\n df_locals = {}\n for i, dataframe in enumerate(dfs):", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/pandas/base.html"}
+{"id": "33f9de32e719-1", "text": "df_locals = {}\n for i, dataframe in enumerate(dfs):\n df_locals[f\"df{i + 1}\"] = dataframe\n tools = [PythonAstREPLTool(locals=df_locals)]\n prompt = ZeroShotAgent.create_prompt(\n tools, prefix=prefix, suffix=suffix_to_use, input_variables=input_variables\n )\n partial_prompt = prompt.partial()\n if \"dfs_head\" in input_variables:\n dfs_head = \"\\n\\n\".join([d.head().to_markdown() for d in dfs])\n partial_prompt = partial_prompt.partial(num_dfs=str(num_dfs), dfs_head=dfs_head)\n if \"num_dfs\" in input_variables:\n partial_prompt = partial_prompt.partial(num_dfs=str(num_dfs))\n return partial_prompt, tools\ndef _get_single_prompt(\n df: Any,\n prefix: Optional[str] = None,\n suffix: Optional[str] = None,\n input_variables: Optional[List[str]] = None,\n include_df_in_prompt: Optional[bool] = True,\n) -> Tuple[BasePromptTemplate, List[PythonAstREPLTool]]:\n if suffix is not None:\n suffix_to_use = suffix\n include_df_head = True\n elif include_df_in_prompt:\n suffix_to_use = SUFFIX_WITH_DF\n include_df_head = True\n else:\n suffix_to_use = SUFFIX_NO_DF\n include_df_head = False\n if input_variables is None:\n input_variables = [\"input\", \"agent_scratchpad\"]\n if include_df_head:\n input_variables += [\"df_head\"]\n if prefix is None:\n prefix = PREFIX\n tools = [PythonAstREPLTool(locals={\"df\": df})]\n prompt = ZeroShotAgent.create_prompt(", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/pandas/base.html"}
+{"id": "33f9de32e719-2", "text": "prompt = ZeroShotAgent.create_prompt(\n tools, prefix=prefix, suffix=suffix_to_use, input_variables=input_variables\n )\n partial_prompt = prompt.partial()\n if \"df_head\" in input_variables:\n partial_prompt = partial_prompt.partial(df_head=str(df.head().to_markdown()))\n return partial_prompt, tools\ndef _get_prompt_and_tools(\n df: Any,\n prefix: Optional[str] = None,\n suffix: Optional[str] = None,\n input_variables: Optional[List[str]] = None,\n include_df_in_prompt: Optional[bool] = True,\n) -> Tuple[BasePromptTemplate, List[PythonAstREPLTool]]:\n try:\n import pandas as pd\n except ImportError:\n raise ValueError(\n \"pandas package not found, please install with `pip install pandas`\"\n )\n if include_df_in_prompt is not None and suffix is not None:\n raise ValueError(\"If suffix is specified, include_df_in_prompt should not be.\")\n if isinstance(df, list):\n for item in df:\n if not isinstance(item, pd.DataFrame):\n raise ValueError(f\"Expected pandas object, got {type(df)}\")\n return _get_multi_prompt(\n df,\n prefix=prefix,\n suffix=suffix,\n input_variables=input_variables,\n include_df_in_prompt=include_df_in_prompt,\n )\n else:\n if not isinstance(df, pd.DataFrame):\n raise ValueError(f\"Expected pandas object, got {type(df)}\")\n return _get_single_prompt(\n df,\n prefix=prefix,\n suffix=suffix,\n input_variables=input_variables,\n include_df_in_prompt=include_df_in_prompt,\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/pandas/base.html"}
+{"id": "33f9de32e719-3", "text": "include_df_in_prompt=include_df_in_prompt,\n )\n[docs]def create_pandas_dataframe_agent(\n llm: BaseLanguageModel,\n df: Any,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: Optional[str] = None,\n suffix: Optional[str] = None,\n input_variables: Optional[List[str]] = None,\n verbose: bool = False,\n return_intermediate_steps: bool = False,\n max_iterations: Optional[int] = 15,\n max_execution_time: Optional[float] = None,\n early_stopping_method: str = \"force\",\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n include_df_in_prompt: Optional[bool] = True,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a pandas agent from an LLM and dataframe.\"\"\"\n prompt, tools = _get_prompt_and_tools(\n df,\n prefix=prefix,\n suffix=suffix,\n input_variables=input_variables,\n include_df_in_prompt=include_df_in_prompt,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n callback_manager=callback_manager,\n **kwargs,\n )\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n return_intermediate_steps=return_intermediate_steps,\n max_iterations=max_iterations,", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/pandas/base.html"}
+{"id": "33f9de32e719-4", "text": "return_intermediate_steps=return_intermediate_steps,\n max_iterations=max_iterations,\n max_execution_time=max_execution_time,\n early_stopping_method=early_stopping_method,\n **(agent_executor_kwargs or {}),\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/pandas/base.html"}
+{"id": "34bfc966d2f0-0", "text": "Source code for langchain.agents.agent_toolkits.nla.toolkit\n\"\"\"Toolkit for interacting with API's using natural language.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, List, Optional, Sequence\nfrom pydantic import Field\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.agents.agent_toolkits.nla.tool import NLATool\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.requests import Requests\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.openapi.utils.openapi_utils import OpenAPISpec\nfrom langchain.tools.plugin import AIPlugin\n[docs]class NLAToolkit(BaseToolkit):\n \"\"\"Natural Language API Toolkit Definition.\"\"\"\n nla_tools: Sequence[NLATool] = Field(...)\n \"\"\"List of API Endpoint Tools.\"\"\"\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools for all the API operations.\"\"\"\n return list(self.nla_tools)\n @staticmethod\n def _get_http_operation_tools(\n llm: BaseLanguageModel,\n spec: OpenAPISpec,\n requests: Optional[Requests] = None,\n verbose: bool = False,\n **kwargs: Any,\n ) -> List[NLATool]:\n \"\"\"Get the tools for all the API operations.\"\"\"\n if not spec.paths:\n return []\n http_operation_tools = []\n for path in spec.paths:\n for method in spec.get_methods_for_path(path):\n endpoint_tool = NLATool.from_llm_and_method(\n llm=llm,\n path=path,\n method=method,\n spec=spec,\n requests=requests,\n verbose=verbose,\n **kwargs,\n )\n http_operation_tools.append(endpoint_tool)\n return http_operation_tools", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/nla/toolkit.html"}
+{"id": "34bfc966d2f0-1", "text": ")\n http_operation_tools.append(endpoint_tool)\n return http_operation_tools\n[docs] @classmethod\n def from_llm_and_spec(\n cls,\n llm: BaseLanguageModel,\n spec: OpenAPISpec,\n requests: Optional[Requests] = None,\n verbose: bool = False,\n **kwargs: Any,\n ) -> NLAToolkit:\n \"\"\"Instantiate the toolkit by creating tools for each operation.\"\"\"\n http_operation_tools = cls._get_http_operation_tools(\n llm=llm, spec=spec, requests=requests, verbose=verbose, **kwargs\n )\n return cls(nla_tools=http_operation_tools)\n[docs] @classmethod\n def from_llm_and_url(\n cls,\n llm: BaseLanguageModel,\n open_api_url: str,\n requests: Optional[Requests] = None,\n verbose: bool = False,\n **kwargs: Any,\n ) -> NLAToolkit:\n \"\"\"Instantiate the toolkit from an OpenAPI Spec URL\"\"\"\n spec = OpenAPISpec.from_url(open_api_url)\n return cls.from_llm_and_spec(\n llm=llm, spec=spec, requests=requests, verbose=verbose, **kwargs\n )\n[docs] @classmethod\n def from_llm_and_ai_plugin(\n cls,\n llm: BaseLanguageModel,\n ai_plugin: AIPlugin,\n requests: Optional[Requests] = None,\n verbose: bool = False,\n **kwargs: Any,\n ) -> NLAToolkit:\n \"\"\"Instantiate the toolkit from an OpenAPI Spec URL\"\"\"\n spec = OpenAPISpec.from_url(ai_plugin.api.url)", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/nla/toolkit.html"}
+{"id": "34bfc966d2f0-2", "text": "spec = OpenAPISpec.from_url(ai_plugin.api.url)\n # TODO: Merge optional Auth information with the `requests` argument\n return cls.from_llm_and_spec(\n llm=llm,\n spec=spec,\n requests=requests,\n verbose=verbose,\n **kwargs,\n )\n[docs] @classmethod\n def from_llm_and_ai_plugin_url(\n cls,\n llm: BaseLanguageModel,\n ai_plugin_url: str,\n requests: Optional[Requests] = None,\n verbose: bool = False,\n **kwargs: Any,\n ) -> NLAToolkit:\n \"\"\"Instantiate the toolkit from an OpenAPI Spec URL\"\"\"\n plugin = AIPlugin.from_url(ai_plugin_url)\n return cls.from_llm_and_ai_plugin(\n llm=llm, ai_plugin=plugin, requests=requests, verbose=verbose, **kwargs\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/nla/toolkit.html"}
+{"id": "4bf2393d4f81-0", "text": "Source code for langchain.agents.agent_toolkits.jira.toolkit\n\"\"\"Jira Toolkit.\"\"\"\nfrom typing import List\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.tools import BaseTool\nfrom langchain.tools.jira.tool import JiraAction\nfrom langchain.utilities.jira import JiraAPIWrapper\n[docs]class JiraToolkit(BaseToolkit):\n \"\"\"Jira Toolkit.\"\"\"\n tools: List[BaseTool] = []\n[docs] @classmethod\n def from_jira_api_wrapper(cls, jira_api_wrapper: JiraAPIWrapper) -> \"JiraToolkit\":\n actions = jira_api_wrapper.list()\n tools = [\n JiraAction(\n name=action[\"name\"],\n description=action[\"description\"],\n mode=action[\"mode\"],\n api_wrapper=jira_api_wrapper,\n )\n for action in actions\n ]\n return cls(tools=tools)\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n return self.tools\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/jira/toolkit.html"}
+{"id": "0e9332918c23-0", "text": "Source code for langchain.agents.agent_toolkits.file_management.toolkit\n\"\"\"Toolkit for interacting with the local filesystem.\"\"\"\nfrom __future__ import annotations\nfrom typing import List, Optional\nfrom pydantic import root_validator\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.tools import BaseTool\nfrom langchain.tools.file_management.copy import CopyFileTool\nfrom langchain.tools.file_management.delete import DeleteFileTool\nfrom langchain.tools.file_management.file_search import FileSearchTool\nfrom langchain.tools.file_management.list_dir import ListDirectoryTool\nfrom langchain.tools.file_management.move import MoveFileTool\nfrom langchain.tools.file_management.read import ReadFileTool\nfrom langchain.tools.file_management.write import WriteFileTool\n_FILE_TOOLS = {\n tool_cls.__fields__[\"name\"].default: tool_cls\n for tool_cls in [\n CopyFileTool,\n DeleteFileTool,\n FileSearchTool,\n MoveFileTool,\n ReadFileTool,\n WriteFileTool,\n ListDirectoryTool,\n ]\n}\n[docs]class FileManagementToolkit(BaseToolkit):\n \"\"\"Toolkit for interacting with a Local Files.\"\"\"\n root_dir: Optional[str] = None\n \"\"\"If specified, all file operations are made relative to root_dir.\"\"\"\n selected_tools: Optional[List[str]] = None\n \"\"\"If provided, only provide the selected tools. Defaults to all.\"\"\"\n @root_validator\n def validate_tools(cls, values: dict) -> dict:\n selected_tools = values.get(\"selected_tools\") or []\n for tool_name in selected_tools:\n if tool_name not in _FILE_TOOLS:\n raise ValueError(\n f\"File Tool of name {tool_name} not supported.\"\n f\" Permitted tools: {list(_FILE_TOOLS)}\"\n )\n return values", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/file_management/toolkit.html"}
+{"id": "0e9332918c23-1", "text": ")\n return values\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n allowed_tools = self.selected_tools or _FILE_TOOLS.keys()\n tools: List[BaseTool] = []\n for tool in allowed_tools:\n tool_cls = _FILE_TOOLS[tool]\n tools.append(tool_cls(root_dir=self.root_dir)) # type: ignore\n return tools\n__all__ = [\"FileManagementToolkit\"]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/file_management/toolkit.html"}
+{"id": "b4b591fdb38d-0", "text": "Source code for langchain.agents.agent_toolkits.gmail.toolkit\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, List\nfrom pydantic import Field\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.tools import BaseTool\nfrom langchain.tools.gmail.create_draft import GmailCreateDraft\nfrom langchain.tools.gmail.get_message import GmailGetMessage\nfrom langchain.tools.gmail.get_thread import GmailGetThread\nfrom langchain.tools.gmail.search import GmailSearch\nfrom langchain.tools.gmail.send_message import GmailSendMessage\nfrom langchain.tools.gmail.utils import build_resource_service\nif TYPE_CHECKING:\n # This is for linting and IDE typehints\n from googleapiclient.discovery import Resource\nelse:\n try:\n # We do this so pydantic can resolve the types when instantiating\n from googleapiclient.discovery import Resource\n except ImportError:\n pass\nSCOPES = [\"https://mail.google.com/\"]\n[docs]class GmailToolkit(BaseToolkit):\n \"\"\"Toolkit for interacting with Gmail.\"\"\"\n api_resource: Resource = Field(default_factory=build_resource_service)\n class Config:\n \"\"\"Pydantic config.\"\"\"\n arbitrary_types_allowed = True\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n return [\n GmailCreateDraft(api_resource=self.api_resource),\n GmailSendMessage(api_resource=self.api_resource),\n GmailSearch(api_resource=self.api_resource),\n GmailGetMessage(api_resource=self.api_resource),\n GmailGetThread(api_resource=self.api_resource),\n ]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/gmail/toolkit.html"}
+{"id": "ab354669c51c-0", "text": "Source code for langchain.agents.agent_toolkits.openapi.base\n\"\"\"OpenAPI spec agent.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.openapi.prompt import (\n OPENAPI_PREFIX,\n OPENAPI_SUFFIX,\n)\nfrom langchain.agents.agent_toolkits.openapi.toolkit import OpenAPIToolkit\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\n[docs]def create_openapi_agent(\n llm: BaseLanguageModel,\n toolkit: OpenAPIToolkit,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = OPENAPI_PREFIX,\n suffix: str = OPENAPI_SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n max_iterations: Optional[int] = 15,\n max_execution_time: Optional[float] = None,\n early_stopping_method: str = \"force\",\n verbose: bool = False,\n return_intermediate_steps: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a json agent from an LLM and tools.\"\"\"\n tools = toolkit.get_tools()\n prompt = ZeroShotAgent.create_prompt(\n tools,\n prefix=prefix,\n suffix=suffix,\n format_instructions=format_instructions,\n input_variables=input_variables,\n )\n llm_chain = LLMChain(", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/base.html"}
+{"id": "ab354669c51c-1", "text": "input_variables=input_variables,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n return_intermediate_steps=return_intermediate_steps,\n max_iterations=max_iterations,\n max_execution_time=max_execution_time,\n early_stopping_method=early_stopping_method,\n **(agent_executor_kwargs or {}),\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/base.html"}
+{"id": "c58e52615b2f-0", "text": "Source code for langchain.agents.agent_toolkits.openapi.toolkit\n\"\"\"Requests toolkit.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, List\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.agents.agent_toolkits.json.base import create_json_agent\nfrom langchain.agents.agent_toolkits.json.toolkit import JsonToolkit\nfrom langchain.agents.agent_toolkits.openapi.prompt import DESCRIPTION\nfrom langchain.agents.tools import Tool\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.requests import TextRequestsWrapper\nfrom langchain.tools import BaseTool\nfrom langchain.tools.json.tool import JsonSpec\nfrom langchain.tools.requests.tool import (\n RequestsDeleteTool,\n RequestsGetTool,\n RequestsPatchTool,\n RequestsPostTool,\n RequestsPutTool,\n)\nclass RequestsToolkit(BaseToolkit):\n \"\"\"Toolkit for making requests.\"\"\"\n requests_wrapper: TextRequestsWrapper\n def get_tools(self) -> List[BaseTool]:\n \"\"\"Return a list of tools.\"\"\"\n return [\n RequestsGetTool(requests_wrapper=self.requests_wrapper),\n RequestsPostTool(requests_wrapper=self.requests_wrapper),\n RequestsPatchTool(requests_wrapper=self.requests_wrapper),\n RequestsPutTool(requests_wrapper=self.requests_wrapper),\n RequestsDeleteTool(requests_wrapper=self.requests_wrapper),\n ]\n[docs]class OpenAPIToolkit(BaseToolkit):\n \"\"\"Toolkit for interacting with a OpenAPI api.\"\"\"\n json_agent: AgentExecutor\n requests_wrapper: TextRequestsWrapper\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n json_agent_tool = Tool(\n name=\"json_explorer\",\n func=self.json_agent.run,\n description=DESCRIPTION,\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/toolkit.html"}
+{"id": "c58e52615b2f-1", "text": "func=self.json_agent.run,\n description=DESCRIPTION,\n )\n request_toolkit = RequestsToolkit(requests_wrapper=self.requests_wrapper)\n return [*request_toolkit.get_tools(), json_agent_tool]\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n json_spec: JsonSpec,\n requests_wrapper: TextRequestsWrapper,\n **kwargs: Any,\n ) -> OpenAPIToolkit:\n \"\"\"Create json agent from llm, then initialize.\"\"\"\n json_agent = create_json_agent(llm, JsonToolkit(spec=json_spec), **kwargs)\n return cls(json_agent=json_agent, requests_wrapper=requests_wrapper)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/toolkit.html"}
+{"id": "0124b7e86a51-0", "text": "Source code for langchain.agents.agent_toolkits.sql.base\n\"\"\"SQL agent.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.sql.prompt import SQL_PREFIX, SQL_SUFFIX\nfrom langchain.agents.agent_toolkits.sql.toolkit import SQLDatabaseToolkit\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\n[docs]def create_sql_agent(\n llm: BaseLanguageModel,\n toolkit: SQLDatabaseToolkit,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = SQL_PREFIX,\n suffix: str = SQL_SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n top_k: int = 10,\n max_iterations: Optional[int] = 15,\n max_execution_time: Optional[float] = None,\n early_stopping_method: str = \"force\",\n verbose: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a sql agent from an LLM and tools.\"\"\"\n tools = toolkit.get_tools()\n prefix = prefix.format(dialect=toolkit.dialect, top_k=top_k)\n prompt = ZeroShotAgent.create_prompt(\n tools,\n prefix=prefix,\n suffix=suffix,\n format_instructions=format_instructions,\n input_variables=input_variables,\n )\n llm_chain = LLMChain(\n llm=llm,", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/sql/base.html"}
+{"id": "0124b7e86a51-1", "text": "llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n max_iterations=max_iterations,\n max_execution_time=max_execution_time,\n early_stopping_method=early_stopping_method,\n **(agent_executor_kwargs or {}),\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/sql/base.html"}
+{"id": "b1aa22658a2d-0", "text": "Source code for langchain.agents.agent_toolkits.sql.toolkit\n\"\"\"Toolkit for interacting with a SQL database.\"\"\"\nfrom typing import List\nfrom pydantic import Field\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.sql_database import SQLDatabase\nfrom langchain.tools import BaseTool\nfrom langchain.tools.sql_database.tool import (\n InfoSQLDatabaseTool,\n ListSQLDatabaseTool,\n QueryCheckerTool,\n QuerySQLDataBaseTool,\n)\n[docs]class SQLDatabaseToolkit(BaseToolkit):\n \"\"\"Toolkit for interacting with SQL databases.\"\"\"\n db: SQLDatabase = Field(exclude=True)\n llm: BaseLanguageModel = Field(exclude=True)\n @property\n def dialect(self) -> str:\n \"\"\"Return string representation of dialect to use.\"\"\"\n return self.db.dialect\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n query_sql_database_tool_description = (\n \"Input to this tool is a detailed and correct SQL query, output is a \"\n \"result from the database. If the query is not correct, an error message \"\n \"will be returned. If an error is returned, rewrite the query, check the \"\n \"query, and try again. If you encounter an issue with Unknown column \"\n \"'xxxx' in 'field list', using schema_sql_db to query the correct table \"\n \"fields.\"\n )\n info_sql_database_tool_description = (\n \"Input to this tool is a comma-separated list of tables, output is the \"\n \"schema and sample rows for those tables. \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/sql/toolkit.html"}
+{"id": "b1aa22658a2d-1", "text": "\"schema and sample rows for those tables. \"\n \"Be sure that the tables actually exist by calling list_tables_sql_db \"\n \"first! Example Input: 'table1, table2, table3'\"\n )\n return [\n QuerySQLDataBaseTool(\n db=self.db, description=query_sql_database_tool_description\n ),\n InfoSQLDatabaseTool(\n db=self.db, description=info_sql_database_tool_description\n ),\n ListSQLDatabaseTool(db=self.db),\n QueryCheckerTool(db=self.db, llm=self.llm),\n ]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/sql/toolkit.html"}
+{"id": "c5ec7823547b-0", "text": "Source code for langchain.agents.agent_toolkits.powerbi.base\n\"\"\"Power BI agent.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents import AgentExecutor\nfrom langchain.agents.agent_toolkits.powerbi.prompt import (\n POWERBI_PREFIX,\n POWERBI_SUFFIX,\n)\nfrom langchain.agents.agent_toolkits.powerbi.toolkit import PowerBIToolkit\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.utilities.powerbi import PowerBIDataset\n[docs]def create_pbi_agent(\n llm: BaseLanguageModel,\n toolkit: Optional[PowerBIToolkit],\n powerbi: Optional[PowerBIDataset] = None,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = POWERBI_PREFIX,\n suffix: str = POWERBI_SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n examples: Optional[str] = None,\n input_variables: Optional[List[str]] = None,\n top_k: int = 10,\n verbose: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a pbi agent from an LLM and tools.\"\"\"\n if toolkit is None:\n if powerbi is None:\n raise ValueError(\"Must provide either a toolkit or powerbi dataset\")\n toolkit = PowerBIToolkit(powerbi=powerbi, llm=llm, examples=examples)\n tools = toolkit.get_tools()", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/powerbi/base.html"}
+{"id": "c5ec7823547b-1", "text": "tools = toolkit.get_tools()\n agent = ZeroShotAgent(\n llm_chain=LLMChain(\n llm=llm,\n prompt=ZeroShotAgent.create_prompt(\n tools,\n prefix=prefix.format(top_k=top_k),\n suffix=suffix,\n format_instructions=format_instructions,\n input_variables=input_variables,\n ),\n callback_manager=callback_manager, # type: ignore\n verbose=verbose,\n ),\n allowed_tools=[tool.name for tool in tools],\n **kwargs,\n )\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n **(agent_executor_kwargs or {}),\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/powerbi/base.html"}
+{"id": "3863e666c185-0", "text": "Source code for langchain.agents.agent_toolkits.powerbi.toolkit\n\"\"\"Toolkit for interacting with a Power BI dataset.\"\"\"\nfrom typing import List, Optional\nfrom pydantic import Field\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts import PromptTemplate\nfrom langchain.tools import BaseTool\nfrom langchain.tools.powerbi.prompt import QUESTION_TO_QUERY\nfrom langchain.tools.powerbi.tool import (\n InfoPowerBITool,\n ListPowerBITool,\n QueryPowerBITool,\n)\nfrom langchain.utilities.powerbi import PowerBIDataset\n[docs]class PowerBIToolkit(BaseToolkit):\n \"\"\"Toolkit for interacting with PowerBI dataset.\"\"\"\n powerbi: PowerBIDataset = Field(exclude=True)\n llm: BaseLanguageModel = Field(exclude=True)\n examples: Optional[str] = None\n max_iterations: int = 5\n callback_manager: Optional[BaseCallbackManager] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n if self.callback_manager:\n chain = LLMChain(\n llm=self.llm,\n callback_manager=self.callback_manager,\n prompt=PromptTemplate(\n template=QUESTION_TO_QUERY,\n input_variables=[\"tool_input\", \"tables\", \"schemas\", \"examples\"],\n ),\n )\n else:\n chain = LLMChain(\n llm=self.llm,\n prompt=PromptTemplate(\n template=QUESTION_TO_QUERY,", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/powerbi/toolkit.html"}
+{"id": "3863e666c185-1", "text": "prompt=PromptTemplate(\n template=QUESTION_TO_QUERY,\n input_variables=[\"tool_input\", \"tables\", \"schemas\", \"examples\"],\n ),\n )\n return [\n QueryPowerBITool(\n llm_chain=chain,\n powerbi=self.powerbi,\n examples=self.examples,\n max_iterations=self.max_iterations,\n ),\n InfoPowerBITool(powerbi=self.powerbi),\n ListPowerBITool(powerbi=self.powerbi),\n ]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/powerbi/toolkit.html"}
+{"id": "f09268cdea5b-0", "text": "Source code for langchain.agents.agent_toolkits.powerbi.chat_base\n\"\"\"Power BI agent.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents import AgentExecutor\nfrom langchain.agents.agent import AgentOutputParser\nfrom langchain.agents.agent_toolkits.powerbi.prompt import (\n POWERBI_CHAT_PREFIX,\n POWERBI_CHAT_SUFFIX,\n)\nfrom langchain.agents.agent_toolkits.powerbi.toolkit import PowerBIToolkit\nfrom langchain.agents.conversational_chat.base import ConversationalChatAgent\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.memory import ConversationBufferMemory\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.utilities.powerbi import PowerBIDataset\n[docs]def create_pbi_chat_agent(\n llm: BaseChatModel,\n toolkit: Optional[PowerBIToolkit],\n powerbi: Optional[PowerBIDataset] = None,\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n prefix: str = POWERBI_CHAT_PREFIX,\n suffix: str = POWERBI_CHAT_SUFFIX,\n examples: Optional[str] = None,\n input_variables: Optional[List[str]] = None,\n memory: Optional[BaseChatMemory] = None,\n top_k: int = 10,\n verbose: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a pbi agent from an Chat LLM and tools.\n If you supply only a toolkit and no powerbi dataset, the same LLM is used for both.\n \"\"\"\n if toolkit is None:", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/powerbi/chat_base.html"}
+{"id": "f09268cdea5b-1", "text": "\"\"\"\n if toolkit is None:\n if powerbi is None:\n raise ValueError(\"Must provide either a toolkit or powerbi dataset\")\n toolkit = PowerBIToolkit(powerbi=powerbi, llm=llm, examples=examples)\n tools = toolkit.get_tools()\n agent = ConversationalChatAgent.from_llm_and_tools(\n llm=llm,\n tools=tools,\n system_message=prefix.format(top_k=top_k),\n human_message=suffix,\n input_variables=input_variables,\n callback_manager=callback_manager,\n output_parser=output_parser,\n verbose=verbose,\n **kwargs,\n )\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n memory=memory\n or ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True),\n verbose=verbose,\n **(agent_executor_kwargs or {}),\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/powerbi/chat_base.html"}
+{"id": "9870d6e76239-0", "text": "Source code for langchain.agents.structured_chat.base\nimport re\nfrom typing import Any, List, Optional, Sequence, Tuple\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentOutputParser\nfrom langchain.agents.structured_chat.output_parser import (\n StructuredChatOutputParserWithRetries,\n)\nfrom langchain.agents.structured_chat.prompt import FORMAT_INSTRUCTIONS, PREFIX, SUFFIX\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.chat import (\n ChatPromptTemplate,\n HumanMessagePromptTemplate,\n SystemMessagePromptTemplate,\n)\nfrom langchain.schema import AgentAction\nfrom langchain.tools import BaseTool\nHUMAN_MESSAGE_TEMPLATE = \"{input}\\n\\n{agent_scratchpad}\"\n[docs]class StructuredChatAgent(Agent):\n output_parser: AgentOutputParser = Field(\n default_factory=StructuredChatOutputParserWithRetries\n )\n @property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Observation: \"\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the llm call with.\"\"\"\n return \"Thought:\"\n def _construct_scratchpad(\n self, intermediate_steps: List[Tuple[AgentAction, str]]\n ) -> str:\n agent_scratchpad = super()._construct_scratchpad(intermediate_steps)\n if not isinstance(agent_scratchpad, str):\n raise ValueError(\"agent_scratchpad should be of type string.\")\n if agent_scratchpad:\n return (\n f\"This was your previous work \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/structured_chat/base.html"}
+{"id": "9870d6e76239-1", "text": "return (\n f\"This was your previous work \"\n f\"(but I haven't seen any of it! I only see what \"\n f\"you return as final answer):\\n{agent_scratchpad}\"\n )\n else:\n return agent_scratchpad\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n pass\n @classmethod\n def _get_default_output_parser(\n cls, llm: Optional[BaseLanguageModel] = None, **kwargs: Any\n ) -> AgentOutputParser:\n return StructuredChatOutputParserWithRetries.from_llm(llm=llm)\n @property\n def _stop(self) -> List[str]:\n return [\"Observation:\"]\n[docs] @classmethod\n def create_prompt(\n cls,\n tools: Sequence[BaseTool],\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n human_message_template: str = HUMAN_MESSAGE_TEMPLATE,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n memory_prompts: Optional[List[BasePromptTemplate]] = None,\n ) -> BasePromptTemplate:\n tool_strings = []\n for tool in tools:\n args_schema = re.sub(\"}\", \"}}}}\", re.sub(\"{\", \"{{{{\", str(tool.args)))\n tool_strings.append(f\"{tool.name}: {tool.description}, args: {args_schema}\")\n formatted_tools = \"\\n\".join(tool_strings)\n tool_names = \", \".join([tool.name for tool in tools])\n format_instructions = format_instructions.format(tool_names=tool_names)\n template = \"\\n\\n\".join([prefix, formatted_tools, format_instructions, suffix])", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/structured_chat/base.html"}
+{"id": "9870d6e76239-2", "text": "template = \"\\n\\n\".join([prefix, formatted_tools, format_instructions, suffix])\n if input_variables is None:\n input_variables = [\"input\", \"agent_scratchpad\"]\n _memory_prompts = memory_prompts or []\n messages = [\n SystemMessagePromptTemplate.from_template(template),\n *_memory_prompts,\n HumanMessagePromptTemplate.from_template(human_message_template),\n ]\n return ChatPromptTemplate(input_variables=input_variables, messages=messages)\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n human_message_template: str = HUMAN_MESSAGE_TEMPLATE,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n memory_prompts: Optional[List[BasePromptTemplate]] = None,\n **kwargs: Any,\n ) -> Agent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n cls._validate_tools(tools)\n prompt = cls.create_prompt(\n tools,\n prefix=prefix,\n suffix=suffix,\n human_message_template=human_message_template,\n format_instructions=format_instructions,\n input_variables=input_variables,\n memory_prompts=memory_prompts,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/structured_chat/base.html"}
+{"id": "9870d6e76239-3", "text": ")\n tool_names = [tool.name for tool in tools]\n _output_parser = output_parser or cls._get_default_output_parser(llm=llm)\n return cls(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n output_parser=_output_parser,\n **kwargs,\n )\n @property\n def _agent_type(self) -> str:\n raise ValueError\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/structured_chat/base.html"}
+{"id": "87d05810d9bb-0", "text": "Source code for langchain.agents.self_ask_with_search.base\n\"\"\"Chain that does self ask with search.\"\"\"\nfrom typing import Any, Sequence, Union\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentExecutor, AgentOutputParser\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.self_ask_with_search.output_parser import SelfAskOutputParser\nfrom langchain.agents.self_ask_with_search.prompt import PROMPT\nfrom langchain.agents.tools import Tool\nfrom langchain.agents.utils import validate_tools_single_input\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.google_serper import GoogleSerperAPIWrapper\nfrom langchain.utilities.serpapi import SerpAPIWrapper\nclass SelfAskWithSearchAgent(Agent):\n \"\"\"Agent for the self-ask-with-search paper.\"\"\"\n output_parser: AgentOutputParser = Field(default_factory=SelfAskOutputParser)\n @classmethod\n def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:\n return SelfAskOutputParser()\n @property\n def _agent_type(self) -> str:\n \"\"\"Return Identifier of agent type.\"\"\"\n return AgentType.SELF_ASK_WITH_SEARCH\n @classmethod\n def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:\n \"\"\"Prompt does not depend on tools.\"\"\"\n return PROMPT\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n validate_tools_single_input(cls.__name__, tools)\n super()._validate_tools(tools)\n if len(tools) != 1:\n raise ValueError(f\"Exactly one tool must be specified, but got {tools}\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/self_ask_with_search/base.html"}
+{"id": "87d05810d9bb-1", "text": "raise ValueError(f\"Exactly one tool must be specified, but got {tools}\")\n tool_names = {tool.name for tool in tools}\n if tool_names != {\"Intermediate Answer\"}:\n raise ValueError(\n f\"Tool name should be Intermediate Answer, got {tool_names}\"\n )\n @property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Intermediate answer: \"\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the LLM call with.\"\"\"\n return \"\"\n[docs]class SelfAskWithSearchChain(AgentExecutor):\n \"\"\"Chain that does self ask with search.\n Example:\n .. code-block:: python\n from langchain import SelfAskWithSearchChain, OpenAI, GoogleSerperAPIWrapper\n search_chain = GoogleSerperAPIWrapper()\n self_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain)\n \"\"\"\n def __init__(\n self,\n llm: BaseLanguageModel,\n search_chain: Union[GoogleSerperAPIWrapper, SerpAPIWrapper],\n **kwargs: Any,\n ):\n \"\"\"Initialize with just an LLM and a search chain.\"\"\"\n search_tool = Tool(\n name=\"Intermediate Answer\",\n func=search_chain.run,\n coroutine=search_chain.arun,\n description=\"Search\",\n )\n agent = SelfAskWithSearchAgent.from_llm_and_tools(llm, [search_tool])\n super().__init__(agent=agent, tools=[search_tool], **kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/self_ask_with_search/base.html"}
+{"id": "aa35e0cff4a1-0", "text": "Source code for langchain.agents.conversational.base\n\"\"\"An agent designed to hold a conversation in addition to using tools.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, List, Optional, Sequence\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentOutputParser\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.conversational.output_parser import ConvoOutputParser\nfrom langchain.agents.conversational.prompt import FORMAT_INSTRUCTIONS, PREFIX, SUFFIX\nfrom langchain.agents.utils import validate_tools_single_input\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains import LLMChain\nfrom langchain.prompts import PromptTemplate\nfrom langchain.tools.base import BaseTool\n[docs]class ConversationalAgent(Agent):\n \"\"\"An agent designed to hold a conversation in addition to using tools.\"\"\"\n ai_prefix: str = \"AI\"\n output_parser: AgentOutputParser = Field(default_factory=ConvoOutputParser)\n @classmethod\n def _get_default_output_parser(\n cls, ai_prefix: str = \"AI\", **kwargs: Any\n ) -> AgentOutputParser:\n return ConvoOutputParser(ai_prefix=ai_prefix)\n @property\n def _agent_type(self) -> str:\n \"\"\"Return Identifier of agent type.\"\"\"\n return AgentType.CONVERSATIONAL_REACT_DESCRIPTION\n @property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Observation: \"\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the llm call with.\"\"\"\n return \"Thought:\"\n[docs] @classmethod\n def create_prompt(\n cls,", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/conversational/base.html"}
+{"id": "aa35e0cff4a1-1", "text": "[docs] @classmethod\n def create_prompt(\n cls,\n tools: Sequence[BaseTool],\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n ai_prefix: str = \"AI\",\n human_prefix: str = \"Human\",\n input_variables: Optional[List[str]] = None,\n ) -> PromptTemplate:\n \"\"\"Create prompt in the style of the zero shot agent.\n Args:\n tools: List of tools the agent will have access to, used to format the\n prompt.\n prefix: String to put before the list of tools.\n suffix: String to put after the list of tools.\n ai_prefix: String to use before AI output.\n human_prefix: String to use before human output.\n input_variables: List of input variables the final prompt will expect.\n Returns:\n A PromptTemplate with the template assembled from the pieces here.\n \"\"\"\n tool_strings = \"\\n\".join(\n [f\"> {tool.name}: {tool.description}\" for tool in tools]\n )\n tool_names = \", \".join([tool.name for tool in tools])\n format_instructions = format_instructions.format(\n tool_names=tool_names, ai_prefix=ai_prefix, human_prefix=human_prefix\n )\n template = \"\\n\\n\".join([prefix, tool_strings, format_instructions, suffix])\n if input_variables is None:\n input_variables = [\"input\", \"chat_history\", \"agent_scratchpad\"]\n return PromptTemplate(template=template, input_variables=input_variables)\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n super()._validate_tools(tools)\n validate_tools_single_input(cls.__name__, tools)", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/conversational/base.html"}
+{"id": "aa35e0cff4a1-2", "text": "validate_tools_single_input(cls.__name__, tools)\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n ai_prefix: str = \"AI\",\n human_prefix: str = \"Human\",\n input_variables: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> Agent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n cls._validate_tools(tools)\n prompt = cls.create_prompt(\n tools,\n ai_prefix=ai_prefix,\n human_prefix=human_prefix,\n prefix=prefix,\n suffix=suffix,\n format_instructions=format_instructions,\n input_variables=input_variables,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n _output_parser = output_parser or cls._get_default_output_parser(\n ai_prefix=ai_prefix\n )\n return cls(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n ai_prefix=ai_prefix,\n output_parser=_output_parser,\n **kwargs,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/conversational/base.html"}
+{"id": "0a6f6480406e-0", "text": "Source code for langchain.agents.react.base\n\"\"\"Chain that implements the ReAct paper from https://arxiv.org/pdf/2210.03629.pdf.\"\"\"\nfrom typing import Any, List, Optional, Sequence\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentExecutor, AgentOutputParser\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.react.output_parser import ReActOutputParser\nfrom langchain.agents.react.textworld_prompt import TEXTWORLD_PROMPT\nfrom langchain.agents.react.wiki_prompt import WIKI_PROMPT\nfrom langchain.agents.tools import Tool\nfrom langchain.agents.utils import validate_tools_single_input\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.docstore.base import Docstore\nfrom langchain.docstore.document import Document\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.tools.base import BaseTool\nclass ReActDocstoreAgent(Agent):\n \"\"\"Agent for the ReAct chain.\"\"\"\n output_parser: AgentOutputParser = Field(default_factory=ReActOutputParser)\n @classmethod\n def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:\n return ReActOutputParser()\n @property\n def _agent_type(self) -> str:\n \"\"\"Return Identifier of agent type.\"\"\"\n return AgentType.REACT_DOCSTORE\n @classmethod\n def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:\n \"\"\"Return default prompt.\"\"\"\n return WIKI_PROMPT\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n validate_tools_single_input(cls.__name__, tools)\n super()._validate_tools(tools)\n if len(tools) != 2:", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/react/base.html"}
+{"id": "0a6f6480406e-1", "text": "super()._validate_tools(tools)\n if len(tools) != 2:\n raise ValueError(f\"Exactly two tools must be specified, but got {tools}\")\n tool_names = {tool.name for tool in tools}\n if tool_names != {\"Lookup\", \"Search\"}:\n raise ValueError(\n f\"Tool names should be Lookup and Search, got {tool_names}\"\n )\n @property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Observation: \"\n @property\n def _stop(self) -> List[str]:\n return [\"\\nObservation:\"]\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the LLM call with.\"\"\"\n return \"Thought:\"\nclass DocstoreExplorer:\n \"\"\"Class to assist with exploration of a document store.\"\"\"\n def __init__(self, docstore: Docstore):\n \"\"\"Initialize with a docstore, and set initial document to None.\"\"\"\n self.docstore = docstore\n self.document: Optional[Document] = None\n self.lookup_str = \"\"\n self.lookup_index = 0\n def search(self, term: str) -> str:\n \"\"\"Search for a term in the docstore, and if found save.\"\"\"\n result = self.docstore.search(term)\n if isinstance(result, Document):\n self.document = result\n return self._summary\n else:\n self.document = None\n return result\n def lookup(self, term: str) -> str:\n \"\"\"Lookup a term in document (if saved).\"\"\"\n if self.document is None:\n raise ValueError(\"Cannot lookup without a successful search first\")\n if term.lower() != self.lookup_str:", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/react/base.html"}
+{"id": "0a6f6480406e-2", "text": "if term.lower() != self.lookup_str:\n self.lookup_str = term.lower()\n self.lookup_index = 0\n else:\n self.lookup_index += 1\n lookups = [p for p in self._paragraphs if self.lookup_str in p.lower()]\n if len(lookups) == 0:\n return \"No Results\"\n elif self.lookup_index >= len(lookups):\n return \"No More Results\"\n else:\n result_prefix = f\"(Result {self.lookup_index + 1}/{len(lookups)})\"\n return f\"{result_prefix} {lookups[self.lookup_index]}\"\n @property\n def _summary(self) -> str:\n return self._paragraphs[0]\n @property\n def _paragraphs(self) -> List[str]:\n if self.document is None:\n raise ValueError(\"Cannot get paragraphs without a document\")\n return self.document.page_content.split(\"\\n\\n\")\n[docs]class ReActTextWorldAgent(ReActDocstoreAgent):\n \"\"\"Agent for the ReAct TextWorld chain.\"\"\"\n[docs] @classmethod\n def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:\n \"\"\"Return default prompt.\"\"\"\n return TEXTWORLD_PROMPT\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n validate_tools_single_input(cls.__name__, tools)\n super()._validate_tools(tools)\n if len(tools) != 1:\n raise ValueError(f\"Exactly one tool must be specified, but got {tools}\")\n tool_names = {tool.name for tool in tools}\n if tool_names != {\"Play\"}:\n raise ValueError(f\"Tool name should be Play, got {tool_names}\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/react/base.html"}
+{"id": "0a6f6480406e-3", "text": "raise ValueError(f\"Tool name should be Play, got {tool_names}\")\n[docs]class ReActChain(AgentExecutor):\n \"\"\"Chain that implements the ReAct paper.\n Example:\n .. code-block:: python\n from langchain import ReActChain, OpenAI\n react = ReAct(llm=OpenAI())\n \"\"\"\n def __init__(self, llm: BaseLanguageModel, docstore: Docstore, **kwargs: Any):\n \"\"\"Initialize with the LLM and a docstore.\"\"\"\n docstore_explorer = DocstoreExplorer(docstore)\n tools = [\n Tool(\n name=\"Search\",\n func=docstore_explorer.search,\n description=\"Search for a term in the docstore.\",\n ),\n Tool(\n name=\"Lookup\",\n func=docstore_explorer.lookup,\n description=\"Lookup a term in the docstore.\",\n ),\n ]\n agent = ReActDocstoreAgent.from_llm_and_tools(llm, tools)\n super().__init__(agent=agent, tools=tools, **kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/agents/react/base.html"}
+{"id": "3b989edf2637-0", "text": "Source code for langchain.utilities.searx_search\n\"\"\"Utility for using SearxNG meta search API.\nSearxNG is a privacy-friendly free metasearch engine that aggregates results from\n`multiple search engines\n`_ and databases and\nsupports the `OpenSearch\n`_\nspecification.\nMore details on the installation instructions `here. <../../integrations/searx.html>`_\nFor the search API refer to https://docs.searxng.org/dev/search_api.html\nQuick Start\n-----------\nIn order to use this utility you need to provide the searx host. This can be done\nby passing the named parameter :attr:`searx_host `\nor exporting the environment variable SEARX_HOST.\nNote: this is the only required parameter.\nThen create a searx search instance like this:\n .. code-block:: python\n from langchain.utilities import SearxSearchWrapper\n # when the host starts with `http` SSL is disabled and the connection\n # is assumed to be on a private network\n searx_host='http://self.hosted'\n search = SearxSearchWrapper(searx_host=searx_host)\nYou can now use the ``search`` instance to query the searx API.\nSearching\n---------\nUse the :meth:`run() ` and\n:meth:`results() ` methods to query the searx API.\nOther methods are available for convenience.\n:class:`SearxResults` is a convenience wrapper around the raw json result.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"}
+{"id": "3b989edf2637-1", "text": ":class:`SearxResults` is a convenience wrapper around the raw json result.\nExample usage of the ``run`` method to make a search:\n .. code-block:: python\n s.run(query=\"what is the best search engine?\")\nEngine Parameters\n-----------------\nYou can pass any `accepted searx search API\n`_ parameters to the\n:py:class:`SearxSearchWrapper` instance.\nIn the following example we are using the\n:attr:`engines ` and the ``language`` parameters:\n .. code-block:: python\n # assuming the searx host is set as above or exported as an env variable\n s = SearxSearchWrapper(engines=['google', 'bing'],\n language='es')\nSearch Tips\n-----------\nSearx offers a special\n`search syntax `_\nthat can also be used instead of passing engine parameters.\nFor example the following query:\n .. code-block:: python\n s = SearxSearchWrapper(\"langchain library\", engines=['github'])\n # can also be written as:\n s = SearxSearchWrapper(\"langchain library !github\")\n # or even:\n s = SearxSearchWrapper(\"langchain library !gh\")\nIn some situations you might want to pass an extra string to the search query.\nFor example when the `run()` method is called by an agent. The search suffix can\nalso be used as a way to pass extra parameters to searx or the underlying search\nengines.\n .. code-block:: python\n # select the github engine and pass the search suffix", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"}
+{"id": "3b989edf2637-2", "text": ".. code-block:: python\n # select the github engine and pass the search suffix\n s = SearchWrapper(\"langchain library\", query_suffix=\"!gh\")\n s = SearchWrapper(\"langchain library\")\n # select github the conventional google search syntax\n s.run(\"large language models\", query_suffix=\"site:github.com\")\n*NOTE*: A search suffix can be defined on both the instance and the method level.\nThe resulting query will be the concatenation of the two with the former taking\nprecedence.\nSee `SearxNG Configured Engines\n`_ and\n`SearxNG Search Syntax `_\nfor more details.\nNotes\n-----\nThis wrapper is based on the SearxNG fork https://github.com/searxng/searxng which is\nbetter maintained than the original Searx project and offers more features.\nPublic searxNG instances often use a rate limiter for API usage, so you might want to\nuse a self hosted instance and disable the rate limiter.\nIf you are self-hosting an instance you can customize the rate limiter for your\nown network as described `here `_.\nFor a list of public SearxNG instances see https://searx.space/\n\"\"\"\nimport json\nfrom typing import Any, Dict, List, Optional\nimport aiohttp\nimport requests\nfrom pydantic import BaseModel, Extra, Field, PrivateAttr, root_validator, validator\nfrom langchain.utils import get_from_dict_or_env\ndef _get_default_params() -> dict:\n return {\"language\": \"en\", \"format\": \"json\"}", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"}
+{"id": "3b989edf2637-3", "text": "return {\"language\": \"en\", \"format\": \"json\"}\n[docs]class SearxResults(dict):\n \"\"\"Dict like wrapper around search api results.\"\"\"\n _data = \"\"\n def __init__(self, data: str):\n \"\"\"Take a raw result from Searx and make it into a dict like object.\"\"\"\n json_data = json.loads(data)\n super().__init__(json_data)\n self.__dict__ = self\n def __str__(self) -> str:\n \"\"\"Text representation of searx result.\"\"\"\n return self._data\n @property\n def results(self) -> Any:\n \"\"\"Silence mypy for accessing this field.\n :meta private:\n \"\"\"\n return self.get(\"results\")\n @property\n def answers(self) -> Any:\n \"\"\"Helper accessor on the json result.\"\"\"\n return self.get(\"answers\")\n[docs]class SearxSearchWrapper(BaseModel):\n \"\"\"Wrapper for Searx API.\n To use you need to provide the searx host by passing the named parameter\n ``searx_host`` or exporting the environment variable ``SEARX_HOST``.\n In some situations you might want to disable SSL verification, for example\n if you are running searx locally. You can do this by passing the named parameter\n ``unsecure``. You can also pass the host url scheme as ``http`` to disable SSL.\n Example:\n .. code-block:: python\n from langchain.utilities import SearxSearchWrapper\n searx = SearxSearchWrapper(searx_host=\"http://localhost:8888\")\n Example with SSL disabled:\n .. code-block:: python\n from langchain.utilities import SearxSearchWrapper", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"}
+{"id": "3b989edf2637-4", "text": ".. code-block:: python\n from langchain.utilities import SearxSearchWrapper\n # note the unsecure parameter is not needed if you pass the url scheme as\n # http\n searx = SearxSearchWrapper(searx_host=\"http://localhost:8888\",\n unsecure=True)\n \"\"\"\n _result: SearxResults = PrivateAttr()\n searx_host: str = \"\"\n unsecure: bool = False\n params: dict = Field(default_factory=_get_default_params)\n headers: Optional[dict] = None\n engines: Optional[List[str]] = []\n categories: Optional[List[str]] = []\n query_suffix: Optional[str] = \"\"\n k: int = 10\n aiosession: Optional[Any] = None\n @validator(\"unsecure\")\n def disable_ssl_warnings(cls, v: bool) -> bool:\n \"\"\"Disable SSL warnings.\"\"\"\n if v:\n # requests.urllib3.disable_warnings()\n try:\n import urllib3\n urllib3.disable_warnings()\n except ImportError as e:\n print(e)\n return v\n @root_validator()\n def validate_params(cls, values: Dict) -> Dict:\n \"\"\"Validate that custom searx params are merged with default ones.\"\"\"\n user_params = values[\"params\"]\n default = _get_default_params()\n values[\"params\"] = {**default, **user_params}\n engines = values.get(\"engines\")\n if engines:\n values[\"params\"][\"engines\"] = \",\".join(engines)\n categories = values.get(\"categories\")\n if categories:\n values[\"params\"][\"categories\"] = \",\".join(categories)", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"}
+{"id": "3b989edf2637-5", "text": "if categories:\n values[\"params\"][\"categories\"] = \",\".join(categories)\n searx_host = get_from_dict_or_env(values, \"searx_host\", \"SEARX_HOST\")\n if not searx_host.startswith(\"http\"):\n print(\n f\"Warning: missing the url scheme on host \\\n ! assuming secure https://{searx_host} \"\n )\n searx_host = \"https://\" + searx_host\n elif searx_host.startswith(\"http://\"):\n values[\"unsecure\"] = True\n cls.disable_ssl_warnings(True)\n values[\"searx_host\"] = searx_host\n return values\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def _searx_api_query(self, params: dict) -> SearxResults:\n \"\"\"Actual request to searx API.\"\"\"\n raw_result = requests.get(\n self.searx_host,\n headers=self.headers,\n params=params,\n verify=not self.unsecure,\n )\n # test if http result is ok\n if not raw_result.ok:\n raise ValueError(\"Searx API returned an error: \", raw_result.text)\n res = SearxResults(raw_result.text)\n self._result = res\n return res\n async def _asearx_api_query(self, params: dict) -> SearxResults:\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:\n async with session.get(\n self.searx_host,\n headers=self.headers,\n params=params,\n ssl=(lambda: False if self.unsecure else None)(),\n ) as response:", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"}
+{"id": "3b989edf2637-6", "text": ") as response:\n if not response.ok:\n raise ValueError(\"Searx API returned an error: \", response.text)\n result = SearxResults(await response.text())\n self._result = result\n else:\n async with self.aiosession.get(\n self.searx_host,\n headers=self.headers,\n params=params,\n verify=not self.unsecure,\n ) as response:\n if not response.ok:\n raise ValueError(\"Searx API returned an error: \", response.text)\n result = SearxResults(await response.text())\n self._result = result\n return result\n[docs] def run(\n self,\n query: str,\n engines: Optional[List[str]] = None,\n categories: Optional[List[str]] = None,\n query_suffix: Optional[str] = \"\",\n **kwargs: Any,\n ) -> str:\n \"\"\"Run query through Searx API and parse results.\n You can pass any other params to the searx query API.\n Args:\n query: The query to search for.\n query_suffix: Extra suffix appended to the query.\n engines: List of engines to use for the query.\n categories: List of categories to use for the query.\n **kwargs: extra parameters to pass to the searx API.\n Returns:\n str: The result of the query.\n Raises:\n ValueError: If an error occured with the query.\n Example:\n This will make a query to the qwant engine:\n .. code-block:: python\n from langchain.utilities import SearxSearchWrapper\n searx = SearxSearchWrapper(searx_host=\"http://my.searx.host\")", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"}
+{"id": "3b989edf2637-7", "text": "searx.run(\"what is the weather in France ?\", engine=\"qwant\")\n # the same result can be achieved using the `!` syntax of searx\n # to select the engine using `query_suffix`\n searx.run(\"what is the weather in France ?\", query_suffix=\"!qwant\")\n \"\"\"\n _params = {\n \"q\": query,\n }\n params = {**self.params, **_params, **kwargs}\n if self.query_suffix and len(self.query_suffix) > 0:\n params[\"q\"] += \" \" + self.query_suffix\n if isinstance(query_suffix, str) and len(query_suffix) > 0:\n params[\"q\"] += \" \" + query_suffix\n if isinstance(engines, list) and len(engines) > 0:\n params[\"engines\"] = \",\".join(engines)\n if isinstance(categories, list) and len(categories) > 0:\n params[\"categories\"] = \",\".join(categories)\n res = self._searx_api_query(params)\n if len(res.answers) > 0:\n toret = res.answers[0]\n # only return the content of the results list\n elif len(res.results) > 0:\n toret = \"\\n\\n\".join([r.get(\"content\", \"\") for r in res.results[: self.k]])\n else:\n toret = \"No good search result found\"\n return toret\n[docs] async def arun(\n self,\n query: str,\n engines: Optional[List[str]] = None,\n query_suffix: Optional[str] = \"\",\n **kwargs: Any,\n ) -> str:\n \"\"\"Asynchronously version of `run`.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"}
+{"id": "3b989edf2637-8", "text": ") -> str:\n \"\"\"Asynchronously version of `run`.\"\"\"\n _params = {\n \"q\": query,\n }\n params = {**self.params, **_params, **kwargs}\n if self.query_suffix and len(self.query_suffix) > 0:\n params[\"q\"] += \" \" + self.query_suffix\n if isinstance(query_suffix, str) and len(query_suffix) > 0:\n params[\"q\"] += \" \" + query_suffix\n if isinstance(engines, list) and len(engines) > 0:\n params[\"engines\"] = \",\".join(engines)\n res = await self._asearx_api_query(params)\n if len(res.answers) > 0:\n toret = res.answers[0]\n # only return the content of the results list\n elif len(res.results) > 0:\n toret = \"\\n\\n\".join([r.get(\"content\", \"\") for r in res.results[: self.k]])\n else:\n toret = \"No good search result found\"\n return toret\n[docs] def results(\n self,\n query: str,\n num_results: int,\n engines: Optional[List[str]] = None,\n categories: Optional[List[str]] = None,\n query_suffix: Optional[str] = \"\",\n **kwargs: Any,\n ) -> List[Dict]:\n \"\"\"Run query through Searx API and returns the results with metadata.\n Args:\n query: The query to search for.\n query_suffix: Extra suffix appended to the query.\n num_results: Limit the number of results to return.\n engines: List of engines to use for the query.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"}
+{"id": "3b989edf2637-9", "text": "engines: List of engines to use for the query.\n categories: List of categories to use for the query.\n **kwargs: extra parameters to pass to the searx API.\n Returns:\n Dict with the following keys:\n {\n snippet: The description of the result.\n title: The title of the result.\n link: The link to the result.\n engines: The engines used for the result.\n category: Searx category of the result.\n }\n \"\"\"\n _params = {\n \"q\": query,\n }\n params = {**self.params, **_params, **kwargs}\n if self.query_suffix and len(self.query_suffix) > 0:\n params[\"q\"] += \" \" + self.query_suffix\n if isinstance(query_suffix, str) and len(query_suffix) > 0:\n params[\"q\"] += \" \" + query_suffix\n if isinstance(engines, list) and len(engines) > 0:\n params[\"engines\"] = \",\".join(engines)\n if isinstance(categories, list) and len(categories) > 0:\n params[\"categories\"] = \",\".join(categories)\n results = self._searx_api_query(params).results[:num_results]\n if len(results) == 0:\n return [{\"Result\": \"No good Search Result was found\"}]\n return [\n {\n \"snippet\": result.get(\"content\", \"\"),\n \"title\": result[\"title\"],\n \"link\": result[\"url\"],\n \"engines\": result[\"engines\"],\n \"category\": result[\"category\"],\n }\n for result in results\n ]\n[docs] async def aresults(\n self,", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"}
+{"id": "3b989edf2637-10", "text": "]\n[docs] async def aresults(\n self,\n query: str,\n num_results: int,\n engines: Optional[List[str]] = None,\n query_suffix: Optional[str] = \"\",\n **kwargs: Any,\n ) -> List[Dict]:\n \"\"\"Asynchronously query with json results.\n Uses aiohttp. See `results` for more info.\n \"\"\"\n _params = {\n \"q\": query,\n }\n params = {**self.params, **_params, **kwargs}\n if self.query_suffix and len(self.query_suffix) > 0:\n params[\"q\"] += \" \" + self.query_suffix\n if isinstance(query_suffix, str) and len(query_suffix) > 0:\n params[\"q\"] += \" \" + query_suffix\n if isinstance(engines, list) and len(engines) > 0:\n params[\"engines\"] = \",\".join(engines)\n results = (await self._asearx_api_query(params)).results[:num_results]\n if len(results) == 0:\n return [{\"Result\": \"No good Search Result was found\"}]\n return [\n {\n \"snippet\": result.get(\"content\", \"\"),\n \"title\": result[\"title\"],\n \"link\": result[\"url\"],\n \"engines\": result[\"engines\"],\n \"category\": result[\"category\"],\n }\n for result in results\n ]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"}
+{"id": "65732cf5e956-0", "text": "Source code for langchain.utilities.wolfram_alpha\n\"\"\"Util that calls WolframAlpha.\"\"\"\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.utils import get_from_dict_or_env\n[docs]class WolframAlphaAPIWrapper(BaseModel):\n \"\"\"Wrapper for Wolfram Alpha.\n Docs for using:\n 1. Go to wolfram alpha and sign up for a developer account\n 2. Create an app and get your APP ID\n 3. Save your APP ID into WOLFRAM_ALPHA_APPID env variable\n 4. pip install wolframalpha\n \"\"\"\n wolfram_client: Any #: :meta private:\n wolfram_alpha_appid: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n wolfram_alpha_appid = get_from_dict_or_env(\n values, \"wolfram_alpha_appid\", \"WOLFRAM_ALPHA_APPID\"\n )\n values[\"wolfram_alpha_appid\"] = wolfram_alpha_appid\n try:\n import wolframalpha\n except ImportError:\n raise ImportError(\n \"wolframalpha is not installed. \"\n \"Please install it with `pip install wolframalpha`\"\n )\n client = wolframalpha.Client(wolfram_alpha_appid)\n values[\"wolfram_client\"] = client\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run query through WolframAlpha and parse result.\"\"\"\n res = self.wolfram_client.query(query)", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/wolfram_alpha.html"}
+{"id": "65732cf5e956-1", "text": "res = self.wolfram_client.query(query)\n try:\n assumption = next(res.pods).text\n answer = next(res.results).text\n except StopIteration:\n return \"Wolfram Alpha wasn't able to answer it\"\n if answer is None or answer == \"\":\n # We don't want to return the assumption alone if answer is empty\n return \"No good Wolfram Alpha Result was found\"\n else:\n return f\"Assumption: {assumption} \\nAnswer: {answer}\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/wolfram_alpha.html"}
+{"id": "56f2766f87ff-0", "text": "Source code for langchain.utilities.powerbi\n\"\"\"Wrapper around a Power BI endpoint.\"\"\"\nfrom __future__ import annotations\nimport asyncio\nimport logging\nimport os\nfrom typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Union\nimport aiohttp\nimport requests\nfrom aiohttp import ServerTimeoutError\nfrom pydantic import BaseModel, Field, root_validator, validator\nfrom requests.exceptions import Timeout\n_LOGGER = logging.getLogger(__name__)\nBASE_URL = os.getenv(\"POWERBI_BASE_URL\", \"https://api.powerbi.com/v1.0/myorg\")\nif TYPE_CHECKING:\n from azure.core.credentials import TokenCredential\n[docs]class PowerBIDataset(BaseModel):\n \"\"\"Create PowerBI engine from dataset ID and credential or token.\n Use either the credential or a supplied token to authenticate.\n If both are supplied the credential is used to generate a token.\n The impersonated_user_name is the UPN of a user to be impersonated.\n If the model is not RLS enabled, this will be ignored.\n \"\"\"\n dataset_id: str\n table_names: List[str]\n group_id: Optional[str] = None\n credential: Optional[TokenCredential] = None\n token: Optional[str] = None\n impersonated_user_name: Optional[str] = None\n sample_rows_in_table_info: int = Field(default=1, gt=0, le=10)\n schemas: Dict[str, str] = Field(default_factory=dict)\n aiosession: Optional[aiohttp.ClientSession] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n @validator(\"table_names\", allow_reuse=True)\n def fix_table_names(cls, table_names: List[str]) -> List[str]:\n \"\"\"Fix the table names.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html"}
+{"id": "56f2766f87ff-1", "text": "\"\"\"Fix the table names.\"\"\"\n return [fix_table_name(table) for table in table_names]\n @root_validator(pre=True, allow_reuse=True)\n def token_or_credential_present(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Validate that at least one of token and credentials is present.\"\"\"\n if \"token\" in values or \"credential\" in values:\n return values\n raise ValueError(\"Please provide either a credential or a token.\")\n @property\n def request_url(self) -> str:\n \"\"\"Get the request url.\"\"\"\n if self.group_id:\n return f\"{BASE_URL}/groups/{self.group_id}/datasets/{self.dataset_id}/executeQueries\" # noqa: E501 # pylint: disable=C0301\n return f\"{BASE_URL}/datasets/{self.dataset_id}/executeQueries\" # noqa: E501 # pylint: disable=C0301\n @property\n def headers(self) -> Dict[str, str]:\n \"\"\"Get the token.\"\"\"\n if self.token:\n return {\n \"Content-Type\": \"application/json\",\n \"Authorization\": \"Bearer \" + self.token,\n }\n from azure.core.exceptions import (\n ClientAuthenticationError, # pylint: disable=import-outside-toplevel\n )\n if self.credential:\n try:\n token = self.credential.get_token(\n \"https://analysis.windows.net/powerbi/api/.default\"\n ).token\n return {\n \"Content-Type\": \"application/json\",\n \"Authorization\": \"Bearer \" + token,\n }\n except Exception as exc: # pylint: disable=broad-exception-caught\n raise ClientAuthenticationError(\n \"Could not get a token from the supplied credentials.\"\n ) from exc", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html"}
+{"id": "56f2766f87ff-2", "text": "\"Could not get a token from the supplied credentials.\"\n ) from exc\n raise ClientAuthenticationError(\"No credential or token supplied.\")\n[docs] def get_table_names(self) -> Iterable[str]:\n \"\"\"Get names of tables available.\"\"\"\n return self.table_names\n[docs] def get_schemas(self) -> str:\n \"\"\"Get the available schema's.\"\"\"\n if self.schemas:\n return \", \".join([f\"{key}: {value}\" for key, value in self.schemas.items()])\n return \"No known schema's yet. Use the schema_powerbi tool first.\"\n @property\n def table_info(self) -> str:\n \"\"\"Information about all tables in the database.\"\"\"\n return self.get_table_info()\n def _get_tables_to_query(\n self, table_names: Optional[Union[List[str], str]] = None\n ) -> Optional[List[str]]:\n \"\"\"Get the tables names that need to be queried, after checking they exist.\"\"\"\n if table_names is not None:\n if (\n isinstance(table_names, list)\n and len(table_names) > 0\n and table_names[0] != \"\"\n ):\n fixed_tables = [fix_table_name(table) for table in table_names]\n non_existing_tables = [\n table for table in fixed_tables if table not in self.table_names\n ]\n if non_existing_tables:\n _LOGGER.warning(\n \"Table(s) %s not found in dataset.\",\n \", \".join(non_existing_tables),\n )\n tables = [\n table for table in fixed_tables if table not in non_existing_tables\n ]\n return tables if tables else None\n if isinstance(table_names, str) and table_names != \"\":", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html"}
+{"id": "56f2766f87ff-3", "text": "if isinstance(table_names, str) and table_names != \"\":\n if table_names not in self.table_names:\n _LOGGER.warning(\"Table %s not found in dataset.\", table_names)\n return None\n return [fix_table_name(table_names)]\n return self.table_names\n def _get_tables_todo(self, tables_todo: List[str]) -> List[str]:\n \"\"\"Get the tables that still need to be queried.\"\"\"\n return [table for table in tables_todo if table not in self.schemas]\n def _get_schema_for_tables(self, table_names: List[str]) -> str:\n \"\"\"Create a string of the table schemas for the supplied tables.\"\"\"\n schemas = [\n schema for table, schema in self.schemas.items() if table in table_names\n ]\n return \", \".join(schemas)\n[docs] def get_table_info(\n self, table_names: Optional[Union[List[str], str]] = None\n ) -> str:\n \"\"\"Get information about specified tables.\"\"\"\n tables_requested = self._get_tables_to_query(table_names)\n if tables_requested is None:\n return \"No (valid) tables requested.\"\n tables_todo = self._get_tables_todo(tables_requested)\n for table in tables_todo:\n self._get_schema(table)\n return self._get_schema_for_tables(tables_requested)\n[docs] async def aget_table_info(\n self, table_names: Optional[Union[List[str], str]] = None\n ) -> str:\n \"\"\"Get information about specified tables.\"\"\"\n tables_requested = self._get_tables_to_query(table_names)\n if tables_requested is None:\n return \"No (valid) tables requested.\"\n tables_todo = self._get_tables_todo(tables_requested)", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html"}
+{"id": "56f2766f87ff-4", "text": "tables_todo = self._get_tables_todo(tables_requested)\n await asyncio.gather(*[self._aget_schema(table) for table in tables_todo])\n return self._get_schema_for_tables(tables_requested)\n def _get_schema(self, table: str) -> None:\n \"\"\"Get the schema for a table.\"\"\"\n try:\n result = self.run(\n f\"EVALUATE TOPN({self.sample_rows_in_table_info}, {table})\"\n )\n self.schemas[table] = json_to_md(result[\"results\"][0][\"tables\"][0][\"rows\"])\n except Timeout:\n _LOGGER.warning(\"Timeout while getting table info for %s\", table)\n self.schemas[table] = \"unknown\"\n except Exception as exc: # pylint: disable=broad-exception-caught\n _LOGGER.warning(\"Error while getting table info for %s: %s\", table, exc)\n self.schemas[table] = \"unknown\"\n async def _aget_schema(self, table: str) -> None:\n \"\"\"Get the schema for a table.\"\"\"\n try:\n result = await self.arun(\n f\"EVALUATE TOPN({self.sample_rows_in_table_info}, {table})\"\n )\n self.schemas[table] = json_to_md(result[\"results\"][0][\"tables\"][0][\"rows\"])\n except ServerTimeoutError:\n _LOGGER.warning(\"Timeout while getting table info for %s\", table)\n self.schemas[table] = \"unknown\"\n except Exception as exc: # pylint: disable=broad-exception-caught\n _LOGGER.warning(\"Error while getting table info for %s: %s\", table, exc)\n self.schemas[table] = \"unknown\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html"}
+{"id": "56f2766f87ff-5", "text": "self.schemas[table] = \"unknown\"\n def _create_json_content(self, command: str) -> dict[str, Any]:\n \"\"\"Create the json content for the request.\"\"\"\n return {\n \"queries\": [{\"query\": rf\"{command}\"}],\n \"impersonatedUserName\": self.impersonated_user_name,\n \"serializerSettings\": {\"includeNulls\": True},\n }\n[docs] def run(self, command: str) -> Any:\n \"\"\"Execute a DAX command and return a json representing the results.\"\"\"\n _LOGGER.debug(\"Running command: %s\", command)\n result = requests.post(\n self.request_url,\n json=self._create_json_content(command),\n headers=self.headers,\n timeout=10,\n )\n return result.json()\n[docs] async def arun(self, command: str) -> Any:\n \"\"\"Execute a DAX command and return the result asynchronously.\"\"\"\n _LOGGER.debug(\"Running command: %s\", command)\n if self.aiosession:\n async with self.aiosession.post(\n self.request_url,\n headers=self.headers,\n json=self._create_json_content(command),\n timeout=10,\n ) as response:\n response_json = await response.json()\n return response_json\n async with aiohttp.ClientSession() as session:\n async with session.post(\n self.request_url,\n headers=self.headers,\n json=self._create_json_content(command),\n timeout=10,\n ) as response:\n response_json = await response.json()\n return response_json\ndef json_to_md(\n json_contents: List[Dict[str, Union[str, int, float]]],\n table_name: Optional[str] = None,\n) -> str:", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html"}
+{"id": "56f2766f87ff-6", "text": "table_name: Optional[str] = None,\n) -> str:\n \"\"\"Converts a JSON object to a markdown table.\"\"\"\n output_md = \"\"\n headers = json_contents[0].keys()\n for header in headers:\n header.replace(\"[\", \".\").replace(\"]\", \"\")\n if table_name:\n header.replace(f\"{table_name}.\", \"\")\n output_md += f\"| {header} \"\n output_md += \"|\\n\"\n for row in json_contents:\n for value in row.values():\n output_md += f\"| {value} \"\n output_md += \"|\\n\"\n return output_md\ndef fix_table_name(table: str) -> str:\n \"\"\"Add single quotes around table names that contain spaces.\"\"\"\n if \" \" in table and not table.startswith(\"'\") and not table.endswith(\"'\"):\n return f\"'{table}'\"\n return table\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html"}
+{"id": "bedde0423e99-0", "text": "Source code for langchain.utilities.pupmed\nimport json\nimport logging\nimport time\nimport urllib.error\nimport urllib.request\nfrom typing import List\nfrom pydantic import BaseModel, Extra\nfrom langchain.schema import Document\nlogger = logging.getLogger(__name__)\n[docs]class PubMedAPIWrapper(BaseModel):\n \"\"\"\n Wrapper around PubMed API.\n This wrapper will use the PubMed API to conduct searches and fetch\n document summaries. By default, it will return the document summaries\n of the top-k results of an input search.\n Parameters:\n top_k_results: number of the top-scored document used for the PubMed tool\n load_max_docs: a limit to the number of loaded documents\n load_all_available_meta:\n if True: the `metadata` of the loaded Documents gets all available meta info\n (see https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch)\n if False: the `metadata` gets only the most informative fields.\n \"\"\"\n base_url_esearch = \"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?\"\n base_url_efetch = \"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?\"\n max_retry = 5\n sleep_time = 0.2\n # Default values for the parameters\n top_k_results: int = 3\n load_max_docs: int = 25\n ARXIV_MAX_QUERY_LENGTH = 300\n doc_content_chars_max: int = 2000\n load_all_available_meta: bool = False\n email: str = \"your_email@example.com\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def run(self, query: str) -> str:", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/pupmed.html"}
+{"id": "bedde0423e99-1", "text": "[docs] def run(self, query: str) -> str:\n \"\"\"\n Run PubMed search and get the article meta information.\n See https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch\n It uses only the most informative fields of article meta information.\n \"\"\"\n try:\n # Retrieve the top-k results for the query\n docs = [\n f\"Published: {result['pub_date']}\\nTitle: {result['title']}\\n\"\n f\"Summary: {result['summary']}\"\n for result in self.load(query[: self.ARXIV_MAX_QUERY_LENGTH])\n ]\n # Join the results and limit the character count\n return (\n \"\\n\\n\".join(docs)[: self.doc_content_chars_max]\n if docs\n else \"No good PubMed Result was found\"\n )\n except Exception as ex:\n return f\"PubMed exception: {ex}\"\n[docs] def load(self, query: str) -> List[dict]:\n \"\"\"\n Search PubMed for documents matching the query.\n Return a list of dictionaries containing the document metadata.\n \"\"\"\n url = (\n self.base_url_esearch\n + \"db=pubmed&term=\"\n + str({urllib.parse.quote(query)})\n + f\"&retmode=json&retmax={self.top_k_results}&usehistory=y\"\n )\n result = urllib.request.urlopen(url)\n text = result.read().decode(\"utf-8\")\n json_text = json.loads(text)\n articles = []\n webenv = json_text[\"esearchresult\"][\"webenv\"]\n for uid in json_text[\"esearchresult\"][\"idlist\"]:\n article = self.retrieve_article(uid, webenv)\n articles.append(article)", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/pupmed.html"}
+{"id": "bedde0423e99-2", "text": "article = self.retrieve_article(uid, webenv)\n articles.append(article)\n # Convert the list of articles to a JSON string\n return articles\n def _transform_doc(self, doc: dict) -> Document:\n summary = doc.pop(\"summary\")\n return Document(page_content=summary, metadata=doc)\n[docs] def load_docs(self, query: str) -> List[Document]:\n document_dicts = self.load(query=query)\n return [self._transform_doc(d) for d in document_dicts]\n[docs] def retrieve_article(self, uid: str, webenv: str) -> dict:\n url = (\n self.base_url_efetch\n + \"db=pubmed&retmode=xml&id=\"\n + uid\n + \"&webenv=\"\n + webenv\n )\n retry = 0\n while True:\n try:\n result = urllib.request.urlopen(url)\n break\n except urllib.error.HTTPError as e:\n if e.code == 429 and retry < self.max_retry:\n # Too Many Requests error\n # wait for an exponentially increasing amount of time\n print(\n f\"Too Many Requests, \"\n f\"waiting for {self.sleep_time:.2f} seconds...\"\n )\n time.sleep(self.sleep_time)\n self.sleep_time *= 2\n retry += 1\n else:\n raise e\n xml_text = result.read().decode(\"utf-8\")\n # Get title\n title = \"\"\n if \"\" in xml_text and \"\" in xml_text:\n start_tag = \"\"\n end_tag = \"\"\n title = xml_text[", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/pupmed.html"}
+{"id": "bedde0423e99-3", "text": "end_tag = \"\"\n title = xml_text[\n xml_text.index(start_tag) + len(start_tag) : xml_text.index(end_tag)\n ]\n # Get abstract\n abstract = \"\"\n if \"\" in xml_text and \"\" in xml_text:\n start_tag = \"\"\n end_tag = \"\"\n abstract = xml_text[\n xml_text.index(start_tag) + len(start_tag) : xml_text.index(end_tag)\n ]\n # Get publication date\n pub_date = \"\"\n if \"\" in xml_text and \"\" in xml_text:\n start_tag = \"\"\n end_tag = \"\"\n pub_date = xml_text[\n xml_text.index(start_tag) + len(start_tag) : xml_text.index(end_tag)\n ]\n # Return article as dictionary\n article = {\n \"uid\": uid,\n \"title\": title,\n \"summary\": abstract,\n \"pub_date\": pub_date,\n }\n return article\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/pupmed.html"}
+{"id": "a2b37c2ea00a-0", "text": "Source code for langchain.utilities.apify\nfrom typing import Any, Callable, Dict, Optional\nfrom pydantic import BaseModel, root_validator\nfrom langchain.document_loaders import ApifyDatasetLoader\nfrom langchain.document_loaders.base import Document\nfrom langchain.utils import get_from_dict_or_env\n[docs]class ApifyWrapper(BaseModel):\n \"\"\"Wrapper around Apify.\n To use, you should have the ``apify-client`` python package installed,\n and the environment variable ``APIFY_API_TOKEN`` set with your API key, or pass\n `apify_api_token` as a named parameter to the constructor.\n \"\"\"\n apify_client: Any\n apify_client_async: Any\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate environment.\n Validate that an Apify API token is set and the apify-client\n Python package exists in the current environment.\n \"\"\"\n apify_api_token = get_from_dict_or_env(\n values, \"apify_api_token\", \"APIFY_API_TOKEN\"\n )\n try:\n from apify_client import ApifyClient, ApifyClientAsync\n values[\"apify_client\"] = ApifyClient(apify_api_token)\n values[\"apify_client_async\"] = ApifyClientAsync(apify_api_token)\n except ImportError:\n raise ValueError(\n \"Could not import apify-client Python package. \"\n \"Please install it with `pip install apify-client`.\"\n )\n return values\n[docs] def call_actor(\n self,\n actor_id: str,\n run_input: Dict,\n dataset_mapping_function: Callable[[Dict], Document],\n *,\n build: Optional[str] = None,", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/apify.html"}
+{"id": "a2b37c2ea00a-1", "text": "*,\n build: Optional[str] = None,\n memory_mbytes: Optional[int] = None,\n timeout_secs: Optional[int] = None,\n ) -> ApifyDatasetLoader:\n \"\"\"Run an Actor on the Apify platform and wait for results to be ready.\n Args:\n actor_id (str): The ID or name of the Actor on the Apify platform.\n run_input (Dict): The input object of the Actor that you're trying to run.\n dataset_mapping_function (Callable): A function that takes a single\n dictionary (an Apify dataset item) and converts it to an\n instance of the Document class.\n build (str, optional): Optionally specifies the actor build to run.\n It can be either a build tag or build number.\n memory_mbytes (int, optional): Optional memory limit for the run,\n in megabytes.\n timeout_secs (int, optional): Optional timeout for the run, in seconds.\n Returns:\n ApifyDatasetLoader: A loader that will fetch the records from the\n Actor run's default dataset.\n \"\"\"\n actor_call = self.apify_client.actor(actor_id).call(\n run_input=run_input,\n build=build,\n memory_mbytes=memory_mbytes,\n timeout_secs=timeout_secs,\n )\n return ApifyDatasetLoader(\n dataset_id=actor_call[\"defaultDatasetId\"],\n dataset_mapping_function=dataset_mapping_function,\n )\n[docs] async def acall_actor(\n self,\n actor_id: str,\n run_input: Dict,\n dataset_mapping_function: Callable[[Dict], Document],\n *,\n build: Optional[str] = None,\n memory_mbytes: Optional[int] = None,", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/apify.html"}
+{"id": "a2b37c2ea00a-2", "text": "memory_mbytes: Optional[int] = None,\n timeout_secs: Optional[int] = None,\n ) -> ApifyDatasetLoader:\n \"\"\"Run an Actor on the Apify platform and wait for results to be ready.\n Args:\n actor_id (str): The ID or name of the Actor on the Apify platform.\n run_input (Dict): The input object of the Actor that you're trying to run.\n dataset_mapping_function (Callable): A function that takes a single\n dictionary (an Apify dataset item) and converts it to\n an instance of the Document class.\n build (str, optional): Optionally specifies the actor build to run.\n It can be either a build tag or build number.\n memory_mbytes (int, optional): Optional memory limit for the run,\n in megabytes.\n timeout_secs (int, optional): Optional timeout for the run, in seconds.\n Returns:\n ApifyDatasetLoader: A loader that will fetch the records from the\n Actor run's default dataset.\n \"\"\"\n actor_call = await self.apify_client_async.actor(actor_id).call(\n run_input=run_input,\n build=build,\n memory_mbytes=memory_mbytes,\n timeout_secs=timeout_secs,\n )\n return ApifyDatasetLoader(\n dataset_id=actor_call[\"defaultDatasetId\"],\n dataset_mapping_function=dataset_mapping_function,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/apify.html"}
+{"id": "afbe5348ea6e-0", "text": "Source code for langchain.utilities.spark_sql\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Any, Iterable, List, Optional\nif TYPE_CHECKING:\n from pyspark.sql import DataFrame, Row, SparkSession\n[docs]class SparkSQL:\n def __init__(\n self,\n spark_session: Optional[SparkSession] = None,\n catalog: Optional[str] = None,\n schema: Optional[str] = None,\n ignore_tables: Optional[List[str]] = None,\n include_tables: Optional[List[str]] = None,\n sample_rows_in_table_info: int = 3,\n ):\n try:\n from pyspark.sql import SparkSession\n except ImportError:\n raise ValueError(\n \"pyspark is not installed. Please install it with `pip install pyspark`\"\n )\n self._spark = (\n spark_session if spark_session else SparkSession.builder.getOrCreate()\n )\n if catalog is not None:\n self._spark.catalog.setCurrentCatalog(catalog)\n if schema is not None:\n self._spark.catalog.setCurrentDatabase(schema)\n self._all_tables = set(self._get_all_table_names())\n self._include_tables = set(include_tables) if include_tables else set()\n if self._include_tables:\n missing_tables = self._include_tables - self._all_tables\n if missing_tables:\n raise ValueError(\n f\"include_tables {missing_tables} not found in database\"\n )\n self._ignore_tables = set(ignore_tables) if ignore_tables else set()\n if self._ignore_tables:\n missing_tables = self._ignore_tables - self._all_tables\n if missing_tables:\n raise ValueError(\n f\"ignore_tables {missing_tables} not found in database\"\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/spark_sql.html"}
+{"id": "afbe5348ea6e-1", "text": "f\"ignore_tables {missing_tables} not found in database\"\n )\n usable_tables = self.get_usable_table_names()\n self._usable_tables = set(usable_tables) if usable_tables else self._all_tables\n if not isinstance(sample_rows_in_table_info, int):\n raise TypeError(\"sample_rows_in_table_info must be an integer\")\n self._sample_rows_in_table_info = sample_rows_in_table_info\n[docs] @classmethod\n def from_uri(\n cls, database_uri: str, engine_args: Optional[dict] = None, **kwargs: Any\n ) -> SparkSQL:\n \"\"\"Creating a remote Spark Session via Spark connect.\n For example: SparkSQL.from_uri(\"sc://localhost:15002\")\n \"\"\"\n try:\n from pyspark.sql import SparkSession\n except ImportError:\n raise ValueError(\n \"pyspark is not installed. Please install it with `pip install pyspark`\"\n )\n spark = SparkSession.builder.remote(database_uri).getOrCreate()\n return cls(spark, **kwargs)\n[docs] def get_usable_table_names(self) -> Iterable[str]:\n \"\"\"Get names of tables available.\"\"\"\n if self._include_tables:\n return self._include_tables\n # sorting the result can help LLM understanding it.\n return sorted(self._all_tables - self._ignore_tables)\n def _get_all_table_names(self) -> Iterable[str]:\n rows = self._spark.sql(\"SHOW TABLES\").select(\"tableName\").collect()\n return list(map(lambda row: row.tableName, rows))\n def _get_create_table_stmt(self, table: str) -> str:\n statement = (\n self._spark.sql(f\"SHOW CREATE TABLE {table}\").collect()[0].createtab_stmt", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/spark_sql.html"}
+{"id": "afbe5348ea6e-2", "text": ")\n # Ignore the data source provider and options to reduce the number of tokens.\n using_clause_index = statement.find(\"USING\")\n return statement[:using_clause_index] + \";\"\n[docs] def get_table_info(self, table_names: Optional[List[str]] = None) -> str:\n all_table_names = self.get_usable_table_names()\n if table_names is not None:\n missing_tables = set(table_names).difference(all_table_names)\n if missing_tables:\n raise ValueError(f\"table_names {missing_tables} not found in database\")\n all_table_names = table_names\n tables = []\n for table_name in all_table_names:\n table_info = self._get_create_table_stmt(table_name)\n if self._sample_rows_in_table_info:\n table_info += \"\\n\\n/*\"\n table_info += f\"\\n{self._get_sample_spark_rows(table_name)}\\n\"\n table_info += \"*/\"\n tables.append(table_info)\n final_str = \"\\n\\n\".join(tables)\n return final_str\n def _get_sample_spark_rows(self, table: str) -> str:\n query = f\"SELECT * FROM {table} LIMIT {self._sample_rows_in_table_info}\"\n df = self._spark.sql(query)\n columns_str = \"\\t\".join(list(map(lambda f: f.name, df.schema.fields)))\n try:\n sample_rows = self._get_dataframe_results(df)\n # save the sample rows in string format\n sample_rows_str = \"\\n\".join([\"\\t\".join(row) for row in sample_rows])\n except Exception:\n sample_rows_str = \"\"\n return (\n f\"{self._sample_rows_in_table_info} rows from {table} table:\\n\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/spark_sql.html"}
+{"id": "afbe5348ea6e-3", "text": "f\"{columns_str}\\n\"\n f\"{sample_rows_str}\"\n )\n def _convert_row_as_tuple(self, row: Row) -> tuple:\n return tuple(map(str, row.asDict().values()))\n def _get_dataframe_results(self, df: DataFrame) -> list:\n return list(map(self._convert_row_as_tuple, df.collect()))\n[docs] def run(self, command: str, fetch: str = \"all\") -> str:\n df = self._spark.sql(command)\n if fetch == \"one\":\n df = df.limit(1)\n return str(self._get_dataframe_results(df))\n[docs] def get_table_info_no_throw(self, table_names: Optional[List[str]] = None) -> str:\n \"\"\"Get information about specified tables.\n Follows best practices as specified in: Rajkumar et al, 2022\n (https://arxiv.org/abs/2204.00498)\n If `sample_rows_in_table_info`, the specified number of sample rows will be\n appended to each table description. This can increase performance as\n demonstrated in the paper.\n \"\"\"\n try:\n return self.get_table_info(table_names)\n except ValueError as e:\n \"\"\"Format the error message\"\"\"\n return f\"Error: {e}\"\n[docs] def run_no_throw(self, command: str, fetch: str = \"all\") -> str:\n \"\"\"Execute a SQL command and return a string representing the results.\n If the statement returns rows, a string of the results is returned.\n If the statement returns no rows, an empty string is returned.\n If the statement throws an error, the error message is returned.\n \"\"\"\n try:\n from pyspark.errors import PySparkException", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/spark_sql.html"}
+{"id": "afbe5348ea6e-4", "text": "\"\"\"\n try:\n from pyspark.errors import PySparkException\n except ImportError:\n raise ValueError(\n \"pyspark is not installed. Please install it with `pip install pyspark`\"\n )\n try:\n return self.run(command, fetch)\n except PySparkException as e:\n \"\"\"Format the error message\"\"\"\n return f\"Error: {e}\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/spark_sql.html"}
+{"id": "e3cfe05aab50-0", "text": "Source code for langchain.utilities.graphql\nimport json\nfrom typing import Any, Callable, Dict, Optional\nfrom pydantic import BaseModel, Extra, root_validator\n[docs]class GraphQLAPIWrapper(BaseModel):\n \"\"\"Wrapper around GraphQL API.\n To use, you should have the ``gql`` python package installed.\n This wrapper will use the GraphQL API to conduct queries.\n \"\"\"\n custom_headers: Optional[Dict[str, str]] = None\n graphql_endpoint: str\n gql_client: Any #: :meta private:\n gql_function: Callable[[str], Any] #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in the environment.\"\"\"\n try:\n from gql import Client, gql\n from gql.transport.requests import RequestsHTTPTransport\n except ImportError as e:\n raise ImportError(\n \"Could not import gql python package. \"\n f\"Try installing it with `pip install gql`. Received error: {e}\"\n )\n headers = values.get(\"custom_headers\")\n transport = RequestsHTTPTransport(\n url=values[\"graphql_endpoint\"],\n headers=headers,\n )\n client = Client(transport=transport, fetch_schema_from_transport=True)\n values[\"gql_client\"] = client\n values[\"gql_function\"] = gql\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run a GraphQL query and get the results.\"\"\"\n result = self._execute_query(query)\n return json.dumps(result, indent=2)", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/graphql.html"}
+{"id": "e3cfe05aab50-1", "text": "return json.dumps(result, indent=2)\n def _execute_query(self, query: str) -> Dict[str, Any]:\n \"\"\"Execute a GraphQL query and return the results.\"\"\"\n document_node = self.gql_function(query)\n result = self.gql_client.execute(document_node)\n return result\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/graphql.html"}
+{"id": "405d4587a104-0", "text": "Source code for langchain.utilities.metaphor_search\n\"\"\"Util that calls Metaphor Search API.\nIn order to set this up, follow instructions at:\n\"\"\"\nimport json\nfrom typing import Dict, List\nimport aiohttp\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.utils import get_from_dict_or_env\nMETAPHOR_API_URL = \"https://api.metaphor.systems\"\n[docs]class MetaphorSearchAPIWrapper(BaseModel):\n \"\"\"Wrapper for Metaphor Search API.\"\"\"\n metaphor_api_key: str\n k: int = 10\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def _metaphor_search_results(self, query: str, num_results: int) -> List[dict]:\n headers = {\"X-Api-Key\": self.metaphor_api_key}\n params = {\"numResults\": num_results, \"query\": query}\n response = requests.post(\n # type: ignore\n f\"{METAPHOR_API_URL}/search\",\n headers=headers,\n json=params,\n )\n response.raise_for_status()\n search_results = response.json()\n print(search_results)\n return search_results[\"results\"]\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n metaphor_api_key = get_from_dict_or_env(\n values, \"metaphor_api_key\", \"METAPHOR_API_KEY\"\n )\n values[\"metaphor_api_key\"] = metaphor_api_key\n return values\n[docs] def results(self, query: str, num_results: int) -> List[Dict]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/metaphor_search.html"}
+{"id": "405d4587a104-1", "text": "\"\"\"Run query through Metaphor Search and return metadata.\n Args:\n query: The query to search for.\n num_results: The number of results to return.\n Returns:\n A list of dictionaries with the following keys:\n title - The title of the\n url - The url\n author - Author of the content, if applicable. Otherwise, None.\n date_created - Estimated date created,\n in YYYY-MM-DD format. Otherwise, None.\n \"\"\"\n raw_search_results = self._metaphor_search_results(\n query, num_results=num_results\n )\n return self._clean_results(raw_search_results)\n[docs] async def results_async(self, query: str, num_results: int) -> List[Dict]:\n \"\"\"Get results from the Metaphor Search API asynchronously.\"\"\"\n # Function to perform the API call\n async def fetch() -> str:\n headers = {\"X-Api-Key\": self.metaphor_api_key}\n params = {\"numResults\": num_results, \"query\": query}\n async with aiohttp.ClientSession() as session:\n async with session.post(\n f\"{METAPHOR_API_URL}/search\", json=params, headers=headers\n ) as res:\n if res.status == 200:\n data = await res.text()\n return data\n else:\n raise Exception(f\"Error {res.status}: {res.reason}\")\n results_json_str = await fetch()\n results_json = json.loads(results_json_str)\n return self._clean_results(results_json[\"results\"])\n def _clean_results(self, raw_search_results: List[Dict]) -> List[Dict]:\n cleaned_results = []\n for result in raw_search_results:\n cleaned_results.append(\n {", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/metaphor_search.html"}
+{"id": "405d4587a104-2", "text": "for result in raw_search_results:\n cleaned_results.append(\n {\n \"title\": result[\"title\"],\n \"url\": result[\"url\"],\n \"author\": result[\"author\"],\n \"date_created\": result[\"dateCreated\"],\n }\n )\n return cleaned_results\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/metaphor_search.html"}
+{"id": "0e36e0b9712b-0", "text": "Source code for langchain.utilities.wikipedia\n\"\"\"Util that calls Wikipedia.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.schema import Document\nlogger = logging.getLogger(__name__)\nWIKIPEDIA_MAX_QUERY_LENGTH = 300\n[docs]class WikipediaAPIWrapper(BaseModel):\n \"\"\"Wrapper around WikipediaAPI.\n To use, you should have the ``wikipedia`` python package installed.\n This wrapper will use the Wikipedia API to conduct searches and\n fetch page summaries. By default, it will return the page summaries\n of the top-k results.\n It limits the Document content by doc_content_chars_max.\n \"\"\"\n wiki_client: Any #: :meta private:\n top_k_results: int = 3\n lang: str = \"en\"\n load_all_available_meta: bool = False\n doc_content_chars_max: int = 4000\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in environment.\"\"\"\n try:\n import wikipedia\n wikipedia.set_lang(values[\"lang\"])\n values[\"wiki_client\"] = wikipedia\n except ImportError:\n raise ImportError(\n \"Could not import wikipedia python package. \"\n \"Please install it with `pip install wikipedia`.\"\n )\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run Wikipedia search and get page summaries.\"\"\"\n page_titles = self.wiki_client.search(query[:WIKIPEDIA_MAX_QUERY_LENGTH])\n summaries = []\n for page_title in page_titles[: self.top_k_results]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/wikipedia.html"}
+{"id": "0e36e0b9712b-1", "text": "summaries = []\n for page_title in page_titles[: self.top_k_results]:\n if wiki_page := self._fetch_page(page_title):\n if summary := self._formatted_page_summary(page_title, wiki_page):\n summaries.append(summary)\n if not summaries:\n return \"No good Wikipedia Search Result was found\"\n return \"\\n\\n\".join(summaries)[: self.doc_content_chars_max]\n @staticmethod\n def _formatted_page_summary(page_title: str, wiki_page: Any) -> Optional[str]:\n return f\"Page: {page_title}\\nSummary: {wiki_page.summary}\"\n def _page_to_document(self, page_title: str, wiki_page: Any) -> Document:\n main_meta = {\n \"title\": page_title,\n \"summary\": wiki_page.summary,\n \"source\": wiki_page.url,\n }\n add_meta = (\n {\n \"categories\": wiki_page.categories,\n \"page_url\": wiki_page.url,\n \"image_urls\": wiki_page.images,\n \"related_titles\": wiki_page.links,\n \"parent_id\": wiki_page.parent_id,\n \"references\": wiki_page.references,\n \"revision_id\": wiki_page.revision_id,\n \"sections\": wiki_page.sections,\n }\n if self.load_all_available_meta\n else {}\n )\n doc = Document(\n page_content=wiki_page.content[: self.doc_content_chars_max],\n metadata={\n **main_meta,\n **add_meta,\n },\n )\n return doc\n def _fetch_page(self, page: str) -> Optional[str]:\n try:\n return self.wiki_client.page(title=page, auto_suggest=False)\n except (\n self.wiki_client.exceptions.PageError,", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/wikipedia.html"}
+{"id": "0e36e0b9712b-2", "text": "except (\n self.wiki_client.exceptions.PageError,\n self.wiki_client.exceptions.DisambiguationError,\n ):\n return None\n[docs] def load(self, query: str) -> List[Document]:\n \"\"\"\n Run Wikipedia search and get the article text plus the meta information.\n See\n Returns: a list of documents.\n \"\"\"\n page_titles = self.wiki_client.search(query[:WIKIPEDIA_MAX_QUERY_LENGTH])\n docs = []\n for page_title in page_titles[: self.top_k_results]:\n if wiki_page := self._fetch_page(page_title):\n if doc := self._page_to_document(page_title, wiki_page):\n docs.append(doc)\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/wikipedia.html"}
+{"id": "0e634b1fce0c-0", "text": "Source code for langchain.utilities.bing_search\n\"\"\"Util that calls Bing Search.\nIn order to set this up, follow instructions at:\nhttps://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e\n\"\"\"\nfrom typing import Dict, List\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.utils import get_from_dict_or_env\n[docs]class BingSearchAPIWrapper(BaseModel):\n \"\"\"Wrapper for Bing Search API.\n In order to set this up, follow instructions at:\n https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e\n \"\"\"\n bing_subscription_key: str\n bing_search_url: str\n k: int = 10\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def _bing_search_results(self, search_term: str, count: int) -> List[dict]:\n headers = {\"Ocp-Apim-Subscription-Key\": self.bing_subscription_key}\n params = {\n \"q\": search_term,\n \"count\": count,\n \"textDecorations\": True,\n \"textFormat\": \"HTML\",\n }\n response = requests.get(\n self.bing_search_url, headers=headers, params=params # type: ignore\n )\n response.raise_for_status()\n search_results = response.json()\n return search_results[\"webPages\"][\"value\"]\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n bing_subscription_key = get_from_dict_or_env(", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/bing_search.html"}
+{"id": "0e634b1fce0c-1", "text": "bing_subscription_key = get_from_dict_or_env(\n values, \"bing_subscription_key\", \"BING_SUBSCRIPTION_KEY\"\n )\n values[\"bing_subscription_key\"] = bing_subscription_key\n bing_search_url = get_from_dict_or_env(\n values,\n \"bing_search_url\",\n \"BING_SEARCH_URL\",\n # default=\"https://api.bing.microsoft.com/v7.0/search\",\n )\n values[\"bing_search_url\"] = bing_search_url\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run query through BingSearch and parse result.\"\"\"\n snippets = []\n results = self._bing_search_results(query, count=self.k)\n if len(results) == 0:\n return \"No good Bing Search Result was found\"\n for result in results:\n snippets.append(result[\"snippet\"])\n return \" \".join(snippets)\n[docs] def results(self, query: str, num_results: int) -> List[Dict]:\n \"\"\"Run query through BingSearch and return metadata.\n Args:\n query: The query to search for.\n num_results: The number of results to return.\n Returns:\n A list of dictionaries with the following keys:\n snippet - The description of the result.\n title - The title of the result.\n link - The link to the result.\n \"\"\"\n metadata_results = []\n results = self._bing_search_results(query, count=num_results)\n if len(results) == 0:\n return [{\"Result\": \"No good Bing Search Result was found\"}]\n for result in results:\n metadata_result = {\n \"snippet\": result[\"snippet\"],\n \"title\": result[\"name\"],", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/bing_search.html"}
+{"id": "0e634b1fce0c-2", "text": "\"snippet\": result[\"snippet\"],\n \"title\": result[\"name\"],\n \"link\": result[\"url\"],\n }\n metadata_results.append(metadata_result)\n return metadata_results\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/bing_search.html"}
+{"id": "51c46d846653-0", "text": "Source code for langchain.utilities.openweathermap\n\"\"\"Util that calls OpenWeatherMap using PyOWM.\"\"\"\nfrom typing import Any, Dict, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.tools.base import BaseModel\nfrom langchain.utils import get_from_dict_or_env\n[docs]class OpenWeatherMapAPIWrapper(BaseModel):\n \"\"\"Wrapper for OpenWeatherMap API using PyOWM.\n Docs for using:\n 1. Go to OpenWeatherMap and sign up for an API key\n 2. Save your API KEY into OPENWEATHERMAP_API_KEY env variable\n 3. pip install pyowm\n \"\"\"\n owm: Any\n openweathermap_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key exists in environment.\"\"\"\n openweathermap_api_key = get_from_dict_or_env(\n values, \"openweathermap_api_key\", \"OPENWEATHERMAP_API_KEY\"\n )\n try:\n import pyowm\n except ImportError:\n raise ImportError(\n \"pyowm is not installed. Please install it with `pip install pyowm`\"\n )\n owm = pyowm.OWM(openweathermap_api_key)\n values[\"owm\"] = owm\n return values\n def _format_weather_info(self, location: str, w: Any) -> str:\n detailed_status = w.detailed_status\n wind = w.wind()\n humidity = w.humidity\n temperature = w.temperature(\"celsius\")\n rain = w.rain\n heat_index = w.heat_index\n clouds = w.clouds", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/openweathermap.html"}
+{"id": "51c46d846653-1", "text": "heat_index = w.heat_index\n clouds = w.clouds\n return (\n f\"In {location}, the current weather is as follows:\\n\"\n f\"Detailed status: {detailed_status}\\n\"\n f\"Wind speed: {wind['speed']} m/s, direction: {wind['deg']}\u00b0\\n\"\n f\"Humidity: {humidity}%\\n\"\n f\"Temperature: \\n\"\n f\" - Current: {temperature['temp']}\u00b0C\\n\"\n f\" - High: {temperature['temp_max']}\u00b0C\\n\"\n f\" - Low: {temperature['temp_min']}\u00b0C\\n\"\n f\" - Feels like: {temperature['feels_like']}\u00b0C\\n\"\n f\"Rain: {rain}\\n\"\n f\"Heat index: {heat_index}\\n\"\n f\"Cloud cover: {clouds}%\"\n )\n[docs] def run(self, location: str) -> str:\n \"\"\"Get the current weather information for a specified location.\"\"\"\n mgr = self.owm.weather_manager()\n observation = mgr.weather_at_place(location)\n w = observation.weather\n return self._format_weather_info(location, w)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/openweathermap.html"}
+{"id": "d09db68961b4-0", "text": "Source code for langchain.utilities.serpapi\n\"\"\"Chain that calls SerpAPI.\nHeavily borrowed from https://github.com/ofirpress/self-ask\n\"\"\"\nimport os\nimport sys\nfrom typing import Any, Dict, Optional, Tuple\nimport aiohttp\nfrom pydantic import BaseModel, Extra, Field, root_validator\nfrom langchain.utils import get_from_dict_or_env\nclass HiddenPrints:\n \"\"\"Context manager to hide prints.\"\"\"\n def __enter__(self) -> None:\n \"\"\"Open file to pipe stdout to.\"\"\"\n self._original_stdout = sys.stdout\n sys.stdout = open(os.devnull, \"w\")\n def __exit__(self, *_: Any) -> None:\n \"\"\"Close file that stdout was piped to.\"\"\"\n sys.stdout.close()\n sys.stdout = self._original_stdout\n[docs]class SerpAPIWrapper(BaseModel):\n \"\"\"Wrapper around SerpAPI.\n To use, you should have the ``google-search-results`` python package installed,\n and the environment variable ``SERPAPI_API_KEY`` set with your API key, or pass\n `serpapi_api_key` as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain import SerpAPIWrapper\n serpapi = SerpAPIWrapper()\n \"\"\"\n search_engine: Any #: :meta private:\n params: dict = Field(\n default={\n \"engine\": \"google\",\n \"google_domain\": \"google.com\",\n \"gl\": \"us\",\n \"hl\": \"en\",\n }\n )\n serpapi_api_key: Optional[str] = None\n aiosession: Optional[aiohttp.ClientSession] = None\n class Config:", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/serpapi.html"}
+{"id": "d09db68961b4-1", "text": "aiosession: Optional[aiohttp.ClientSession] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n serpapi_api_key = get_from_dict_or_env(\n values, \"serpapi_api_key\", \"SERPAPI_API_KEY\"\n )\n values[\"serpapi_api_key\"] = serpapi_api_key\n try:\n from serpapi import GoogleSearch\n values[\"search_engine\"] = GoogleSearch\n except ImportError:\n raise ValueError(\n \"Could not import serpapi python package. \"\n \"Please install it with `pip install google-search-results`.\"\n )\n return values\n[docs] async def arun(self, query: str, **kwargs: Any) -> str:\n \"\"\"Run query through SerpAPI and parse result async.\"\"\"\n return self._process_response(await self.aresults(query))\n[docs] def run(self, query: str, **kwargs: Any) -> str:\n \"\"\"Run query through SerpAPI and parse result.\"\"\"\n return self._process_response(self.results(query))\n[docs] def results(self, query: str) -> dict:\n \"\"\"Run query through SerpAPI and return the raw result.\"\"\"\n params = self.get_params(query)\n with HiddenPrints():\n search = self.search_engine(params)\n res = search.get_dict()\n return res\n[docs] async def aresults(self, query: str) -> dict:\n \"\"\"Use aiohttp to run query through SerpAPI and return the results async.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/serpapi.html"}
+{"id": "d09db68961b4-2", "text": "\"\"\"Use aiohttp to run query through SerpAPI and return the results async.\"\"\"\n def construct_url_and_params() -> Tuple[str, Dict[str, str]]:\n params = self.get_params(query)\n params[\"source\"] = \"python\"\n if self.serpapi_api_key:\n params[\"serp_api_key\"] = self.serpapi_api_key\n params[\"output\"] = \"json\"\n url = \"https://serpapi.com/search\"\n return url, params\n url, params = construct_url_and_params()\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:\n async with session.get(url, params=params) as response:\n res = await response.json()\n else:\n async with self.aiosession.get(url, params=params) as response:\n res = await response.json()\n return res\n[docs] def get_params(self, query: str) -> Dict[str, str]:\n \"\"\"Get parameters for SerpAPI.\"\"\"\n _params = {\n \"api_key\": self.serpapi_api_key,\n \"q\": query,\n }\n params = {**self.params, **_params}\n return params\n @staticmethod\n def _process_response(res: dict) -> str:\n \"\"\"Process response from SerpAPI.\"\"\"\n if \"error\" in res.keys():\n raise ValueError(f\"Got error from SerpAPI: {res['error']}\")\n if \"answer_box\" in res.keys() and \"answer\" in res[\"answer_box\"].keys():\n toret = res[\"answer_box\"][\"answer\"]\n elif \"answer_box\" in res.keys() and \"snippet\" in res[\"answer_box\"].keys():\n toret = res[\"answer_box\"][\"snippet\"]", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/serpapi.html"}
+{"id": "d09db68961b4-3", "text": "toret = res[\"answer_box\"][\"snippet\"]\n elif (\n \"answer_box\" in res.keys()\n and \"snippet_highlighted_words\" in res[\"answer_box\"].keys()\n ):\n toret = res[\"answer_box\"][\"snippet_highlighted_words\"][0]\n elif (\n \"sports_results\" in res.keys()\n and \"game_spotlight\" in res[\"sports_results\"].keys()\n ):\n toret = res[\"sports_results\"][\"game_spotlight\"]\n elif (\n \"shopping_results\" in res.keys()\n and \"title\" in res[\"shopping_results\"][0].keys()\n ):\n toret = res[\"shopping_results\"][:3]\n elif (\n \"knowledge_graph\" in res.keys()\n and \"description\" in res[\"knowledge_graph\"].keys()\n ):\n toret = res[\"knowledge_graph\"][\"description\"]\n elif \"snippet\" in res[\"organic_results\"][0].keys():\n toret = res[\"organic_results\"][0][\"snippet\"]\n elif \"link\" in res[\"organic_results\"][0].keys():\n toret = res[\"organic_results\"][0][\"link\"]\n else:\n toret = \"No good search result found\"\n return toret\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/serpapi.html"}
+{"id": "a5497978e5fe-0", "text": "Source code for langchain.utilities.bash\n\"\"\"Wrapper around subprocess to run commands.\"\"\"\nfrom __future__ import annotations\nimport platform\nimport re\nimport subprocess\nfrom typing import TYPE_CHECKING, List, Union\nfrom uuid import uuid4\nif TYPE_CHECKING:\n import pexpect\ndef _lazy_import_pexpect() -> pexpect:\n \"\"\"Import pexpect only when needed.\"\"\"\n if platform.system() == \"Windows\":\n raise ValueError(\"Persistent bash processes are not yet supported on Windows.\")\n try:\n import pexpect\n except ImportError:\n raise ImportError(\n \"pexpect required for persistent bash processes.\"\n \" To install, run `pip install pexpect`.\"\n )\n return pexpect\n[docs]class BashProcess:\n \"\"\"Executes bash commands and returns the output.\"\"\"\n def __init__(\n self,\n strip_newlines: bool = False,\n return_err_output: bool = False,\n persistent: bool = False,\n ):\n \"\"\"Initialize with stripping newlines.\"\"\"\n self.strip_newlines = strip_newlines\n self.return_err_output = return_err_output\n self.prompt = \"\"\n self.process = None\n if persistent:\n self.prompt = str(uuid4())\n self.process = self._initialize_persistent_process(self.prompt)\n @staticmethod\n def _initialize_persistent_process(prompt: str) -> pexpect.spawn:\n # Start bash in a clean environment\n # Doesn't work on windows\n pexpect = _lazy_import_pexpect()\n process = pexpect.spawn(\n \"env\", [\"-i\", \"bash\", \"--norc\", \"--noprofile\"], encoding=\"utf-8\"\n )\n # Set the custom prompt\n process.sendline(\"PS1=\" + prompt)", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/bash.html"}
+{"id": "a5497978e5fe-1", "text": "# Set the custom prompt\n process.sendline(\"PS1=\" + prompt)\n process.expect_exact(prompt, timeout=10)\n return process\n[docs] def run(self, commands: Union[str, List[str]]) -> str:\n \"\"\"Run commands and return final output.\"\"\"\n if isinstance(commands, str):\n commands = [commands]\n commands = \";\".join(commands)\n if self.process is not None:\n return self._run_persistent(\n commands,\n )\n else:\n return self._run(commands)\n def _run(self, command: str) -> str:\n \"\"\"Run commands and return final output.\"\"\"\n try:\n output = subprocess.run(\n command,\n shell=True,\n check=True,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT,\n ).stdout.decode()\n except subprocess.CalledProcessError as error:\n if self.return_err_output:\n return error.stdout.decode()\n return str(error)\n if self.strip_newlines:\n output = output.strip()\n return output\n[docs] def process_output(self, output: str, command: str) -> str:\n # Remove the command from the output using a regular expression\n pattern = re.escape(command) + r\"\\s*\\n\"\n output = re.sub(pattern, \"\", output, count=1)\n return output.strip()\n def _run_persistent(self, command: str) -> str:\n \"\"\"Run commands and return final output.\"\"\"\n pexpect = _lazy_import_pexpect()\n if self.process is None:\n raise ValueError(\"Process not initialized\")\n self.process.sendline(command)\n # Clear the output with an empty string\n self.process.expect(self.prompt, timeout=10)", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/bash.html"}
+{"id": "a5497978e5fe-2", "text": "self.process.expect(self.prompt, timeout=10)\n self.process.sendline(\"\")\n try:\n self.process.expect([self.prompt, pexpect.EOF], timeout=10)\n except pexpect.TIMEOUT:\n return f\"Timeout error while executing command {command}\"\n if self.process.after == pexpect.EOF:\n return f\"Exited with error status: {self.process.exitstatus}\"\n output = self.process.before\n output = self.process_output(output, command)\n if self.strip_newlines:\n return output.strip()\n return output\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/bash.html"}
+{"id": "f06e69224257-0", "text": "Source code for langchain.utilities.google_search\n\"\"\"Util that calls Google Search.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.utils import get_from_dict_or_env\n[docs]class GoogleSearchAPIWrapper(BaseModel):\n \"\"\"Wrapper for Google Search API.\n Adapted from: Instructions adapted from https://stackoverflow.com/questions/\n 37083058/\n programmatically-searching-google-in-python-using-custom-search\n TODO: DOCS for using it\n 1. Install google-api-python-client\n - If you don't already have a Google account, sign up.\n - If you have never created a Google APIs Console project,\n read the Managing Projects page and create a project in the Google API Console.\n - Install the library using pip install google-api-python-client\n The current version of the library is 2.70.0 at this time\n 2. To create an API key:\n - Navigate to the APIs & Services\u2192Credentials panel in Cloud Console.\n - Select Create credentials, then select API key from the drop-down menu.\n - The API key created dialog box displays your newly created key.\n - You now have an API_KEY\n 3. Setup Custom Search Engine so you can search the entire web\n - Create a custom search engine in this link.\n - In Sites to search, add any valid URL (i.e. www.stackoverflow.com).\n - That\u2019s all you have to fill up, the rest doesn\u2019t matter.\n In the left-side menu, click Edit search engine \u2192 {your search engine name}\n \u2192 Setup Set Search the entire web to ON. Remove the URL you added from\n the list of Sites to search.\n - Under Search engine ID you\u2019ll find the search-engine-ID.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/google_search.html"}
+{"id": "f06e69224257-1", "text": "- Under Search engine ID you\u2019ll find the search-engine-ID.\n 4. Enable the Custom Search API\n - Navigate to the APIs & Services\u2192Dashboard panel in Cloud Console.\n - Click Enable APIs and Services.\n - Search for Custom Search API and click on it.\n - Click Enable.\n URL for it: https://console.cloud.google.com/apis/library/customsearch.googleapis\n .com\n \"\"\"\n search_engine: Any #: :meta private:\n google_api_key: Optional[str] = None\n google_cse_id: Optional[str] = None\n k: int = 10\n siterestrict: bool = False\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def _google_search_results(self, search_term: str, **kwargs: Any) -> List[dict]:\n cse = self.search_engine.cse()\n if self.siterestrict:\n cse = cse.siterestrict()\n res = cse.list(q=search_term, cx=self.google_cse_id, **kwargs).execute()\n return res.get(\"items\", [])\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n google_api_key = get_from_dict_or_env(\n values, \"google_api_key\", \"GOOGLE_API_KEY\"\n )\n values[\"google_api_key\"] = google_api_key\n google_cse_id = get_from_dict_or_env(values, \"google_cse_id\", \"GOOGLE_CSE_ID\")\n values[\"google_cse_id\"] = google_cse_id\n try:\n from googleapiclient.discovery import build\n except ImportError:\n raise ImportError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/google_search.html"}
+{"id": "f06e69224257-2", "text": "except ImportError:\n raise ImportError(\n \"google-api-python-client is not installed. \"\n \"Please install it with `pip install google-api-python-client`\"\n )\n service = build(\"customsearch\", \"v1\", developerKey=google_api_key)\n values[\"search_engine\"] = service\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run query through GoogleSearch and parse result.\"\"\"\n snippets = []\n results = self._google_search_results(query, num=self.k)\n if len(results) == 0:\n return \"No good Google Search Result was found\"\n for result in results:\n if \"snippet\" in result:\n snippets.append(result[\"snippet\"])\n return \" \".join(snippets)\n[docs] def results(self, query: str, num_results: int) -> List[Dict]:\n \"\"\"Run query through GoogleSearch and return metadata.\n Args:\n query: The query to search for.\n num_results: The number of results to return.\n Returns:\n A list of dictionaries with the following keys:\n snippet - The description of the result.\n title - The title of the result.\n link - The link to the result.\n \"\"\"\n metadata_results = []\n results = self._google_search_results(query, num=num_results)\n if len(results) == 0:\n return [{\"Result\": \"No good Google Search Result was found\"}]\n for result in results:\n metadata_result = {\n \"title\": result[\"title\"],\n \"link\": result[\"link\"],\n }\n if \"snippet\" in result:\n metadata_result[\"snippet\"] = result[\"snippet\"]\n metadata_results.append(metadata_result)", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/google_search.html"}
+{"id": "f06e69224257-3", "text": "metadata_result[\"snippet\"] = result[\"snippet\"]\n metadata_results.append(metadata_result)\n return metadata_results\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/google_search.html"}
+{"id": "9431822cf745-0", "text": "Source code for langchain.utilities.twilio\n\"\"\"Util that calls Twilio.\"\"\"\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.utils import get_from_dict_or_env\n[docs]class TwilioAPIWrapper(BaseModel):\n \"\"\"Sms Client using Twilio.\n To use, you should have the ``twilio`` python package installed,\n and the environment variables ``TWILIO_ACCOUNT_SID``, ``TWILIO_AUTH_TOKEN``, and\n ``TWILIO_FROM_NUMBER``, or pass `account_sid`, `auth_token`, and `from_number` as\n named parameters to the constructor.\n Example:\n .. code-block:: python\n from langchain.utilities.twilio import TwilioAPIWrapper\n twilio = TwilioAPIWrapper(\n account_sid=\"ACxxx\",\n auth_token=\"xxx\",\n from_number=\"+10123456789\"\n )\n twilio.run('test', '+12484345508')\n \"\"\"\n client: Any #: :meta private:\n account_sid: Optional[str] = None\n \"\"\"Twilio account string identifier.\"\"\"\n auth_token: Optional[str] = None\n \"\"\"Twilio auth token.\"\"\"\n from_number: Optional[str] = None\n \"\"\"A Twilio phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164) \n format, an \n [alphanumeric sender ID](https://www.twilio.com/docs/sms/send-messages#use-an-alphanumeric-sender-id), \n or a [Channel Endpoint address](https://www.twilio.com/docs/sms/channels#channel-addresses) \n that is enabled for the type of message you want to send. Phone numbers or", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/twilio.html"}
+{"id": "9431822cf745-1", "text": "that is enabled for the type of message you want to send. Phone numbers or \n [short codes](https://www.twilio.com/docs/sms/api/short-code) purchased from \n Twilio also work here. You cannot, for example, spoof messages from a private \n cell phone number. If you are using `messaging_service_sid`, this parameter \n must be empty.\n \"\"\" # noqa: E501\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = False\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n try:\n from twilio.rest import Client\n except ImportError:\n raise ImportError(\n \"Could not import twilio python package. \"\n \"Please install it with `pip install twilio`.\"\n )\n account_sid = get_from_dict_or_env(values, \"account_sid\", \"TWILIO_ACCOUNT_SID\")\n auth_token = get_from_dict_or_env(values, \"auth_token\", \"TWILIO_AUTH_TOKEN\")\n values[\"from_number\"] = get_from_dict_or_env(\n values, \"from_number\", \"TWILIO_FROM_NUMBER\"\n )\n values[\"client\"] = Client(account_sid, auth_token)\n return values\n[docs] def run(self, body: str, to: str) -> str:\n \"\"\"Run body through Twilio and respond with message sid.\n Args:\n body: The text of the message you want to send. Can be up to 1,600\n characters in length.\n to: The destination phone number in", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/twilio.html"}
+{"id": "9431822cf745-2", "text": "characters in length.\n to: The destination phone number in\n [E.164](https://www.twilio.com/docs/glossary/what-e164) format for\n SMS/MMS or\n [Channel user address](https://www.twilio.com/docs/sms/channels#channel-addresses)\n for other 3rd-party channels.\n \"\"\" # noqa: E501\n message = self.client.messages.create(to, from_=self.from_number, body=body)\n return message.sid\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/twilio.html"}
+{"id": "f0f4d1276b5d-0", "text": "Source code for langchain.utilities.awslambda\n\"\"\"Util that calls Lambda.\"\"\"\nimport json\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel, Extra, root_validator\n[docs]class LambdaWrapper(BaseModel):\n \"\"\"Wrapper for AWS Lambda SDK.\n Docs for using:\n 1. pip install boto3\n 2. Create a lambda function using the AWS Console or CLI\n 3. Run `aws configure` and enter your AWS credentials\n \"\"\"\n lambda_client: Any #: :meta private:\n function_name: Optional[str] = None\n awslambda_tool_name: Optional[str] = None\n awslambda_tool_description: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that python package exists in environment.\"\"\"\n try:\n import boto3\n except ImportError:\n raise ImportError(\n \"boto3 is not installed. Please install it with `pip install boto3`\"\n )\n values[\"lambda_client\"] = boto3.client(\"lambda\")\n values[\"function_name\"] = values[\"function_name\"]\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Invoke Lambda function and parse result.\"\"\"\n res = self.lambda_client.invoke(\n FunctionName=self.function_name,\n InvocationType=\"RequestResponse\",\n Payload=json.dumps({\"body\": query}),\n )\n try:\n payload_stream = res[\"Payload\"]\n payload_string = payload_stream.read().decode(\"utf-8\")\n answer = json.loads(payload_string)[\"body\"]\n except StopIteration:", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/awslambda.html"}
+{"id": "f0f4d1276b5d-1", "text": "answer = json.loads(payload_string)[\"body\"]\n except StopIteration:\n return \"Failed to parse response from Lambda\"\n if answer is None or answer == \"\":\n # We don't want to return the assumption alone if answer is empty\n return \"Request failed.\"\n else:\n return f\"Result: {answer}\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/awslambda.html"}
+{"id": "ea93c34850dc-0", "text": "Source code for langchain.utilities.google_places_api\n\"\"\"Chain that calls Google Places API.\n\"\"\"\nimport logging\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.utils import get_from_dict_or_env\n[docs]class GooglePlacesAPIWrapper(BaseModel):\n \"\"\"Wrapper around Google Places API.\n To use, you should have the ``googlemaps`` python package installed,\n **an API key for the google maps platform**,\n and the enviroment variable ''GPLACES_API_KEY''\n set with your API key , or pass 'gplaces_api_key'\n as a named parameter to the constructor.\n By default, this will return the all the results on the input query.\n You can use the top_k_results argument to limit the number of results.\n Example:\n .. code-block:: python\n from langchain import GooglePlacesAPIWrapper\n gplaceapi = GooglePlacesAPIWrapper()\n \"\"\"\n gplaces_api_key: Optional[str] = None\n google_map_client: Any #: :meta private:\n top_k_results: Optional[int] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key is in your environment variable.\"\"\"\n gplaces_api_key = get_from_dict_or_env(\n values, \"gplaces_api_key\", \"GPLACES_API_KEY\"\n )\n values[\"gplaces_api_key\"] = gplaces_api_key\n try:\n import googlemaps\n values[\"google_map_client\"] = googlemaps.Client(gplaces_api_key)\n except ImportError:\n raise ImportError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/google_places_api.html"}
+{"id": "ea93c34850dc-1", "text": "except ImportError:\n raise ImportError(\n \"Could not import googlemaps python package. \"\n \"Please install it with `pip install googlemaps`.\"\n )\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run Places search and get k number of places that exists that match.\"\"\"\n search_results = self.google_map_client.places(query)[\"results\"]\n num_to_return = len(search_results)\n places = []\n if num_to_return == 0:\n return \"Google Places did not find any places that match the description\"\n num_to_return = (\n num_to_return\n if self.top_k_results is None\n else min(num_to_return, self.top_k_results)\n )\n for i in range(num_to_return):\n result = search_results[i]\n details = self.fetch_place_details(result[\"place_id\"])\n if details is not None:\n places.append(details)\n return \"\\n\".join([f\"{i+1}. {item}\" for i, item in enumerate(places)])\n[docs] def fetch_place_details(self, place_id: str) -> Optional[str]:\n try:\n place_details = self.google_map_client.place(place_id)\n formatted_details = self.format_place_details(place_details)\n return formatted_details\n except Exception as e:\n logging.error(f\"An Error occurred while fetching place details: {e}\")\n return None\n[docs] def format_place_details(self, place_details: Dict[str, Any]) -> Optional[str]:\n try:\n name = place_details.get(\"result\", {}).get(\"name\", \"Unkown\")\n address = place_details.get(\"result\", {}).get(\n \"formatted_address\", \"Unknown\"\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/google_places_api.html"}
+{"id": "ea93c34850dc-2", "text": "\"formatted_address\", \"Unknown\"\n )\n phone_number = place_details.get(\"result\", {}).get(\n \"formatted_phone_number\", \"Unknown\"\n )\n website = place_details.get(\"result\", {}).get(\"website\", \"Unknown\")\n formatted_details = (\n f\"{name}\\nAddress: {address}\\n\"\n f\"Phone: {phone_number}\\nWebsite: {website}\\n\\n\"\n )\n return formatted_details\n except Exception as e:\n logging.error(f\"An error occurred while formatting place details: {e}\")\n return None\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/google_places_api.html"}
+{"id": "46a0f871d43f-0", "text": "Source code for langchain.utilities.python\nimport sys\nfrom io import StringIO\nfrom typing import Dict, Optional\nfrom pydantic import BaseModel, Field\n[docs]class PythonREPL(BaseModel):\n \"\"\"Simulates a standalone Python REPL.\"\"\"\n globals: Optional[Dict] = Field(default_factory=dict, alias=\"_globals\")\n locals: Optional[Dict] = Field(default_factory=dict, alias=\"_locals\")\n[docs] def run(self, command: str) -> str:\n \"\"\"Run command with own globals/locals and returns anything printed.\"\"\"\n old_stdout = sys.stdout\n sys.stdout = mystdout = StringIO()\n try:\n exec(command, self.globals, self.locals)\n sys.stdout = old_stdout\n output = mystdout.getvalue()\n except Exception as e:\n sys.stdout = old_stdout\n output = repr(e)\n return output\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/python.html"}
+{"id": "c9b645d8954f-0", "text": "Source code for langchain.utilities.google_serper\n\"\"\"Util that calls Google Search using the Serper.dev API.\"\"\"\nfrom typing import Any, Dict, List, Optional\nimport aiohttp\nimport requests\nfrom pydantic.class_validators import root_validator\nfrom pydantic.main import BaseModel\nfrom typing_extensions import Literal\nfrom langchain.utils import get_from_dict_or_env\n[docs]class GoogleSerperAPIWrapper(BaseModel):\n \"\"\"Wrapper around the Serper.dev Google Search API.\n You can create a free API key at https://serper.dev.\n To use, you should have the environment variable ``SERPER_API_KEY``\n set with your API key, or pass `serper_api_key` as a named parameter\n to the constructor.\n Example:\n .. code-block:: python\n from langchain import GoogleSerperAPIWrapper\n google_serper = GoogleSerperAPIWrapper()\n \"\"\"\n k: int = 10\n gl: str = \"us\"\n hl: str = \"en\"\n # \"places\" and \"images\" is available from Serper but not implemented in the\n # parser of run(). They can be used in results()\n type: Literal[\"news\", \"search\", \"places\", \"images\"] = \"search\"\n result_key_for_type = {\n \"news\": \"news\",\n \"places\": \"places\",\n \"images\": \"images\",\n \"search\": \"organic\",\n }\n tbs: Optional[str] = None\n serper_api_key: Optional[str] = None\n aiosession: Optional[aiohttp.ClientSession] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n @root_validator()", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html"}
+{"id": "c9b645d8954f-1", "text": "arbitrary_types_allowed = True\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key exists in environment.\"\"\"\n serper_api_key = get_from_dict_or_env(\n values, \"serper_api_key\", \"SERPER_API_KEY\"\n )\n values[\"serper_api_key\"] = serper_api_key\n return values\n[docs] def results(self, query: str, **kwargs: Any) -> Dict:\n \"\"\"Run query through GoogleSearch.\"\"\"\n return self._google_serper_api_results(\n query,\n gl=self.gl,\n hl=self.hl,\n num=self.k,\n tbs=self.tbs,\n search_type=self.type,\n **kwargs,\n )\n[docs] def run(self, query: str, **kwargs: Any) -> str:\n \"\"\"Run query through GoogleSearch and parse result.\"\"\"\n results = self._google_serper_api_results(\n query,\n gl=self.gl,\n hl=self.hl,\n num=self.k,\n tbs=self.tbs,\n search_type=self.type,\n **kwargs,\n )\n return self._parse_results(results)\n[docs] async def aresults(self, query: str, **kwargs: Any) -> Dict:\n \"\"\"Run query through GoogleSearch.\"\"\"\n results = await self._async_google_serper_search_results(\n query,\n gl=self.gl,\n hl=self.hl,\n num=self.k,\n search_type=self.type,\n tbs=self.tbs,\n **kwargs,\n )\n return results\n[docs] async def arun(self, query: str, **kwargs: Any) -> str:", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html"}
+{"id": "c9b645d8954f-2", "text": "\"\"\"Run query through GoogleSearch and parse result async.\"\"\"\n results = await self._async_google_serper_search_results(\n query,\n gl=self.gl,\n hl=self.hl,\n num=self.k,\n search_type=self.type,\n tbs=self.tbs,\n **kwargs,\n )\n return self._parse_results(results)\n def _parse_snippets(self, results: dict) -> List[str]:\n snippets = []\n if results.get(\"answerBox\"):\n answer_box = results.get(\"answerBox\", {})\n if answer_box.get(\"answer\"):\n return [answer_box.get(\"answer\")]\n elif answer_box.get(\"snippet\"):\n return [answer_box.get(\"snippet\").replace(\"\\n\", \" \")]\n elif answer_box.get(\"snippetHighlighted\"):\n return answer_box.get(\"snippetHighlighted\")\n if results.get(\"knowledgeGraph\"):\n kg = results.get(\"knowledgeGraph\", {})\n title = kg.get(\"title\")\n entity_type = kg.get(\"type\")\n if entity_type:\n snippets.append(f\"{title}: {entity_type}.\")\n description = kg.get(\"description\")\n if description:\n snippets.append(description)\n for attribute, value in kg.get(\"attributes\", {}).items():\n snippets.append(f\"{title} {attribute}: {value}.\")\n for result in results[self.result_key_for_type[self.type]][: self.k]:\n if \"snippet\" in result:\n snippets.append(result[\"snippet\"])\n for attribute, value in result.get(\"attributes\", {}).items():\n snippets.append(f\"{attribute}: {value}.\")\n if len(snippets) == 0:\n return [\"No good Google Search Result was found\"]\n return snippets", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html"}
+{"id": "c9b645d8954f-3", "text": "return [\"No good Google Search Result was found\"]\n return snippets\n def _parse_results(self, results: dict) -> str:\n return \" \".join(self._parse_snippets(results))\n def _google_serper_api_results(\n self, search_term: str, search_type: str = \"search\", **kwargs: Any\n ) -> dict:\n headers = {\n \"X-API-KEY\": self.serper_api_key or \"\",\n \"Content-Type\": \"application/json\",\n }\n params = {\n \"q\": search_term,\n **{key: value for key, value in kwargs.items() if value is not None},\n }\n response = requests.post(\n f\"https://google.serper.dev/{search_type}\", headers=headers, params=params\n )\n response.raise_for_status()\n search_results = response.json()\n return search_results\n async def _async_google_serper_search_results(\n self, search_term: str, search_type: str = \"search\", **kwargs: Any\n ) -> dict:\n headers = {\n \"X-API-KEY\": self.serper_api_key or \"\",\n \"Content-Type\": \"application/json\",\n }\n url = f\"https://google.serper.dev/{search_type}\"\n params = {\n \"q\": search_term,\n **{key: value for key, value in kwargs.items() if value is not None},\n }\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:\n async with session.post(\n url, params=params, headers=headers, raise_for_status=False\n ) as response:\n search_results = await response.json()\n else:\n async with self.aiosession.post(", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html"}
+{"id": "c9b645d8954f-4", "text": "else:\n async with self.aiosession.post(\n url, params=params, headers=headers, raise_for_status=True\n ) as response:\n search_results = await response.json()\n return search_results\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html"}
+{"id": "c3855981a20a-0", "text": "Source code for langchain.utilities.duckduckgo_search\n\"\"\"Util that calls DuckDuckGo Search.\nNo setup required. Free.\nhttps://pypi.org/project/duckduckgo-search/\n\"\"\"\nfrom typing import Dict, List, Optional\nfrom pydantic import BaseModel, Extra\nfrom pydantic.class_validators import root_validator\n[docs]class DuckDuckGoSearchAPIWrapper(BaseModel):\n \"\"\"Wrapper for DuckDuckGo Search API.\n Free and does not require any setup\n \"\"\"\n k: int = 10\n region: Optional[str] = \"wt-wt\"\n safesearch: str = \"moderate\"\n time: Optional[str] = \"y\"\n max_results: int = 5\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that python package exists in environment.\"\"\"\n try:\n from duckduckgo_search import ddg # noqa: F401\n except ImportError:\n raise ValueError(\n \"Could not import duckduckgo-search python package. \"\n \"Please install it with `pip install duckduckgo-search`.\"\n )\n return values\n[docs] def get_snippets(self, query: str) -> List[str]:\n \"\"\"Run query through DuckDuckGo and return concatenated results.\"\"\"\n from duckduckgo_search import ddg\n results = ddg(\n query,\n region=self.region,\n safesearch=self.safesearch,\n time=self.time,\n max_results=self.max_results,\n )\n if results is None or len(results) == 0:", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/duckduckgo_search.html"}
+{"id": "c3855981a20a-1", "text": ")\n if results is None or len(results) == 0:\n return [\"No good DuckDuckGo Search Result was found\"]\n snippets = [result[\"body\"] for result in results]\n return snippets\n[docs] def run(self, query: str) -> str:\n snippets = self.get_snippets(query)\n return \" \".join(snippets)\n[docs] def results(self, query: str, num_results: int) -> List[Dict[str, str]]:\n \"\"\"Run query through DuckDuckGo and return metadata.\n Args:\n query: The query to search for.\n num_results: The number of results to return.\n Returns:\n A list of dictionaries with the following keys:\n snippet - The description of the result.\n title - The title of the result.\n link - The link to the result.\n \"\"\"\n from duckduckgo_search import ddg\n results = ddg(\n query,\n region=self.region,\n safesearch=self.safesearch,\n time=self.time,\n max_results=num_results,\n )\n if results is None or len(results) == 0:\n return [{\"Result\": \"No good DuckDuckGo Search Result was found\"}]\n def to_metadata(result: Dict) -> Dict[str, str]:\n return {\n \"snippet\": result[\"body\"],\n \"title\": result[\"title\"],\n \"link\": result[\"href\"],\n }\n return [to_metadata(result) for result in results]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/duckduckgo_search.html"}
+{"id": "749e78be2c5e-0", "text": "Source code for langchain.utilities.arxiv\n\"\"\"Util that calls Arxiv.\"\"\"\nimport logging\nimport os\nfrom typing import Any, Dict, List\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.schema import Document\nlogger = logging.getLogger(__name__)\n[docs]class ArxivAPIWrapper(BaseModel):\n \"\"\"Wrapper around ArxivAPI.\n To use, you should have the ``arxiv`` python package installed.\n https://lukasschwab.me/arxiv.py/index.html\n This wrapper will use the Arxiv API to conduct searches and\n fetch document summaries. By default, it will return the document summaries\n of the top-k results.\n It limits the Document content by doc_content_chars_max.\n Set doc_content_chars_max=None if you don't want to limit the content size.\n Parameters:\n top_k_results: number of the top-scored document used for the arxiv tool\n ARXIV_MAX_QUERY_LENGTH: the cut limit on the query used for the arxiv tool.\n load_max_docs: a limit to the number of loaded documents\n load_all_available_meta:\n if True: the `metadata` of the loaded Documents gets all available meta info\n (see https://lukasschwab.me/arxiv.py/index.html#Result),\n if False: the `metadata` gets only the most informative fields.\n \"\"\"\n arxiv_search: Any #: :meta private:\n arxiv_exceptions: Any # :meta private:\n top_k_results: int = 3\n ARXIV_MAX_QUERY_LENGTH = 300\n load_max_docs: int = 100\n load_all_available_meta: bool = False\n doc_content_chars_max: int = 4000\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/arxiv.html"}
+{"id": "749e78be2c5e-1", "text": "class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in environment.\"\"\"\n try:\n import arxiv\n values[\"arxiv_search\"] = arxiv.Search\n values[\"arxiv_exceptions\"] = (\n arxiv.ArxivError,\n arxiv.UnexpectedEmptyPageError,\n arxiv.HTTPError,\n )\n values[\"arxiv_result\"] = arxiv.Result\n except ImportError:\n raise ImportError(\n \"Could not import arxiv python package. \"\n \"Please install it with `pip install arxiv`.\"\n )\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"\n Run Arxiv search and get the article meta information.\n See https://lukasschwab.me/arxiv.py/index.html#Search\n See https://lukasschwab.me/arxiv.py/index.html#Result\n It uses only the most informative fields of article meta information.\n \"\"\"\n try:\n results = self.arxiv_search( # type: ignore\n query[: self.ARXIV_MAX_QUERY_LENGTH], max_results=self.top_k_results\n ).results()\n except self.arxiv_exceptions as ex:\n return f\"Arxiv exception: {ex}\"\n docs = [\n f\"Published: {result.updated.date()}\\nTitle: {result.title}\\n\"\n f\"Authors: {', '.join(a.name for a in result.authors)}\\n\"\n f\"Summary: {result.summary}\"\n for result in results\n ]\n if docs:", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/arxiv.html"}
+{"id": "749e78be2c5e-2", "text": "for result in results\n ]\n if docs:\n return \"\\n\\n\".join(docs)[: self.doc_content_chars_max]\n else:\n return \"No good Arxiv Result was found\"\n[docs] def load(self, query: str) -> List[Document]:\n \"\"\"\n Run Arxiv search and get the article texts plus the article meta information.\n See https://lukasschwab.me/arxiv.py/index.html#Search\n Returns: a list of documents with the document.page_content in text format\n \"\"\"\n try:\n import fitz\n except ImportError:\n raise ImportError(\n \"PyMuPDF package not found, please install it with \"\n \"`pip install pymupdf`\"\n )\n try:\n results = self.arxiv_search( # type: ignore\n query[: self.ARXIV_MAX_QUERY_LENGTH], max_results=self.load_max_docs\n ).results()\n except self.arxiv_exceptions as ex:\n logger.debug(\"Error on arxiv: %s\", ex)\n return []\n docs: List[Document] = []\n for result in results:\n try:\n doc_file_name: str = result.download_pdf()\n with fitz.open(doc_file_name) as doc_file:\n text: str = \"\".join(page.get_text() for page in doc_file)\n except FileNotFoundError as f_ex:\n logger.debug(f_ex)\n continue\n if self.load_all_available_meta:\n extra_metadata = {\n \"entry_id\": result.entry_id,\n \"published_first_time\": str(result.published.date()),\n \"comment\": result.comment,\n \"journal_ref\": result.journal_ref,\n \"doi\": result.doi,\n \"primary_category\": result.primary_category,", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/arxiv.html"}
+{"id": "749e78be2c5e-3", "text": "\"doi\": result.doi,\n \"primary_category\": result.primary_category,\n \"categories\": result.categories,\n \"links\": [link.href for link in result.links],\n }\n else:\n extra_metadata = {}\n metadata = {\n \"Published\": str(result.updated.date()),\n \"Title\": result.title,\n \"Authors\": \", \".join(a.name for a in result.authors),\n \"Summary\": result.summary,\n **extra_metadata,\n }\n doc = Document(\n page_content=text[: self.doc_content_chars_max], metadata=metadata\n )\n docs.append(doc)\n os.remove(doc_file_name)\n return docs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/utilities/arxiv.html"}
+{"id": "16f172f85368-0", "text": "Source code for langchain.docstore.in_memory\n\"\"\"Simple in memory docstore in the form of a dict.\"\"\"\nfrom typing import Dict, Union\nfrom langchain.docstore.base import AddableMixin, Docstore\nfrom langchain.docstore.document import Document\n[docs]class InMemoryDocstore(Docstore, AddableMixin):\n \"\"\"Simple in memory docstore in the form of a dict.\"\"\"\n def __init__(self, _dict: Dict[str, Document]):\n \"\"\"Initialize with dict.\"\"\"\n self._dict = _dict\n[docs] def add(self, texts: Dict[str, Document]) -> None:\n \"\"\"Add texts to in memory dictionary.\"\"\"\n overlapping = set(texts).intersection(self._dict)\n if overlapping:\n raise ValueError(f\"Tried to add ids that already exist: {overlapping}\")\n self._dict = dict(self._dict, **texts)\n[docs] def search(self, search: str) -> Union[str, Document]:\n \"\"\"Search via direct lookup.\"\"\"\n if search not in self._dict:\n return f\"ID {search} not found.\"\n else:\n return self._dict[search]\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/docstore/in_memory.html"}
+{"id": "b2bedbc91717-0", "text": "Source code for langchain.docstore.wikipedia\n\"\"\"Wrapper around wikipedia API.\"\"\"\nfrom typing import Union\nfrom langchain.docstore.base import Docstore\nfrom langchain.docstore.document import Document\n[docs]class Wikipedia(Docstore):\n \"\"\"Wrapper around wikipedia API.\"\"\"\n def __init__(self) -> None:\n \"\"\"Check that wikipedia package is installed.\"\"\"\n try:\n import wikipedia # noqa: F401\n except ImportError:\n raise ImportError(\n \"Could not import wikipedia python package. \"\n \"Please install it with `pip install wikipedia`.\"\n )\n[docs] def search(self, search: str) -> Union[str, Document]:\n \"\"\"Try to search for wiki page.\n If page exists, return the page summary, and a PageWithLookups object.\n If page does not exist, return similar entries.\n \"\"\"\n import wikipedia\n try:\n page_content = wikipedia.page(search).content\n url = wikipedia.page(search).url\n result: Union[str, Document] = Document(\n page_content=page_content, metadata={\"page\": url}\n )\n except wikipedia.PageError:\n result = f\"Could not find [{search}]. Similar: {wikipedia.search(search)}\"\n except wikipedia.DisambiguationError:\n result = f\"Could not find [{search}]. Similar: {wikipedia.search(search)}\"\n return result\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/docstore/wikipedia.html"}
+{"id": "2d255703a6a3-0", "text": "Source code for langchain.chains.sequential\n\"\"\"Chain pipeline where the outputs of one step feed directly into next.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.input import get_color_mapping\n[docs]class SequentialChain(Chain):\n \"\"\"Chain where the outputs of one chain feed directly into next.\"\"\"\n chains: List[Chain]\n input_variables: List[str]\n output_variables: List[str] #: :meta private:\n return_all: bool = False\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return expected input keys to the chain.\n :meta private:\n \"\"\"\n return self.input_variables\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return self.output_variables\n @root_validator(pre=True)\n def validate_chains(cls, values: Dict) -> Dict:\n \"\"\"Validate that the correct inputs exist for all chains.\"\"\"\n chains = values[\"chains\"]\n input_variables = values[\"input_variables\"]\n memory_keys = list()\n if \"memory\" in values and values[\"memory\"] is not None:\n \"\"\"Validate that prompt input variables are consistent.\"\"\"\n memory_keys = values[\"memory\"].memory_variables\n if set(input_variables).intersection(set(memory_keys)):\n overlapping_keys = set(input_variables) & set(memory_keys)\n raise ValueError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/sequential.html"}
+{"id": "2d255703a6a3-1", "text": "overlapping_keys = set(input_variables) & set(memory_keys)\n raise ValueError(\n f\"The the input key(s) {''.join(overlapping_keys)} are found \"\n f\"in the Memory keys ({memory_keys}) - please use input and \"\n f\"memory keys that don't overlap.\"\n )\n known_variables = set(input_variables + memory_keys)\n for chain in chains:\n missing_vars = set(chain.input_keys).difference(known_variables)\n if missing_vars:\n raise ValueError(\n f\"Missing required input keys: {missing_vars}, \"\n f\"only had {known_variables}\"\n )\n overlapping_keys = known_variables.intersection(chain.output_keys)\n if overlapping_keys:\n raise ValueError(\n f\"Chain returned keys that already exist: {overlapping_keys}\"\n )\n known_variables |= set(chain.output_keys)\n if \"output_variables\" not in values:\n if values.get(\"return_all\", False):\n output_keys = known_variables.difference(input_variables)\n else:\n output_keys = chains[-1].output_keys\n values[\"output_variables\"] = output_keys\n else:\n missing_vars = set(values[\"output_variables\"]).difference(known_variables)\n if missing_vars:\n raise ValueError(\n f\"Expected output variables that were not found: {missing_vars}.\"\n )\n return values\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n known_values = inputs.copy()\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n for i, chain in enumerate(self.chains):", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/sequential.html"}
+{"id": "2d255703a6a3-2", "text": "for i, chain in enumerate(self.chains):\n callbacks = _run_manager.get_child()\n outputs = chain(known_values, return_only_outputs=True, callbacks=callbacks)\n known_values.update(outputs)\n return {k: known_values[k] for k in self.output_variables}\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n known_values = inputs.copy()\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n for i, chain in enumerate(self.chains):\n outputs = await chain.acall(\n known_values, return_only_outputs=True, callbacks=callbacks\n )\n known_values.update(outputs)\n return {k: known_values[k] for k in self.output_variables}\n[docs]class SimpleSequentialChain(Chain):\n \"\"\"Simple chain where the outputs of one step feed directly into next.\"\"\"\n chains: List[Chain]\n strip_outputs: bool = False\n input_key: str = \"input\" #: :meta private:\n output_key: str = \"output\" #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n @root_validator()", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/sequential.html"}
+{"id": "2d255703a6a3-3", "text": "\"\"\"\n return [self.output_key]\n @root_validator()\n def validate_chains(cls, values: Dict) -> Dict:\n \"\"\"Validate that chains are all single input/output.\"\"\"\n for chain in values[\"chains\"]:\n if len(chain.input_keys) != 1:\n raise ValueError(\n \"Chains used in SimplePipeline should all have one input, got \"\n f\"{chain} with {len(chain.input_keys)} inputs.\"\n )\n if len(chain.output_keys) != 1:\n raise ValueError(\n \"Chains used in SimplePipeline should all have one output, got \"\n f\"{chain} with {len(chain.output_keys)} outputs.\"\n )\n return values\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n _input = inputs[self.input_key]\n color_mapping = get_color_mapping([str(i) for i in range(len(self.chains))])\n for i, chain in enumerate(self.chains):\n _input = chain.run(_input, callbacks=_run_manager.get_child())\n if self.strip_outputs:\n _input = _input.strip()\n _run_manager.on_text(\n _input, color=color_mapping[str(i)], end=\"\\n\", verbose=self.verbose\n )\n return {self.output_key: _input}\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/sequential.html"}
+{"id": "2d255703a6a3-4", "text": ") -> Dict[str, Any]:\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n _input = inputs[self.input_key]\n color_mapping = get_color_mapping([str(i) for i in range(len(self.chains))])\n for i, chain in enumerate(self.chains):\n _input = await chain.arun(_input, callbacks=callbacks)\n if self.strip_outputs:\n _input = _input.strip()\n await _run_manager.on_text(\n _input, color=color_mapping[str(i)], end=\"\\n\", verbose=self.verbose\n )\n return {self.output_key: _input}\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/sequential.html"}
+{"id": "d5ea25e4cbd7-0", "text": "Source code for langchain.chains.mapreduce\n\"\"\"Map-reduce chain.\nSplits up a document, sends the smaller parts to the LLM with one prompt,\nthen combines the results with another one.\n\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun, Callbacks\nfrom langchain.chains.base import Chain\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.docstore.document import Document\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.text_splitter import TextSplitter\n[docs]class MapReduceChain(Chain):\n \"\"\"Map-reduce chain.\"\"\"\n combine_documents_chain: BaseCombineDocumentsChain\n \"\"\"Chain to use to combine documents.\"\"\"\n text_splitter: TextSplitter\n \"\"\"Text splitter to use.\"\"\"\n input_key: str = \"input_text\" #: :meta private:\n output_key: str = \"output_text\" #: :meta private:\n[docs] @classmethod\n def from_params(\n cls,\n llm: BaseLanguageModel,\n prompt: BasePromptTemplate,\n text_splitter: TextSplitter,\n callbacks: Callbacks = None,\n combine_chain_kwargs: Optional[Mapping[str, Any]] = None,\n reduce_chain_kwargs: Optional[Mapping[str, Any]] = None,\n **kwargs: Any,\n ) -> MapReduceChain:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/mapreduce.html"}
+{"id": "d5ea25e4cbd7-1", "text": "**kwargs: Any,\n ) -> MapReduceChain:\n \"\"\"Construct a map-reduce chain that uses the chain for map and reduce.\"\"\"\n llm_chain = LLMChain(llm=llm, prompt=prompt, callbacks=callbacks)\n reduce_chain = StuffDocumentsChain(\n llm_chain=llm_chain,\n callbacks=callbacks,\n **(reduce_chain_kwargs if reduce_chain_kwargs else {}),\n )\n combine_documents_chain = MapReduceDocumentsChain(\n llm_chain=llm_chain,\n combine_document_chain=reduce_chain,\n callbacks=callbacks,\n **(combine_chain_kwargs if combine_chain_kwargs else {}),\n )\n return cls(\n combine_documents_chain=combine_documents_chain,\n text_splitter=text_splitter,\n callbacks=callbacks,\n **kwargs,\n )\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n # Split the larger text into smaller chunks.\n doc_text = inputs.pop(self.input_key)\n texts = self.text_splitter.split_text(doc_text)", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/mapreduce.html"}
+{"id": "d5ea25e4cbd7-2", "text": "texts = self.text_splitter.split_text(doc_text)\n docs = [Document(page_content=text) for text in texts]\n _inputs: Dict[str, Any] = {\n **inputs,\n self.combine_documents_chain.input_key: docs,\n }\n outputs = self.combine_documents_chain.run(\n _inputs, callbacks=_run_manager.get_child()\n )\n return {self.output_key: outputs}\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/mapreduce.html"}
+{"id": "95f7f77482b1-0", "text": "Source code for langchain.chains.moderation\n\"\"\"Pass input through a moderation endpoint.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.utils import get_from_dict_or_env\n[docs]class OpenAIModerationChain(Chain):\n \"\"\"Pass input through a moderation endpoint.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``OPENAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.chains import OpenAIModerationChain\n moderation = OpenAIModerationChain()\n \"\"\"\n client: Any #: :meta private:\n model_name: Optional[str] = None\n \"\"\"Moderation model name to use.\"\"\"\n error: bool = False\n \"\"\"Whether or not to error if bad content was found.\"\"\"\n input_key: str = \"input\" #: :meta private:\n output_key: str = \"output\" #: :meta private:\n openai_api_key: Optional[str] = None\n openai_organization: Optional[str] = None\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n openai_api_key = get_from_dict_or_env(\n values, \"openai_api_key\", \"OPENAI_API_KEY\"\n )\n openai_organization = get_from_dict_or_env(\n values,\n \"openai_organization\",", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/moderation.html"}
+{"id": "95f7f77482b1-1", "text": "values,\n \"openai_organization\",\n \"OPENAI_ORGANIZATION\",\n default=\"\",\n )\n try:\n import openai\n openai.api_key = openai_api_key\n if openai_organization:\n openai.organization = openai_organization\n values[\"client\"] = openai.Moderation\n except ImportError:\n raise ImportError(\n \"Could not import openai python package. \"\n \"Please install it with `pip install openai`.\"\n )\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _moderate(self, text: str, results: dict) -> str:\n if results[\"flagged\"]:\n error_str = \"Text was found that violates OpenAI's content policy.\"\n if self.error:\n raise ValueError(error_str)\n else:\n return error_str\n return text\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n text = inputs[self.input_key]\n results = self.client.create(text)\n output = self._moderate(text, results[\"results\"][0])\n return {self.output_key: output}\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/moderation.html"}
+{"id": "aa9e9e684820-0", "text": "Source code for langchain.chains.llm_requests\n\"\"\"Chain that hits a URL and then uses an LLM to parse results.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains import LLMChain\nfrom langchain.chains.base import Chain\nfrom langchain.requests import TextRequestsWrapper\nDEFAULT_HEADERS = {\n \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36\" # noqa: E501\n}\n[docs]class LLMRequestsChain(Chain):\n \"\"\"Chain that hits a URL and then uses an LLM to parse results.\"\"\"\n llm_chain: LLMChain\n requests_wrapper: TextRequestsWrapper = Field(\n default_factory=TextRequestsWrapper, exclude=True\n )\n text_length: int = 8000\n requests_key: str = \"requests_result\" #: :meta private:\n input_key: str = \"url\" #: :meta private:\n output_key: str = \"output\" #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Will be whatever keys the prompt expects.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Will always return text key.\n :meta private:\n \"\"\"\n return [self.output_key]", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_requests.html"}
+{"id": "aa9e9e684820-1", "text": ":meta private:\n \"\"\"\n return [self.output_key]\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n try:\n from bs4 import BeautifulSoup # noqa: F401\n except ImportError:\n raise ValueError(\n \"Could not import bs4 python package. \"\n \"Please install it with `pip install bs4`.\"\n )\n return values\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n from bs4 import BeautifulSoup\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n # Other keys are assumed to be needed for LLM prediction\n other_keys = {k: v for k, v in inputs.items() if k != self.input_key}\n url = inputs[self.input_key]\n res = self.requests_wrapper.get(url)\n # extract the text from the html\n soup = BeautifulSoup(res, \"html.parser\")\n other_keys[self.requests_key] = soup.get_text()[: self.text_length]\n result = self.llm_chain.predict(\n callbacks=_run_manager.get_child(), **other_keys\n )\n return {self.output_key: result}\n @property\n def _chain_type(self) -> str:\n return \"llm_requests_chain\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_requests.html"}
+{"id": "a2edca350549-0", "text": "Source code for langchain.chains.loading\n\"\"\"Functionality for loading chains.\"\"\"\nimport json\nfrom pathlib import Path\nfrom typing import Any, Union\nimport yaml\nfrom langchain.chains.api.base import APIChain\nfrom langchain.chains.base import Chain\nfrom langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain\nfrom langchain.chains.combine_documents.map_rerank import MapRerankDocumentsChain\nfrom langchain.chains.combine_documents.refine import RefineDocumentsChain\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.hyde.base import HypotheticalDocumentEmbedder\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.llm_bash.base import LLMBashChain\nfrom langchain.chains.llm_checker.base import LLMCheckerChain\nfrom langchain.chains.llm_math.base import LLMMathChain\nfrom langchain.chains.llm_requests import LLMRequestsChain\nfrom langchain.chains.pal.base import PALChain\nfrom langchain.chains.qa_with_sources.base import QAWithSourcesChain\nfrom langchain.chains.qa_with_sources.vector_db import VectorDBQAWithSourcesChain\nfrom langchain.chains.retrieval_qa.base import RetrievalQA, VectorDBQA\nfrom langchain.chains.sql_database.base import SQLDatabaseChain\nfrom langchain.llms.loading import load_llm, load_llm_from_config\nfrom langchain.prompts.loading import load_prompt, load_prompt_from_config\nfrom langchain.utilities.loading import try_load_from_hub\nURL_BASE = \"https://raw.githubusercontent.com/hwchase17/langchain-hub/master/chains/\"\ndef _load_llm_chain(config: dict, **kwargs: Any) -> LLMChain:\n \"\"\"Load LLM chain from config dict.\"\"\"\n if \"llm\" in config:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html"}
+{"id": "a2edca350549-1", "text": "if \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm` or `llm_path` must be present.\")\n if \"prompt\" in config:\n prompt_config = config.pop(\"prompt\")\n prompt = load_prompt_from_config(prompt_config)\n elif \"prompt_path\" in config:\n prompt = load_prompt(config.pop(\"prompt_path\"))\n else:\n raise ValueError(\"One of `prompt` or `prompt_path` must be present.\")\n return LLMChain(llm=llm, prompt=prompt, **config)\ndef _load_hyde_chain(config: dict, **kwargs: Any) -> HypotheticalDocumentEmbedder:\n \"\"\"Load hypothetical document embedder chain from config dict.\"\"\"\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_path` must be present.\")\n if \"embeddings\" in kwargs:\n embeddings = kwargs.pop(\"embeddings\")\n else:\n raise ValueError(\"`embeddings` must be present.\")\n return HypotheticalDocumentEmbedder(\n llm_chain=llm_chain, base_embeddings=embeddings, **config\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html"}
+{"id": "a2edca350549-2", "text": ")\ndef _load_stuff_documents_chain(config: dict, **kwargs: Any) -> StuffDocumentsChain:\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_config` must be present.\")\n if not isinstance(llm_chain, LLMChain):\n raise ValueError(f\"Expected LLMChain, got {llm_chain}\")\n if \"document_prompt\" in config:\n prompt_config = config.pop(\"document_prompt\")\n document_prompt = load_prompt_from_config(prompt_config)\n elif \"document_prompt_path\" in config:\n document_prompt = load_prompt(config.pop(\"document_prompt_path\"))\n else:\n raise ValueError(\n \"One of `document_prompt` or `document_prompt_path` must be present.\"\n )\n return StuffDocumentsChain(\n llm_chain=llm_chain, document_prompt=document_prompt, **config\n )\ndef _load_map_reduce_documents_chain(\n config: dict, **kwargs: Any\n) -> MapReduceDocumentsChain:\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_config` must be present.\")\n if not isinstance(llm_chain, LLMChain):", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html"}
+{"id": "a2edca350549-3", "text": "if not isinstance(llm_chain, LLMChain):\n raise ValueError(f\"Expected LLMChain, got {llm_chain}\")\n if \"combine_document_chain\" in config:\n combine_document_chain_config = config.pop(\"combine_document_chain\")\n combine_document_chain = load_chain_from_config(combine_document_chain_config)\n elif \"combine_document_chain_path\" in config:\n combine_document_chain = load_chain(config.pop(\"combine_document_chain_path\"))\n else:\n raise ValueError(\n \"One of `combine_document_chain` or \"\n \"`combine_document_chain_path` must be present.\"\n )\n if \"collapse_document_chain\" in config:\n collapse_document_chain_config = config.pop(\"collapse_document_chain\")\n if collapse_document_chain_config is None:\n collapse_document_chain = None\n else:\n collapse_document_chain = load_chain_from_config(\n collapse_document_chain_config\n )\n elif \"collapse_document_chain_path\" in config:\n collapse_document_chain = load_chain(config.pop(\"collapse_document_chain_path\"))\n return MapReduceDocumentsChain(\n llm_chain=llm_chain,\n combine_document_chain=combine_document_chain,\n collapse_document_chain=collapse_document_chain,\n **config,\n )\ndef _load_llm_bash_chain(config: dict, **kwargs: Any) -> LLMBashChain:\n llm_chain = None\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n # llm attribute is deprecated in favor of llm_chain, here to support old configs", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html"}
+{"id": "a2edca350549-4", "text": "# llm attribute is deprecated in favor of llm_chain, here to support old configs\n elif \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n # llm_path attribute is deprecated in favor of llm_chain_path,\n # its to support old configs\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_path` must be present.\")\n if \"prompt\" in config:\n prompt_config = config.pop(\"prompt\")\n prompt = load_prompt_from_config(prompt_config)\n elif \"prompt_path\" in config:\n prompt = load_prompt(config.pop(\"prompt_path\"))\n if llm_chain:\n return LLMBashChain(llm_chain=llm_chain, prompt=prompt, **config)\n else:\n return LLMBashChain(llm=llm, prompt=prompt, **config)\ndef _load_llm_checker_chain(config: dict, **kwargs: Any) -> LLMCheckerChain:\n if \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm` or `llm_path` must be present.\")\n if \"create_draft_answer_prompt\" in config:\n create_draft_answer_prompt_config = config.pop(\"create_draft_answer_prompt\")\n create_draft_answer_prompt = load_prompt_from_config(\n create_draft_answer_prompt_config\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html"}
+{"id": "a2edca350549-5", "text": "create_draft_answer_prompt_config\n )\n elif \"create_draft_answer_prompt_path\" in config:\n create_draft_answer_prompt = load_prompt(\n config.pop(\"create_draft_answer_prompt_path\")\n )\n if \"list_assertions_prompt\" in config:\n list_assertions_prompt_config = config.pop(\"list_assertions_prompt\")\n list_assertions_prompt = load_prompt_from_config(list_assertions_prompt_config)\n elif \"list_assertions_prompt_path\" in config:\n list_assertions_prompt = load_prompt(config.pop(\"list_assertions_prompt_path\"))\n if \"check_assertions_prompt\" in config:\n check_assertions_prompt_config = config.pop(\"check_assertions_prompt\")\n check_assertions_prompt = load_prompt_from_config(\n check_assertions_prompt_config\n )\n elif \"check_assertions_prompt_path\" in config:\n check_assertions_prompt = load_prompt(\n config.pop(\"check_assertions_prompt_path\")\n )\n if \"revised_answer_prompt\" in config:\n revised_answer_prompt_config = config.pop(\"revised_answer_prompt\")\n revised_answer_prompt = load_prompt_from_config(revised_answer_prompt_config)\n elif \"revised_answer_prompt_path\" in config:\n revised_answer_prompt = load_prompt(config.pop(\"revised_answer_prompt_path\"))\n return LLMCheckerChain(\n llm=llm,\n create_draft_answer_prompt=create_draft_answer_prompt,\n list_assertions_prompt=list_assertions_prompt,\n check_assertions_prompt=check_assertions_prompt,\n revised_answer_prompt=revised_answer_prompt,\n **config,\n )\ndef _load_llm_math_chain(config: dict, **kwargs: Any) -> LLMMathChain:\n llm_chain = None\n if \"llm_chain\" in config:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html"}
+{"id": "a2edca350549-6", "text": "llm_chain = None\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n # llm attribute is deprecated in favor of llm_chain, here to support old configs\n elif \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n # llm_path attribute is deprecated in favor of llm_chain_path,\n # its to support old configs\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_path` must be present.\")\n if \"prompt\" in config:\n prompt_config = config.pop(\"prompt\")\n prompt = load_prompt_from_config(prompt_config)\n elif \"prompt_path\" in config:\n prompt = load_prompt(config.pop(\"prompt_path\"))\n if llm_chain:\n return LLMMathChain(llm_chain=llm_chain, prompt=prompt, **config)\n else:\n return LLMMathChain(llm=llm, prompt=prompt, **config)\ndef _load_map_rerank_documents_chain(\n config: dict, **kwargs: Any\n) -> MapRerankDocumentsChain:\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html"}
+{"id": "a2edca350549-7", "text": "elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_config` must be present.\")\n return MapRerankDocumentsChain(llm_chain=llm_chain, **config)\ndef _load_pal_chain(config: dict, **kwargs: Any) -> PALChain:\n llm_chain = None\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n # llm attribute is deprecated in favor of llm_chain, here to support old configs\n elif \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n # llm_path attribute is deprecated in favor of llm_chain_path,\n # its to support old configs\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_path` must be present.\")\n if \"prompt\" in config:\n prompt_config = config.pop(\"prompt\")\n prompt = load_prompt_from_config(prompt_config)\n elif \"prompt_path\" in config:\n prompt = load_prompt(config.pop(\"prompt_path\"))\n else:\n raise ValueError(\"One of `prompt` or `prompt_path` must be present.\")\n if llm_chain:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html"}
+{"id": "a2edca350549-8", "text": "if llm_chain:\n return PALChain(llm_chain=llm_chain, prompt=prompt, **config)\n else:\n return PALChain(llm=llm, prompt=prompt, **config)\ndef _load_refine_documents_chain(config: dict, **kwargs: Any) -> RefineDocumentsChain:\n if \"initial_llm_chain\" in config:\n initial_llm_chain_config = config.pop(\"initial_llm_chain\")\n initial_llm_chain = load_chain_from_config(initial_llm_chain_config)\n elif \"initial_llm_chain_path\" in config:\n initial_llm_chain = load_chain(config.pop(\"initial_llm_chain_path\"))\n else:\n raise ValueError(\n \"One of `initial_llm_chain` or `initial_llm_chain_config` must be present.\"\n )\n if \"refine_llm_chain\" in config:\n refine_llm_chain_config = config.pop(\"refine_llm_chain\")\n refine_llm_chain = load_chain_from_config(refine_llm_chain_config)\n elif \"refine_llm_chain_path\" in config:\n refine_llm_chain = load_chain(config.pop(\"refine_llm_chain_path\"))\n else:\n raise ValueError(\n \"One of `refine_llm_chain` or `refine_llm_chain_config` must be present.\"\n )\n if \"document_prompt\" in config:\n prompt_config = config.pop(\"document_prompt\")\n document_prompt = load_prompt_from_config(prompt_config)\n elif \"document_prompt_path\" in config:\n document_prompt = load_prompt(config.pop(\"document_prompt_path\"))\n return RefineDocumentsChain(\n initial_llm_chain=initial_llm_chain,\n refine_llm_chain=refine_llm_chain,\n document_prompt=document_prompt,", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html"}
+{"id": "a2edca350549-9", "text": "refine_llm_chain=refine_llm_chain,\n document_prompt=document_prompt,\n **config,\n )\ndef _load_qa_with_sources_chain(config: dict, **kwargs: Any) -> QAWithSourcesChain:\n if \"combine_documents_chain\" in config:\n combine_documents_chain_config = config.pop(\"combine_documents_chain\")\n combine_documents_chain = load_chain_from_config(combine_documents_chain_config)\n elif \"combine_documents_chain_path\" in config:\n combine_documents_chain = load_chain(config.pop(\"combine_documents_chain_path\"))\n else:\n raise ValueError(\n \"One of `combine_documents_chain` or \"\n \"`combine_documents_chain_path` must be present.\"\n )\n return QAWithSourcesChain(combine_documents_chain=combine_documents_chain, **config)\ndef _load_sql_database_chain(config: dict, **kwargs: Any) -> SQLDatabaseChain:\n if \"database\" in kwargs:\n database = kwargs.pop(\"database\")\n else:\n raise ValueError(\"`database` must be present.\")\n if \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm` or `llm_path` must be present.\")\n if \"prompt\" in config:\n prompt_config = config.pop(\"prompt\")\n prompt = load_prompt_from_config(prompt_config)\n else:\n prompt = None\n return SQLDatabaseChain.from_llm(llm, database, prompt=prompt, **config)\ndef _load_vector_db_qa_with_sources_chain(\n config: dict, **kwargs: Any", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html"}
+{"id": "a2edca350549-10", "text": "config: dict, **kwargs: Any\n) -> VectorDBQAWithSourcesChain:\n if \"vectorstore\" in kwargs:\n vectorstore = kwargs.pop(\"vectorstore\")\n else:\n raise ValueError(\"`vectorstore` must be present.\")\n if \"combine_documents_chain\" in config:\n combine_documents_chain_config = config.pop(\"combine_documents_chain\")\n combine_documents_chain = load_chain_from_config(combine_documents_chain_config)\n elif \"combine_documents_chain_path\" in config:\n combine_documents_chain = load_chain(config.pop(\"combine_documents_chain_path\"))\n else:\n raise ValueError(\n \"One of `combine_documents_chain` or \"\n \"`combine_documents_chain_path` must be present.\"\n )\n return VectorDBQAWithSourcesChain(\n combine_documents_chain=combine_documents_chain,\n vectorstore=vectorstore,\n **config,\n )\ndef _load_retrieval_qa(config: dict, **kwargs: Any) -> RetrievalQA:\n if \"retriever\" in kwargs:\n retriever = kwargs.pop(\"retriever\")\n else:\n raise ValueError(\"`retriever` must be present.\")\n if \"combine_documents_chain\" in config:\n combine_documents_chain_config = config.pop(\"combine_documents_chain\")\n combine_documents_chain = load_chain_from_config(combine_documents_chain_config)\n elif \"combine_documents_chain_path\" in config:\n combine_documents_chain = load_chain(config.pop(\"combine_documents_chain_path\"))\n else:\n raise ValueError(\n \"One of `combine_documents_chain` or \"\n \"`combine_documents_chain_path` must be present.\"\n )\n return RetrievalQA(\n combine_documents_chain=combine_documents_chain,\n retriever=retriever,\n **config,\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html"}
+{"id": "a2edca350549-11", "text": "retriever=retriever,\n **config,\n )\ndef _load_vector_db_qa(config: dict, **kwargs: Any) -> VectorDBQA:\n if \"vectorstore\" in kwargs:\n vectorstore = kwargs.pop(\"vectorstore\")\n else:\n raise ValueError(\"`vectorstore` must be present.\")\n if \"combine_documents_chain\" in config:\n combine_documents_chain_config = config.pop(\"combine_documents_chain\")\n combine_documents_chain = load_chain_from_config(combine_documents_chain_config)\n elif \"combine_documents_chain_path\" in config:\n combine_documents_chain = load_chain(config.pop(\"combine_documents_chain_path\"))\n else:\n raise ValueError(\n \"One of `combine_documents_chain` or \"\n \"`combine_documents_chain_path` must be present.\"\n )\n return VectorDBQA(\n combine_documents_chain=combine_documents_chain,\n vectorstore=vectorstore,\n **config,\n )\ndef _load_api_chain(config: dict, **kwargs: Any) -> APIChain:\n if \"api_request_chain\" in config:\n api_request_chain_config = config.pop(\"api_request_chain\")\n api_request_chain = load_chain_from_config(api_request_chain_config)\n elif \"api_request_chain_path\" in config:\n api_request_chain = load_chain(config.pop(\"api_request_chain_path\"))\n else:\n raise ValueError(\n \"One of `api_request_chain` or `api_request_chain_path` must be present.\"\n )\n if \"api_answer_chain\" in config:\n api_answer_chain_config = config.pop(\"api_answer_chain\")\n api_answer_chain = load_chain_from_config(api_answer_chain_config)\n elif \"api_answer_chain_path\" in config:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html"}
+{"id": "a2edca350549-12", "text": "elif \"api_answer_chain_path\" in config:\n api_answer_chain = load_chain(config.pop(\"api_answer_chain_path\"))\n else:\n raise ValueError(\n \"One of `api_answer_chain` or `api_answer_chain_path` must be present.\"\n )\n if \"requests_wrapper\" in kwargs:\n requests_wrapper = kwargs.pop(\"requests_wrapper\")\n else:\n raise ValueError(\"`requests_wrapper` must be present.\")\n return APIChain(\n api_request_chain=api_request_chain,\n api_answer_chain=api_answer_chain,\n requests_wrapper=requests_wrapper,\n **config,\n )\ndef _load_llm_requests_chain(config: dict, **kwargs: Any) -> LLMRequestsChain:\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_path` must be present.\")\n if \"requests_wrapper\" in kwargs:\n requests_wrapper = kwargs.pop(\"requests_wrapper\")\n return LLMRequestsChain(\n llm_chain=llm_chain, requests_wrapper=requests_wrapper, **config\n )\n else:\n return LLMRequestsChain(llm_chain=llm_chain, **config)\ntype_to_loader_dict = {\n \"api_chain\": _load_api_chain,\n \"hyde_chain\": _load_hyde_chain,\n \"llm_chain\": _load_llm_chain,\n \"llm_bash_chain\": _load_llm_bash_chain,", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html"}
+{"id": "a2edca350549-13", "text": "\"llm_bash_chain\": _load_llm_bash_chain,\n \"llm_checker_chain\": _load_llm_checker_chain,\n \"llm_math_chain\": _load_llm_math_chain,\n \"llm_requests_chain\": _load_llm_requests_chain,\n \"pal_chain\": _load_pal_chain,\n \"qa_with_sources_chain\": _load_qa_with_sources_chain,\n \"stuff_documents_chain\": _load_stuff_documents_chain,\n \"map_reduce_documents_chain\": _load_map_reduce_documents_chain,\n \"map_rerank_documents_chain\": _load_map_rerank_documents_chain,\n \"refine_documents_chain\": _load_refine_documents_chain,\n \"sql_database_chain\": _load_sql_database_chain,\n \"vector_db_qa_with_sources_chain\": _load_vector_db_qa_with_sources_chain,\n \"vector_db_qa\": _load_vector_db_qa,\n \"retrieval_qa\": _load_retrieval_qa,\n}\ndef load_chain_from_config(config: dict, **kwargs: Any) -> Chain:\n \"\"\"Load chain from Config Dict.\"\"\"\n if \"_type\" not in config:\n raise ValueError(\"Must specify a chain Type in config\")\n config_type = config.pop(\"_type\")\n if config_type not in type_to_loader_dict:\n raise ValueError(f\"Loading {config_type} chain not supported\")\n chain_loader = type_to_loader_dict[config_type]\n return chain_loader(config, **kwargs)\n[docs]def load_chain(path: Union[str, Path], **kwargs: Any) -> Chain:\n \"\"\"Unified method for loading a chain from LangChainHub or local fs.\"\"\"\n if hub_result := try_load_from_hub(\n path, _load_chain_from_file, \"chains\", {\"json\", \"yaml\"}, **kwargs", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html"}
+{"id": "a2edca350549-14", "text": "):\n return hub_result\n else:\n return _load_chain_from_file(path, **kwargs)\ndef _load_chain_from_file(file: Union[str, Path], **kwargs: Any) -> Chain:\n \"\"\"Load chain from file.\"\"\"\n # Convert file to Path object.\n if isinstance(file, str):\n file_path = Path(file)\n else:\n file_path = file\n # Load from either json or yaml.\n if file_path.suffix == \".json\":\n with open(file_path) as f:\n config = json.load(f)\n elif file_path.suffix == \".yaml\":\n with open(file_path, \"r\") as f:\n config = yaml.safe_load(f)\n else:\n raise ValueError(\"File type must be json or yaml\")\n # Override default 'verbose' and 'memory' for the chain\n if \"verbose\" in kwargs:\n config[\"verbose\"] = kwargs.pop(\"verbose\")\n if \"memory\" in kwargs:\n config[\"memory\"] = kwargs.pop(\"memory\")\n # Load the chain from the config now.\n return load_chain_from_config(config, **kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html"}
+{"id": "48dd8cb3e272-0", "text": "Source code for langchain.chains.transform\n\"\"\"Chain that runs an arbitrary python function.\"\"\"\nfrom typing import Callable, Dict, List, Optional\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\n[docs]class TransformChain(Chain):\n \"\"\"Chain transform chain output.\n Example:\n .. code-block:: python\n from langchain import TransformChain\n transform_chain = TransformChain(input_variables=[\"text\"],\n output_variables[\"entities\"], transform=func())\n \"\"\"\n input_variables: List[str]\n output_variables: List[str]\n transform: Callable[[Dict[str, str]], Dict[str, str]]\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input keys.\n :meta private:\n \"\"\"\n return self.input_variables\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output keys.\n :meta private:\n \"\"\"\n return self.output_variables\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n return self.transform(inputs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/transform.html"}
+{"id": "db1056e59def-0", "text": "Source code for langchain.chains.llm\n\"\"\"Chain that just formats a prompt and calls an LLM.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional, Sequence, Tuple, Union\nfrom pydantic import Extra\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManager,\n AsyncCallbackManagerForChainRun,\n CallbackManager,\n CallbackManagerForChainRun,\n Callbacks,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.input import get_colored_text\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import LLMResult, PromptValue\n[docs]class LLMChain(Chain):\n \"\"\"Chain to run queries against LLMs.\n Example:\n .. code-block:: python\n from langchain import LLMChain, OpenAI, PromptTemplate\n prompt_template = \"Tell me a {adjective} joke\"\n prompt = PromptTemplate(\n input_variables=[\"adjective\"], template=prompt_template\n )\n llm = LLMChain(llm=OpenAI(), prompt=prompt)\n \"\"\"\n prompt: BasePromptTemplate\n \"\"\"Prompt object to use.\"\"\"\n llm: BaseLanguageModel\n output_key: str = \"text\" #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Will be whatever keys the prompt expects.\n :meta private:\n \"\"\"\n return self.prompt.input_variables\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Will always return text key.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html"}
+{"id": "db1056e59def-1", "text": "def output_keys(self) -> List[str]:\n \"\"\"Will always return text key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n response = self.generate([inputs], run_manager=run_manager)\n return self.create_outputs(response)[0]\n[docs] def generate(\n self,\n input_list: List[Dict[str, Any]],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> LLMResult:\n \"\"\"Generate LLM result from inputs.\"\"\"\n prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)\n return self.llm.generate_prompt(\n prompts, stop, callbacks=run_manager.get_child() if run_manager else None\n )\n[docs] async def agenerate(\n self,\n input_list: List[Dict[str, Any]],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> LLMResult:\n \"\"\"Generate LLM result from inputs.\"\"\"\n prompts, stop = await self.aprep_prompts(input_list, run_manager=run_manager)\n return await self.llm.agenerate_prompt(\n prompts, stop, callbacks=run_manager.get_child() if run_manager else None\n )\n[docs] def prep_prompts(\n self,\n input_list: List[Dict[str, Any]],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Tuple[List[PromptValue], Optional[List[str]]]:\n \"\"\"Prepare prompts from inputs.\"\"\"\n stop = None", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html"}
+{"id": "db1056e59def-2", "text": "\"\"\"Prepare prompts from inputs.\"\"\"\n stop = None\n if \"stop\" in input_list[0]:\n stop = input_list[0][\"stop\"]\n prompts = []\n for inputs in input_list:\n selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}\n prompt = self.prompt.format_prompt(**selected_inputs)\n _colored_text = get_colored_text(prompt.to_string(), \"green\")\n _text = \"Prompt after formatting:\\n\" + _colored_text\n if run_manager:\n run_manager.on_text(_text, end=\"\\n\", verbose=self.verbose)\n if \"stop\" in inputs and inputs[\"stop\"] != stop:\n raise ValueError(\n \"If `stop` is present in any inputs, should be present in all.\"\n )\n prompts.append(prompt)\n return prompts, stop\n[docs] async def aprep_prompts(\n self,\n input_list: List[Dict[str, Any]],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Tuple[List[PromptValue], Optional[List[str]]]:\n \"\"\"Prepare prompts from inputs.\"\"\"\n stop = None\n if \"stop\" in input_list[0]:\n stop = input_list[0][\"stop\"]\n prompts = []\n for inputs in input_list:\n selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}\n prompt = self.prompt.format_prompt(**selected_inputs)\n _colored_text = get_colored_text(prompt.to_string(), \"green\")\n _text = \"Prompt after formatting:\\n\" + _colored_text\n if run_manager:\n await run_manager.on_text(_text, end=\"\\n\", verbose=self.verbose)", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html"}
+{"id": "db1056e59def-3", "text": "await run_manager.on_text(_text, end=\"\\n\", verbose=self.verbose)\n if \"stop\" in inputs and inputs[\"stop\"] != stop:\n raise ValueError(\n \"If `stop` is present in any inputs, should be present in all.\"\n )\n prompts.append(prompt)\n return prompts, stop\n[docs] def apply(\n self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None\n ) -> List[Dict[str, str]]:\n \"\"\"Utilize the LLM generate method for speed gains.\"\"\"\n callback_manager = CallbackManager.configure(\n callbacks, self.callbacks, self.verbose\n )\n run_manager = callback_manager.on_chain_start(\n {\"name\": self.__class__.__name__},\n {\"input_list\": input_list},\n )\n try:\n response = self.generate(input_list, run_manager=run_manager)\n except (KeyboardInterrupt, Exception) as e:\n run_manager.on_chain_error(e)\n raise e\n outputs = self.create_outputs(response)\n run_manager.on_chain_end({\"outputs\": outputs})\n return outputs\n[docs] async def aapply(\n self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None\n ) -> List[Dict[str, str]]:\n \"\"\"Utilize the LLM generate method for speed gains.\"\"\"\n callback_manager = AsyncCallbackManager.configure(\n callbacks, self.callbacks, self.verbose\n )\n run_manager = await callback_manager.on_chain_start(\n {\"name\": self.__class__.__name__},\n {\"input_list\": input_list},\n )\n try:\n response = await self.agenerate(input_list, run_manager=run_manager)\n except (KeyboardInterrupt, Exception) as e:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html"}
+{"id": "db1056e59def-4", "text": "except (KeyboardInterrupt, Exception) as e:\n await run_manager.on_chain_error(e)\n raise e\n outputs = self.create_outputs(response)\n await run_manager.on_chain_end({\"outputs\": outputs})\n return outputs\n[docs] def create_outputs(self, response: LLMResult) -> List[Dict[str, str]]:\n \"\"\"Create outputs from response.\"\"\"\n return [\n # Get the text of the top generated string.\n {self.output_key: generation[0].text}\n for generation in response.generations\n ]\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n response = await self.agenerate([inputs], run_manager=run_manager)\n return self.create_outputs(response)[0]\n[docs] def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:\n \"\"\"Format prompt with kwargs and pass to LLM.\n Args:\n callbacks: Callbacks to pass to LLMChain\n **kwargs: Keys to pass to prompt template.\n Returns:\n Completion from LLM.\n Example:\n .. code-block:: python\n completion = llm.predict(adjective=\"funny\")\n \"\"\"\n return self(kwargs, callbacks=callbacks)[self.output_key]\n[docs] async def apredict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:\n \"\"\"Format prompt with kwargs and pass to LLM.\n Args:\n callbacks: Callbacks to pass to LLMChain\n **kwargs: Keys to pass to prompt template.\n Returns:\n Completion from LLM.\n Example:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html"}
+{"id": "db1056e59def-5", "text": "Returns:\n Completion from LLM.\n Example:\n .. code-block:: python\n completion = llm.predict(adjective=\"funny\")\n \"\"\"\n return (await self.acall(kwargs, callbacks=callbacks))[self.output_key]\n[docs] def predict_and_parse(\n self, callbacks: Callbacks = None, **kwargs: Any\n ) -> Union[str, List[str], Dict[str, Any]]:\n \"\"\"Call predict and then parse the results.\"\"\"\n result = self.predict(callbacks=callbacks, **kwargs)\n if self.prompt.output_parser is not None:\n return self.prompt.output_parser.parse(result)\n else:\n return result\n[docs] async def apredict_and_parse(\n self, callbacks: Callbacks = None, **kwargs: Any\n ) -> Union[str, List[str], Dict[str, str]]:\n \"\"\"Call apredict and then parse the results.\"\"\"\n result = await self.apredict(callbacks=callbacks, **kwargs)\n if self.prompt.output_parser is not None:\n return self.prompt.output_parser.parse(result)\n else:\n return result\n[docs] def apply_and_parse(\n self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None\n ) -> Sequence[Union[str, List[str], Dict[str, str]]]:\n \"\"\"Call apply and then parse the results.\"\"\"\n result = self.apply(input_list, callbacks=callbacks)\n return self._parse_result(result)\n def _parse_result(\n self, result: List[Dict[str, str]]\n ) -> Sequence[Union[str, List[str], Dict[str, str]]]:\n if self.prompt.output_parser is not None:\n return [", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html"}
+{"id": "db1056e59def-6", "text": "if self.prompt.output_parser is not None:\n return [\n self.prompt.output_parser.parse(res[self.output_key]) for res in result\n ]\n else:\n return result\n[docs] async def aapply_and_parse(\n self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None\n ) -> Sequence[Union[str, List[str], Dict[str, str]]]:\n \"\"\"Call apply and then parse the results.\"\"\"\n result = await self.aapply(input_list, callbacks=callbacks)\n return self._parse_result(result)\n @property\n def _chain_type(self) -> str:\n return \"llm_chain\"\n[docs] @classmethod\n def from_string(cls, llm: BaseLanguageModel, template: str) -> Chain:\n \"\"\"Create LLMChain from LLM and template.\"\"\"\n prompt_template = PromptTemplate.from_template(template)\n return cls(llm=llm, prompt=prompt_template)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html"}
+{"id": "b53caca62d6b-0", "text": "Source code for langchain.chains.hyde.base\n\"\"\"Hypothetical Document Embeddings.\nhttps://arxiv.org/abs/2212.10496\n\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nimport numpy as np\nfrom pydantic import Extra\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.hyde.prompts import PROMPT_MAP\nfrom langchain.chains.llm import LLMChain\nfrom langchain.embeddings.base import Embeddings\n[docs]class HypotheticalDocumentEmbedder(Chain, Embeddings):\n \"\"\"Generate hypothetical document for query, and then embed that.\n Based on https://arxiv.org/abs/2212.10496\n \"\"\"\n base_embeddings: Embeddings\n llm_chain: LLMChain\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Input keys for Hyde's LLM chain.\"\"\"\n return self.llm_chain.input_keys\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Output keys for Hyde's LLM chain.\"\"\"\n return self.llm_chain.output_keys\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call the base embeddings.\"\"\"\n return self.base_embeddings.embed_documents(texts)\n[docs] def combine_embeddings(self, embeddings: List[List[float]]) -> List[float]:\n \"\"\"Combine embeddings into final embeddings.\"\"\"\n return list(np.array(embeddings).mean(axis=0))", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/hyde/base.html"}
+{"id": "b53caca62d6b-1", "text": "return list(np.array(embeddings).mean(axis=0))\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Generate a hypothetical document and embedded it.\"\"\"\n var_name = self.llm_chain.input_keys[0]\n result = self.llm_chain.generate([{var_name: text}])\n documents = [generation.text for generation in result.generations[0]]\n embeddings = self.embed_documents(documents)\n return self.combine_embeddings(embeddings)\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"Call the internal llm chain.\"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n return self.llm_chain(inputs, callbacks=_run_manager.get_child())\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n base_embeddings: Embeddings,\n prompt_key: str,\n **kwargs: Any,\n ) -> HypotheticalDocumentEmbedder:\n \"\"\"Load and use LLMChain for a specific prompt key.\"\"\"\n prompt = PROMPT_MAP[prompt_key]\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n return cls(base_embeddings=base_embeddings, llm_chain=llm_chain, **kwargs)\n @property\n def _chain_type(self) -> str:\n return \"hyde_chain\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/hyde/base.html"}
+{"id": "ce2295d38545-0", "text": "Source code for langchain.chains.pal.base\n\"\"\"Implements Program-Aided Language Models.\nAs in https://arxiv.org/pdf/2211.10435.pdf.\n\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.pal.colored_object_prompt import COLORED_OBJECT_PROMPT\nfrom langchain.chains.pal.math_prompt import MATH_PROMPT\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.utilities import PythonREPL\n[docs]class PALChain(Chain):\n \"\"\"Implements Program-Aided Language Models.\"\"\"\n llm_chain: LLMChain\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated]\"\"\"\n prompt: BasePromptTemplate = MATH_PROMPT\n \"\"\"[Deprecated]\"\"\"\n stop: str = \"\\n\\n\"\n get_answer_expr: str = \"print(solution())\"\n python_globals: Optional[Dict[str, Any]] = None\n python_locals: Optional[Dict[str, Any]] = None\n output_key: str = \"result\" #: :meta private:\n return_intermediate_steps: bool = False\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating an PALChain with an llm is deprecated. \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/pal/base.html"}
+{"id": "ce2295d38545-1", "text": "\"Directly instantiating an PALChain with an llm is deprecated. \"\n \"Please instantiate with llm_chain argument or using the one of \"\n \"the class method constructors from_math_prompt, \"\n \"from_colored_object_prompt.\"\n )\n if \"llm_chain\" not in values and values[\"llm\"] is not None:\n values[\"llm_chain\"] = LLMChain(llm=values[\"llm\"], prompt=MATH_PROMPT)\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the singular input key.\n :meta private:\n \"\"\"\n return self.prompt.input_variables\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the singular output key.\n :meta private:\n \"\"\"\n if not self.return_intermediate_steps:\n return [self.output_key]\n else:\n return [self.output_key, \"intermediate_steps\"]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n code = self.llm_chain.predict(\n stop=[self.stop], callbacks=_run_manager.get_child(), **inputs\n )\n _run_manager.on_text(code, color=\"green\", end=\"\\n\", verbose=self.verbose)\n repl = PythonREPL(_globals=self.python_globals, _locals=self.python_locals)\n res = repl.run(code + f\"\\n{self.get_answer_expr}\")\n output = {self.output_key: res.strip()}\n if self.return_intermediate_steps:\n output[\"intermediate_steps\"] = code", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/pal/base.html"}
+{"id": "ce2295d38545-2", "text": "if self.return_intermediate_steps:\n output[\"intermediate_steps\"] = code\n return output\n[docs] @classmethod\n def from_math_prompt(cls, llm: BaseLanguageModel, **kwargs: Any) -> PALChain:\n \"\"\"Load PAL from math prompt.\"\"\"\n llm_chain = LLMChain(llm=llm, prompt=MATH_PROMPT)\n return cls(\n llm_chain=llm_chain,\n stop=\"\\n\\n\",\n get_answer_expr=\"print(solution())\",\n **kwargs,\n )\n[docs] @classmethod\n def from_colored_object_prompt(\n cls, llm: BaseLanguageModel, **kwargs: Any\n ) -> PALChain:\n \"\"\"Load PAL from colored object prompt.\"\"\"\n llm_chain = LLMChain(llm=llm, prompt=COLORED_OBJECT_PROMPT)\n return cls(\n llm_chain=llm_chain,\n stop=\"\\n\\n\\n\",\n get_answer_expr=\"print(answer)\",\n **kwargs,\n )\n @property\n def _chain_type(self) -> str:\n return \"pal_chain\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/pal/base.html"}
+{"id": "6924a5212d04-0", "text": "Source code for langchain.chains.graph_qa.base\n\"\"\"Question answering over a graph.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.graph_qa.prompts import ENTITY_EXTRACTION_PROMPT, PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.graphs.networkx_graph import NetworkxEntityGraph, get_entities\nfrom langchain.prompts.base import BasePromptTemplate\n[docs]class GraphQAChain(Chain):\n \"\"\"Chain for question-answering against a graph.\"\"\"\n graph: NetworkxEntityGraph = Field(exclude=True)\n entity_extraction_chain: LLMChain\n qa_chain: LLMChain\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n return _output_keys\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n qa_prompt: BasePromptTemplate = PROMPT,\n entity_prompt: BasePromptTemplate = ENTITY_EXTRACTION_PROMPT,\n **kwargs: Any,\n ) -> GraphQAChain:\n \"\"\"Initialize from LLM.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/base.html"}
+{"id": "6924a5212d04-1", "text": ") -> GraphQAChain:\n \"\"\"Initialize from LLM.\"\"\"\n qa_chain = LLMChain(llm=llm, prompt=qa_prompt)\n entity_chain = LLMChain(llm=llm, prompt=entity_prompt)\n return cls(\n qa_chain=qa_chain,\n entity_extraction_chain=entity_chain,\n **kwargs,\n )\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"Extract entities, look up info and answer question.\"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.input_key]\n entity_string = self.entity_extraction_chain.run(question)\n _run_manager.on_text(\"Entities Extracted:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n entity_string, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n entities = get_entities(entity_string)\n context = \"\"\n for entity in entities:\n triplets = self.graph.get_entity_knowledge(entity)\n context += \"\\n\".join(triplets)\n _run_manager.on_text(\"Full Context:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(context, color=\"green\", end=\"\\n\", verbose=self.verbose)\n result = self.qa_chain(\n {\"question\": question, \"context\": context},\n callbacks=_run_manager.get_child(),\n )\n return {self.output_key: result[self.qa_chain.output_key]}\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/base.html"}
+{"id": "b8d70dee77bf-0", "text": "Source code for langchain.chains.graph_qa.cypher\n\"\"\"Question answering over a graph.\"\"\"\nfrom __future__ import annotations\nimport re\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.graph_qa.prompts import CYPHER_GENERATION_PROMPT, CYPHER_QA_PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.graphs.neo4j_graph import Neo4jGraph\nfrom langchain.prompts.base import BasePromptTemplate\nINTERMEDIATE_STEPS_KEY = \"intermediate_steps\"\ndef extract_cypher(text: str) -> str:\n # The pattern to find Cypher code enclosed in triple backticks\n pattern = r\"```(.*?)```\"\n # Find all matches in the input text\n matches = re.findall(pattern, text, re.DOTALL)\n return matches[0] if matches else text\n[docs]class GraphCypherQAChain(Chain):\n \"\"\"Chain for question-answering against a graph by generating Cypher statements.\"\"\"\n graph: Neo4jGraph = Field(exclude=True)\n cypher_generation_chain: LLMChain\n qa_chain: LLMChain\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n top_k: int = 10\n \"\"\"Number of results to return from the query\"\"\"\n return_intermediate_steps: bool = False\n \"\"\"Whether or not to return the intermediate steps along with the final answer.\"\"\"\n return_direct: bool = False\n \"\"\"Whether or not to return the result of querying the graph directly.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/cypher.html"}
+{"id": "b8d70dee77bf-1", "text": "\"\"\"Whether or not to return the result of querying the graph directly.\"\"\"\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n return _output_keys\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n *,\n qa_prompt: BasePromptTemplate = CYPHER_QA_PROMPT,\n cypher_prompt: BasePromptTemplate = CYPHER_GENERATION_PROMPT,\n **kwargs: Any,\n ) -> GraphCypherQAChain:\n \"\"\"Initialize from LLM.\"\"\"\n qa_chain = LLMChain(llm=llm, prompt=qa_prompt)\n cypher_generation_chain = LLMChain(llm=llm, prompt=cypher_prompt)\n return cls(\n qa_chain=qa_chain,\n cypher_generation_chain=cypher_generation_chain,\n **kwargs,\n )\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Generate Cypher statement, use it to look up in db and answer question.\"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n question = inputs[self.input_key]\n intermediate_steps: List = []\n generated_cypher = self.cypher_generation_chain.run(", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/cypher.html"}
+{"id": "b8d70dee77bf-2", "text": "generated_cypher = self.cypher_generation_chain.run(\n {\"question\": question, \"schema\": self.graph.get_schema}, callbacks=callbacks\n )\n # Extract Cypher code if it is wrapped in backticks\n generated_cypher = extract_cypher(generated_cypher)\n _run_manager.on_text(\"Generated Cypher:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n generated_cypher, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n intermediate_steps.append({\"query\": generated_cypher})\n # Retrieve and limit the number of results\n context = self.graph.query(generated_cypher)[: self.top_k]\n if self.return_direct:\n final_result = context\n else:\n _run_manager.on_text(\"Full Context:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n str(context), color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n intermediate_steps.append({\"context\": context})\n result = self.qa_chain(\n {\"question\": question, \"context\": context},\n callbacks=callbacks,\n )\n final_result = result[self.qa_chain.output_key]\n chain_result: Dict[str, Any] = {self.output_key: final_result}\n if self.return_intermediate_steps:\n chain_result[INTERMEDIATE_STEPS_KEY] = intermediate_steps\n return chain_result\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/cypher.html"}
+{"id": "5a64f19bb28a-0", "text": "Source code for langchain.chains.graph_qa.nebulagraph\n\"\"\"Question answering over a graph.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.graph_qa.prompts import CYPHER_QA_PROMPT, NGQL_GENERATION_PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.graphs.nebula_graph import NebulaGraph\nfrom langchain.prompts.base import BasePromptTemplate\n[docs]class NebulaGraphQAChain(Chain):\n \"\"\"Chain for question-answering against a graph by generating nGQL statements.\"\"\"\n graph: NebulaGraph = Field(exclude=True)\n ngql_generation_chain: LLMChain\n qa_chain: LLMChain\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n return _output_keys\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n *,\n qa_prompt: BasePromptTemplate = CYPHER_QA_PROMPT,\n ngql_prompt: BasePromptTemplate = NGQL_GENERATION_PROMPT,\n **kwargs: Any,", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/nebulagraph.html"}
+{"id": "5a64f19bb28a-1", "text": "**kwargs: Any,\n ) -> NebulaGraphQAChain:\n \"\"\"Initialize from LLM.\"\"\"\n qa_chain = LLMChain(llm=llm, prompt=qa_prompt)\n ngql_generation_chain = LLMChain(llm=llm, prompt=ngql_prompt)\n return cls(\n qa_chain=qa_chain,\n ngql_generation_chain=ngql_generation_chain,\n **kwargs,\n )\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"Generate nGQL statement, use it to look up in db and answer question.\"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n question = inputs[self.input_key]\n generated_ngql = self.ngql_generation_chain.run(\n {\"question\": question, \"schema\": self.graph.get_schema}, callbacks=callbacks\n )\n _run_manager.on_text(\"Generated nGQL:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n generated_ngql, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n context = self.graph.query(generated_ngql)\n _run_manager.on_text(\"Full Context:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n str(context), color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n result = self.qa_chain(\n {\"question\": question, \"context\": context},\n callbacks=callbacks,\n )\n return {self.output_key: result[self.qa_chain.output_key]}\nBy Harrison Chase", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/nebulagraph.html"}
+{"id": "5a64f19bb28a-2", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/nebulagraph.html"}
+{"id": "34a0cad81a9c-0", "text": "Source code for langchain.chains.conversational_retrieval.base\n\"\"\"Chain for chatting with a vector database.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom abc import abstractmethod\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, List, Optional, Tuple, Union\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n Callbacks,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.question_answering import load_qa_chain\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.schema import BaseMessage, BaseRetriever, Document\nfrom langchain.vectorstores.base import VectorStore\n# Depending on the memory type and configuration, the chat history format may differ.\n# This needs to be consolidated.\nCHAT_TURN_TYPE = Union[Tuple[str, str], BaseMessage]\n_ROLE_MAP = {\"human\": \"Human: \", \"ai\": \"Assistant: \"}\ndef _get_chat_history(chat_history: List[CHAT_TURN_TYPE]) -> str:\n buffer = \"\"\n for dialogue_turn in chat_history:\n if isinstance(dialogue_turn, BaseMessage):\n role_prefix = _ROLE_MAP.get(dialogue_turn.type, f\"{dialogue_turn.type}: \")\n buffer += f\"\\n{role_prefix}{dialogue_turn.content}\"\n elif isinstance(dialogue_turn, tuple):\n human = \"Human: \" + dialogue_turn[0]", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"}
+{"id": "34a0cad81a9c-1", "text": "human = \"Human: \" + dialogue_turn[0]\n ai = \"Assistant: \" + dialogue_turn[1]\n buffer += \"\\n\" + \"\\n\".join([human, ai])\n else:\n raise ValueError(\n f\"Unsupported chat history format: {type(dialogue_turn)}.\"\n f\" Full chat history: {chat_history} \"\n )\n return buffer\nclass BaseConversationalRetrievalChain(Chain):\n \"\"\"Chain for chatting with an index.\"\"\"\n combine_docs_chain: BaseCombineDocumentsChain\n question_generator: LLMChain\n output_key: str = \"answer\"\n return_source_documents: bool = False\n return_generated_question: bool = False\n get_chat_history: Optional[Callable[[CHAT_TURN_TYPE], str]] = None\n \"\"\"Return the source documents.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n allow_population_by_field_name = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Input keys.\"\"\"\n return [\"question\", \"chat_history\"]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n if self.return_source_documents:\n _output_keys = _output_keys + [\"source_documents\"]\n if self.return_generated_question:\n _output_keys = _output_keys + [\"generated_question\"]\n return _output_keys\n @abstractmethod\n def _get_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:\n \"\"\"Get docs.\"\"\"\n def _call(\n self,", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"}
+{"id": "34a0cad81a9c-2", "text": "\"\"\"Get docs.\"\"\"\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n question = inputs[\"question\"]\n get_chat_history = self.get_chat_history or _get_chat_history\n chat_history_str = get_chat_history(inputs[\"chat_history\"])\n if chat_history_str:\n callbacks = _run_manager.get_child()\n new_question = self.question_generator.run(\n question=question, chat_history=chat_history_str, callbacks=callbacks\n )\n else:\n new_question = question\n docs = self._get_docs(new_question, inputs)\n new_inputs = inputs.copy()\n new_inputs[\"question\"] = new_question\n new_inputs[\"chat_history\"] = chat_history_str\n answer = self.combine_docs_chain.run(\n input_documents=docs, callbacks=_run_manager.get_child(), **new_inputs\n )\n output: Dict[str, Any] = {self.output_key: answer}\n if self.return_source_documents:\n output[\"source_documents\"] = docs\n if self.return_generated_question:\n output[\"generated_question\"] = new_question\n return output\n @abstractmethod\n async def _aget_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:\n \"\"\"Get docs.\"\"\"\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n question = inputs[\"question\"]", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"}
+{"id": "34a0cad81a9c-3", "text": "question = inputs[\"question\"]\n get_chat_history = self.get_chat_history or _get_chat_history\n chat_history_str = get_chat_history(inputs[\"chat_history\"])\n if chat_history_str:\n callbacks = _run_manager.get_child()\n new_question = await self.question_generator.arun(\n question=question, chat_history=chat_history_str, callbacks=callbacks\n )\n else:\n new_question = question\n docs = await self._aget_docs(new_question, inputs)\n new_inputs = inputs.copy()\n new_inputs[\"question\"] = new_question\n new_inputs[\"chat_history\"] = chat_history_str\n answer = await self.combine_docs_chain.arun(\n input_documents=docs, callbacks=_run_manager.get_child(), **new_inputs\n )\n output: Dict[str, Any] = {self.output_key: answer}\n if self.return_source_documents:\n output[\"source_documents\"] = docs\n if self.return_generated_question:\n output[\"generated_question\"] = new_question\n return output\n def save(self, file_path: Union[Path, str]) -> None:\n if self.get_chat_history:\n raise ValueError(\"Chain not savable when `get_chat_history` is not None.\")\n super().save(file_path)\n[docs]class ConversationalRetrievalChain(BaseConversationalRetrievalChain):\n \"\"\"Chain for chatting with an index.\"\"\"\n retriever: BaseRetriever\n \"\"\"Index to connect to.\"\"\"\n max_tokens_limit: Optional[int] = None\n \"\"\"If set, restricts the docs to return from store based on tokens, enforced only\n for StuffDocumentChain\"\"\"\n def _reduce_tokens_below_limit(self, docs: List[Document]) -> List[Document]:\n num_docs = len(docs)", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"}
+{"id": "34a0cad81a9c-4", "text": "num_docs = len(docs)\n if self.max_tokens_limit and isinstance(\n self.combine_docs_chain, StuffDocumentsChain\n ):\n tokens = [\n self.combine_docs_chain.llm_chain.llm.get_num_tokens(doc.page_content)\n for doc in docs\n ]\n token_count = sum(tokens[:num_docs])\n while token_count > self.max_tokens_limit:\n num_docs -= 1\n token_count -= tokens[num_docs]\n return docs[:num_docs]\n def _get_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:\n docs = self.retriever.get_relevant_documents(question)\n return self._reduce_tokens_below_limit(docs)\n async def _aget_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:\n docs = await self.retriever.aget_relevant_documents(question)\n return self._reduce_tokens_below_limit(docs)\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n retriever: BaseRetriever,\n condense_question_prompt: BasePromptTemplate = CONDENSE_QUESTION_PROMPT,\n chain_type: str = \"stuff\",\n verbose: bool = False,\n condense_question_llm: Optional[BaseLanguageModel] = None,\n combine_docs_chain_kwargs: Optional[Dict] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> BaseConversationalRetrievalChain:\n \"\"\"Load chain from LLM.\"\"\"\n combine_docs_chain_kwargs = combine_docs_chain_kwargs or {}\n doc_chain = load_qa_chain(\n llm,\n chain_type=chain_type,\n verbose=verbose,\n callbacks=callbacks,", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"}
+{"id": "34a0cad81a9c-5", "text": "chain_type=chain_type,\n verbose=verbose,\n callbacks=callbacks,\n **combine_docs_chain_kwargs,\n )\n _llm = condense_question_llm or llm\n condense_question_chain = LLMChain(\n llm=_llm,\n prompt=condense_question_prompt,\n verbose=verbose,\n callbacks=callbacks,\n )\n return cls(\n retriever=retriever,\n combine_docs_chain=doc_chain,\n question_generator=condense_question_chain,\n callbacks=callbacks,\n **kwargs,\n )\n[docs]class ChatVectorDBChain(BaseConversationalRetrievalChain):\n \"\"\"Chain for chatting with a vector database.\"\"\"\n vectorstore: VectorStore = Field(alias=\"vectorstore\")\n top_k_docs_for_context: int = 4\n search_kwargs: dict = Field(default_factory=dict)\n @property\n def _chain_type(self) -> str:\n return \"chat-vector-db\"\n @root_validator()\n def raise_deprecation(cls, values: Dict) -> Dict:\n warnings.warn(\n \"`ChatVectorDBChain` is deprecated - \"\n \"please use `from langchain.chains import ConversationalRetrievalChain`\"\n )\n return values\n def _get_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:\n vectordbkwargs = inputs.get(\"vectordbkwargs\", {})\n full_kwargs = {**self.search_kwargs, **vectordbkwargs}\n return self.vectorstore.similarity_search(\n question, k=self.top_k_docs_for_context, **full_kwargs\n )\n async def _aget_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"}
+{"id": "34a0cad81a9c-6", "text": "raise NotImplementedError(\"ChatVectorDBChain does not support async\")\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n vectorstore: VectorStore,\n condense_question_prompt: BasePromptTemplate = CONDENSE_QUESTION_PROMPT,\n chain_type: str = \"stuff\",\n combine_docs_chain_kwargs: Optional[Dict] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> BaseConversationalRetrievalChain:\n \"\"\"Load chain from LLM.\"\"\"\n combine_docs_chain_kwargs = combine_docs_chain_kwargs or {}\n doc_chain = load_qa_chain(\n llm,\n chain_type=chain_type,\n callbacks=callbacks,\n **combine_docs_chain_kwargs,\n )\n condense_question_chain = LLMChain(\n llm=llm, prompt=condense_question_prompt, callbacks=callbacks\n )\n return cls(\n vectorstore=vectorstore,\n combine_docs_chain=doc_chain,\n question_generator=condense_question_chain,\n callbacks=callbacks,\n **kwargs,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"}
+{"id": "4c64e8238a87-0", "text": "Source code for langchain.chains.llm_checker.base\n\"\"\"Chain for question-answering with self-verification.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.llm_checker.prompt import (\n CHECK_ASSERTIONS_PROMPT,\n CREATE_DRAFT_ANSWER_PROMPT,\n LIST_ASSERTIONS_PROMPT,\n REVISED_ANSWER_PROMPT,\n)\nfrom langchain.chains.sequential import SequentialChain\nfrom langchain.prompts import PromptTemplate\ndef _load_question_to_checked_assertions_chain(\n llm: BaseLanguageModel,\n create_draft_answer_prompt: PromptTemplate,\n list_assertions_prompt: PromptTemplate,\n check_assertions_prompt: PromptTemplate,\n revised_answer_prompt: PromptTemplate,\n) -> SequentialChain:\n create_draft_answer_chain = LLMChain(\n llm=llm,\n prompt=create_draft_answer_prompt,\n output_key=\"statement\",\n )\n list_assertions_chain = LLMChain(\n llm=llm,\n prompt=list_assertions_prompt,\n output_key=\"assertions\",\n )\n check_assertions_chain = LLMChain(\n llm=llm,\n prompt=check_assertions_prompt,\n output_key=\"checked_assertions\",\n )\n revised_answer_chain = LLMChain(\n llm=llm,\n prompt=revised_answer_prompt,\n output_key=\"revised_statement\",\n )\n chains = [\n create_draft_answer_chain,", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_checker/base.html"}
+{"id": "4c64e8238a87-1", "text": ")\n chains = [\n create_draft_answer_chain,\n list_assertions_chain,\n check_assertions_chain,\n revised_answer_chain,\n ]\n question_to_checked_assertions_chain = SequentialChain(\n chains=chains,\n input_variables=[\"question\"],\n output_variables=[\"revised_statement\"],\n verbose=True,\n )\n return question_to_checked_assertions_chain\n[docs]class LLMCheckerChain(Chain):\n \"\"\"Chain for question-answering with self-verification.\n Example:\n .. code-block:: python\n from langchain import OpenAI, LLMCheckerChain\n llm = OpenAI(temperature=0.7)\n checker_chain = LLMCheckerChain.from_llm(llm)\n \"\"\"\n question_to_checked_assertions_chain: SequentialChain\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated] LLM wrapper to use.\"\"\"\n create_draft_answer_prompt: PromptTemplate = CREATE_DRAFT_ANSWER_PROMPT\n \"\"\"[Deprecated]\"\"\"\n list_assertions_prompt: PromptTemplate = LIST_ASSERTIONS_PROMPT\n \"\"\"[Deprecated]\"\"\"\n check_assertions_prompt: PromptTemplate = CHECK_ASSERTIONS_PROMPT\n \"\"\"[Deprecated]\"\"\"\n revised_answer_prompt: PromptTemplate = REVISED_ANSWER_PROMPT\n \"\"\"[Deprecated] Prompt to use when questioning the documents.\"\"\"\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_checker/base.html"}
+{"id": "4c64e8238a87-2", "text": "if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating an LLMCheckerChain with an llm is deprecated. \"\n \"Please instantiate with question_to_checked_assertions_chain \"\n \"or using the from_llm class method.\"\n )\n if (\n \"question_to_checked_assertions_chain\" not in values\n and values[\"llm\"] is not None\n ):\n question_to_checked_assertions_chain = (\n _load_question_to_checked_assertions_chain(\n values[\"llm\"],\n values.get(\n \"create_draft_answer_prompt\", CREATE_DRAFT_ANSWER_PROMPT\n ),\n values.get(\"list_assertions_prompt\", LIST_ASSERTIONS_PROMPT),\n values.get(\"check_assertions_prompt\", CHECK_ASSERTIONS_PROMPT),\n values.get(\"revised_answer_prompt\", REVISED_ANSWER_PROMPT),\n )\n )\n values[\n \"question_to_checked_assertions_chain\"\n ] = question_to_checked_assertions_chain\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the singular input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the singular output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.input_key]\n output = self.question_to_checked_assertions_chain(", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_checker/base.html"}
+{"id": "4c64e8238a87-3", "text": "output = self.question_to_checked_assertions_chain(\n {\"question\": question}, callbacks=_run_manager.get_child()\n )\n return {self.output_key: output[\"revised_statement\"]}\n @property\n def _chain_type(self) -> str:\n return \"llm_checker_chain\"\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n create_draft_answer_prompt: PromptTemplate = CREATE_DRAFT_ANSWER_PROMPT,\n list_assertions_prompt: PromptTemplate = LIST_ASSERTIONS_PROMPT,\n check_assertions_prompt: PromptTemplate = CHECK_ASSERTIONS_PROMPT,\n revised_answer_prompt: PromptTemplate = REVISED_ANSWER_PROMPT,\n **kwargs: Any,\n ) -> LLMCheckerChain:\n question_to_checked_assertions_chain = (\n _load_question_to_checked_assertions_chain(\n llm,\n create_draft_answer_prompt,\n list_assertions_prompt,\n check_assertions_prompt,\n revised_answer_prompt,\n )\n )\n return cls(\n question_to_checked_assertions_chain=question_to_checked_assertions_chain,\n **kwargs,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_checker/base.html"}
+{"id": "49c29dd25a6d-0", "text": "Source code for langchain.chains.qa_generation.base\nfrom __future__ import annotations\nimport json\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.qa_generation.prompt import PROMPT_SELECTOR\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter, TextSplitter\n[docs]class QAGenerationChain(Chain):\n llm_chain: LLMChain\n text_splitter: TextSplitter = Field(\n default=RecursiveCharacterTextSplitter(chunk_overlap=500)\n )\n input_key: str = \"text\"\n output_key: str = \"questions\"\n k: Optional[int] = None\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: Optional[BasePromptTemplate] = None,\n **kwargs: Any,\n ) -> QAGenerationChain:\n _prompt = prompt or PROMPT_SELECTOR.get_prompt(llm)\n chain = LLMChain(llm=llm, prompt=_prompt)\n return cls(llm_chain=chain, **kwargs)\n @property\n def _chain_type(self) -> str:\n raise NotImplementedError\n @property\n def input_keys(self) -> List[str]:\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, Any],", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_generation/base.html"}
+{"id": "49c29dd25a6d-1", "text": "def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, List]:\n docs = self.text_splitter.create_documents([inputs[self.input_key]])\n results = self.llm_chain.generate(\n [{\"text\": d.page_content} for d in docs], run_manager=run_manager\n )\n qa = [json.loads(res[0].text) for res in results.generations]\n return {self.output_key: qa}\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_generation/base.html"}
+{"id": "a76c3cd8c05b-0", "text": "Source code for langchain.chains.sql_database.base\n\"\"\"Chain for interacting with SQL Database.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.sql_database.prompt import DECIDER_PROMPT, PROMPT, SQL_PROMPTS\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.sql_database import SQLDatabase\nfrom langchain.tools.sql_database.prompt import QUERY_CHECKER\nINTERMEDIATE_STEPS_KEY = \"intermediate_steps\"\n[docs]class SQLDatabaseChain(Chain):\n \"\"\"Chain for interacting with SQL Database.\n Example:\n .. code-block:: python\n from langchain import SQLDatabaseChain, OpenAI, SQLDatabase\n db = SQLDatabase(...)\n db_chain = SQLDatabaseChain.from_llm(OpenAI(), db)\n \"\"\"\n llm_chain: LLMChain\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated] LLM wrapper to use.\"\"\"\n database: SQLDatabase = Field(exclude=True)\n \"\"\"SQL Database to connect to.\"\"\"\n prompt: Optional[BasePromptTemplate] = None\n \"\"\"[Deprecated] Prompt to use to translate natural language to SQL.\"\"\"\n top_k: int = 5\n \"\"\"Number of results to return from the query\"\"\"\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n return_intermediate_steps: bool = False", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"}
+{"id": "a76c3cd8c05b-1", "text": "return_intermediate_steps: bool = False\n \"\"\"Whether or not to return the intermediate steps along with the final answer.\"\"\"\n return_direct: bool = False\n \"\"\"Whether or not to return the result of querying the SQL table directly.\"\"\"\n use_query_checker: bool = False\n \"\"\"Whether or not the query checker tool should be used to attempt \n to fix the initial SQL from the LLM.\"\"\"\n query_checker_prompt: Optional[BasePromptTemplate] = None\n \"\"\"The prompt template that should be used by the query checker\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating an SQLDatabaseChain with an llm is deprecated. \"\n \"Please instantiate with llm_chain argument or using the from_llm \"\n \"class method.\"\n )\n if \"llm_chain\" not in values and values[\"llm\"] is not None:\n database = values[\"database\"]\n prompt = values.get(\"prompt\") or SQL_PROMPTS.get(\n database.dialect, PROMPT\n )\n values[\"llm_chain\"] = LLMChain(llm=values[\"llm\"], prompt=prompt)\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the singular input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the singular output key.\n :meta private:\n \"\"\"\n if not self.return_intermediate_steps:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"}
+{"id": "a76c3cd8c05b-2", "text": ":meta private:\n \"\"\"\n if not self.return_intermediate_steps:\n return [self.output_key]\n else:\n return [self.output_key, INTERMEDIATE_STEPS_KEY]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n input_text = f\"{inputs[self.input_key]}\\nSQLQuery:\"\n _run_manager.on_text(input_text, verbose=self.verbose)\n # If not present, then defaults to None which is all tables.\n table_names_to_use = inputs.get(\"table_names_to_use\")\n table_info = self.database.get_table_info(table_names=table_names_to_use)\n llm_inputs = {\n \"input\": input_text,\n \"top_k\": str(self.top_k),\n \"dialect\": self.database.dialect,\n \"table_info\": table_info,\n \"stop\": [\"\\nSQLResult:\"],\n }\n intermediate_steps: List = []\n try:\n intermediate_steps.append(llm_inputs) # input: sql generation\n sql_cmd = self.llm_chain.predict(\n callbacks=_run_manager.get_child(),\n **llm_inputs,\n ).strip()\n if not self.use_query_checker:\n _run_manager.on_text(sql_cmd, color=\"green\", verbose=self.verbose)\n intermediate_steps.append(\n sql_cmd\n ) # output: sql generation (no checker)\n intermediate_steps.append({\"sql_cmd\": sql_cmd}) # input: sql exec\n result = self.database.run(sql_cmd)", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"}
+{"id": "a76c3cd8c05b-3", "text": "result = self.database.run(sql_cmd)\n intermediate_steps.append(str(result)) # output: sql exec\n else:\n query_checker_prompt = self.query_checker_prompt or PromptTemplate(\n template=QUERY_CHECKER, input_variables=[\"query\", \"dialect\"]\n )\n query_checker_chain = LLMChain(\n llm=self.llm_chain.llm, prompt=query_checker_prompt\n )\n query_checker_inputs = {\n \"query\": sql_cmd,\n \"dialect\": self.database.dialect,\n }\n checked_sql_command: str = query_checker_chain.predict(\n callbacks=_run_manager.get_child(), **query_checker_inputs\n ).strip()\n intermediate_steps.append(\n checked_sql_command\n ) # output: sql generation (checker)\n _run_manager.on_text(\n checked_sql_command, color=\"green\", verbose=self.verbose\n )\n intermediate_steps.append(\n {\"sql_cmd\": checked_sql_command}\n ) # input: sql exec\n result = self.database.run(checked_sql_command)\n intermediate_steps.append(str(result)) # output: sql exec\n sql_cmd = checked_sql_command\n _run_manager.on_text(\"\\nSQLResult: \", verbose=self.verbose)\n _run_manager.on_text(result, color=\"yellow\", verbose=self.verbose)\n # If return direct, we just set the final result equal to\n # the result of the sql query result, otherwise try to get a human readable\n # final answer\n if self.return_direct:\n final_result = result\n else:\n _run_manager.on_text(\"\\nAnswer:\", verbose=self.verbose)\n input_text += f\"{sql_cmd}\\nSQLResult: {result}\\nAnswer:\"\n llm_inputs[\"input\"] = input_text", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"}
+{"id": "a76c3cd8c05b-4", "text": "llm_inputs[\"input\"] = input_text\n intermediate_steps.append(llm_inputs) # input: final answer\n final_result = self.llm_chain.predict(\n callbacks=_run_manager.get_child(),\n **llm_inputs,\n ).strip()\n intermediate_steps.append(final_result) # output: final answer\n _run_manager.on_text(final_result, color=\"green\", verbose=self.verbose)\n chain_result: Dict[str, Any] = {self.output_key: final_result}\n if self.return_intermediate_steps:\n chain_result[INTERMEDIATE_STEPS_KEY] = intermediate_steps\n return chain_result\n except Exception as exc:\n # Append intermediate steps to exception, to aid in logging and later\n # improvement of few shot prompt seeds\n exc.intermediate_steps = intermediate_steps # type: ignore\n raise exc\n @property\n def _chain_type(self) -> str:\n return \"sql_database_chain\"\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n db: SQLDatabase,\n prompt: Optional[BasePromptTemplate] = None,\n **kwargs: Any,\n ) -> SQLDatabaseChain:\n prompt = prompt or SQL_PROMPTS.get(db.dialect, PROMPT)\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n return cls(llm_chain=llm_chain, database=db, **kwargs)\n[docs]class SQLDatabaseSequentialChain(Chain):\n \"\"\"Chain for querying SQL database that is a sequential chain.\n The chain is as follows:\n 1. Based on the query, determine which tables to use.\n 2. Based on those tables, call the normal SQL database chain.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"}
+{"id": "a76c3cd8c05b-5", "text": "2. Based on those tables, call the normal SQL database chain.\n This is useful in cases where the number of tables in the database is large.\n \"\"\"\n decider_chain: LLMChain\n sql_chain: SQLDatabaseChain\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n return_intermediate_steps: bool = False\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n database: SQLDatabase,\n query_prompt: BasePromptTemplate = PROMPT,\n decider_prompt: BasePromptTemplate = DECIDER_PROMPT,\n **kwargs: Any,\n ) -> SQLDatabaseSequentialChain:\n \"\"\"Load the necessary chains.\"\"\"\n sql_chain = SQLDatabaseChain.from_llm(\n llm, database, prompt=query_prompt, **kwargs\n )\n decider_chain = LLMChain(\n llm=llm, prompt=decider_prompt, output_key=\"table_names\"\n )\n return cls(sql_chain=sql_chain, decider_chain=decider_chain, **kwargs)\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the singular input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the singular output key.\n :meta private:\n \"\"\"\n if not self.return_intermediate_steps:\n return [self.output_key]\n else:\n return [self.output_key, INTERMEDIATE_STEPS_KEY]\n def _call(\n self,\n inputs: Dict[str, Any],", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"}
+{"id": "a76c3cd8c05b-6", "text": "def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n _table_names = self.sql_chain.database.get_usable_table_names()\n table_names = \", \".join(_table_names)\n llm_inputs = {\n \"query\": inputs[self.input_key],\n \"table_names\": table_names,\n }\n _lowercased_table_names = [name.lower() for name in _table_names]\n table_names_from_chain = self.decider_chain.predict_and_parse(**llm_inputs)\n table_names_to_use = [\n name\n for name in table_names_from_chain\n if name.lower() in _lowercased_table_names\n ]\n _run_manager.on_text(\"Table names to use:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n str(table_names_to_use), color=\"yellow\", verbose=self.verbose\n )\n new_inputs = {\n self.sql_chain.input_key: inputs[self.input_key],\n \"table_names_to_use\": table_names_to_use,\n }\n return self.sql_chain(\n new_inputs, callbacks=_run_manager.get_child(), return_only_outputs=True\n )\n @property\n def _chain_type(self) -> str:\n return \"sql_database_sequential_chain\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"}
+{"id": "1b8e80a2cd38-0", "text": "Source code for langchain.chains.llm_math.base\n\"\"\"Chain that interprets a prompt and executes python code to do math.\"\"\"\nfrom __future__ import annotations\nimport math\nimport re\nimport warnings\nfrom typing import Any, Dict, List, Optional\nimport numexpr\nfrom pydantic import Extra, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.llm_math.prompt import PROMPT\nfrom langchain.prompts.base import BasePromptTemplate\n[docs]class LLMMathChain(Chain):\n \"\"\"Chain that interprets a prompt and executes python code to do math.\n Example:\n .. code-block:: python\n from langchain import LLMMathChain, OpenAI\n llm_math = LLMMathChain.from_llm(OpenAI())\n \"\"\"\n llm_chain: LLMChain\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated] LLM wrapper to use.\"\"\"\n prompt: BasePromptTemplate = PROMPT\n \"\"\"[Deprecated] Prompt to use to translate to python if necessary.\"\"\"\n input_key: str = \"question\" #: :meta private:\n output_key: str = \"answer\" #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:\n warnings.warn(", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html"}
+{"id": "1b8e80a2cd38-1", "text": "if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating an LLMMathChain with an llm is deprecated. \"\n \"Please instantiate with llm_chain argument or using the from_llm \"\n \"class method.\"\n )\n if \"llm_chain\" not in values and values[\"llm\"] is not None:\n prompt = values.get(\"prompt\", PROMPT)\n values[\"llm_chain\"] = LLMChain(llm=values[\"llm\"], prompt=prompt)\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _evaluate_expression(self, expression: str) -> str:\n try:\n local_dict = {\"pi\": math.pi, \"e\": math.e}\n output = str(\n numexpr.evaluate(\n expression.strip(),\n global_dict={}, # restrict access to globals\n local_dict=local_dict, # add common mathematical functions\n )\n )\n except Exception as e:\n raise ValueError(\n f'LLMMathChain._evaluate(\"{expression}\") raised error: {e}.'\n \" Please try again with a valid numerical expression\"\n )\n # Remove any leading and trailing brackets from the output\n return re.sub(r\"^\\[|\\]$\", \"\", output)\n def _process_llm_result(\n self, llm_output: str, run_manager: CallbackManagerForChainRun\n ) -> Dict[str, str]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html"}
+{"id": "1b8e80a2cd38-2", "text": ") -> Dict[str, str]:\n run_manager.on_text(llm_output, color=\"green\", verbose=self.verbose)\n llm_output = llm_output.strip()\n text_match = re.search(r\"^```text(.*?)```\", llm_output, re.DOTALL)\n if text_match:\n expression = text_match.group(1)\n output = self._evaluate_expression(expression)\n run_manager.on_text(\"\\nAnswer: \", verbose=self.verbose)\n run_manager.on_text(output, color=\"yellow\", verbose=self.verbose)\n answer = \"Answer: \" + output\n elif llm_output.startswith(\"Answer:\"):\n answer = llm_output\n elif \"Answer:\" in llm_output:\n answer = \"Answer: \" + llm_output.split(\"Answer:\")[-1]\n else:\n raise ValueError(f\"unknown format from LLM: {llm_output}\")\n return {self.output_key: answer}\n async def _aprocess_llm_result(\n self,\n llm_output: str,\n run_manager: AsyncCallbackManagerForChainRun,\n ) -> Dict[str, str]:\n await run_manager.on_text(llm_output, color=\"green\", verbose=self.verbose)\n llm_output = llm_output.strip()\n text_match = re.search(r\"^```text(.*?)```\", llm_output, re.DOTALL)\n if text_match:\n expression = text_match.group(1)\n output = self._evaluate_expression(expression)\n await run_manager.on_text(\"\\nAnswer: \", verbose=self.verbose)\n await run_manager.on_text(output, color=\"yellow\", verbose=self.verbose)\n answer = \"Answer: \" + output\n elif llm_output.startswith(\"Answer:\"):\n answer = llm_output", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html"}
+{"id": "1b8e80a2cd38-3", "text": "elif llm_output.startswith(\"Answer:\"):\n answer = llm_output\n elif \"Answer:\" in llm_output:\n answer = \"Answer: \" + llm_output.split(\"Answer:\")[-1]\n else:\n raise ValueError(f\"unknown format from LLM: {llm_output}\")\n return {self.output_key: answer}\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n _run_manager.on_text(inputs[self.input_key])\n llm_output = self.llm_chain.predict(\n question=inputs[self.input_key],\n stop=[\"```output\"],\n callbacks=_run_manager.get_child(),\n )\n return self._process_llm_result(llm_output, _run_manager)\n async def _acall(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n await _run_manager.on_text(inputs[self.input_key])\n llm_output = await self.llm_chain.apredict(\n question=inputs[self.input_key],\n stop=[\"```output\"],\n callbacks=_run_manager.get_child(),\n )\n return await self._aprocess_llm_result(llm_output, _run_manager)\n @property\n def _chain_type(self) -> str:\n return \"llm_math_chain\"\n[docs] @classmethod\n def from_llm(\n cls,", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html"}
+{"id": "1b8e80a2cd38-4", "text": "[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: BasePromptTemplate = PROMPT,\n **kwargs: Any,\n ) -> LLMMathChain:\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n return cls(llm_chain=llm_chain, **kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html"}
+{"id": "5324cdfd83b2-0", "text": "Source code for langchain.chains.api.base\n\"\"\"Chain that makes API calls and summarizes the responses to answer a question.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.api.prompt import API_RESPONSE_PROMPT, API_URL_PROMPT\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts import BasePromptTemplate\nfrom langchain.requests import TextRequestsWrapper\n[docs]class APIChain(Chain):\n \"\"\"Chain that makes API calls and summarizes the responses to answer a question.\"\"\"\n api_request_chain: LLMChain\n api_answer_chain: LLMChain\n requests_wrapper: TextRequestsWrapper = Field(exclude=True)\n api_docs: str\n question_key: str = \"question\" #: :meta private:\n output_key: str = \"output\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.question_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n @root_validator(pre=True)\n def validate_api_request_prompt(cls, values: Dict) -> Dict:\n \"\"\"Check that api request prompt expects the right variables.\"\"\"\n input_vars = values[\"api_request_chain\"].prompt.input_variables\n expected_vars = {\"question\", \"api_docs\"}\n if set(input_vars) != expected_vars:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/base.html"}
+{"id": "5324cdfd83b2-1", "text": "if set(input_vars) != expected_vars:\n raise ValueError(\n f\"Input variables should be {expected_vars}, got {input_vars}\"\n )\n return values\n @root_validator(pre=True)\n def validate_api_answer_prompt(cls, values: Dict) -> Dict:\n \"\"\"Check that api answer prompt expects the right variables.\"\"\"\n input_vars = values[\"api_answer_chain\"].prompt.input_variables\n expected_vars = {\"question\", \"api_docs\", \"api_url\", \"api_response\"}\n if set(input_vars) != expected_vars:\n raise ValueError(\n f\"Input variables should be {expected_vars}, got {input_vars}\"\n )\n return values\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.question_key]\n api_url = self.api_request_chain.predict(\n question=question,\n api_docs=self.api_docs,\n callbacks=_run_manager.get_child(),\n )\n _run_manager.on_text(api_url, color=\"green\", end=\"\\n\", verbose=self.verbose)\n api_response = self.requests_wrapper.get(api_url)\n _run_manager.on_text(\n api_response, color=\"yellow\", end=\"\\n\", verbose=self.verbose\n )\n answer = self.api_answer_chain.predict(\n question=question,\n api_docs=self.api_docs,\n api_url=api_url,\n api_response=api_response,\n callbacks=_run_manager.get_child(),\n )\n return {self.output_key: answer}\n async def _acall(\n self,", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/base.html"}
+{"id": "5324cdfd83b2-2", "text": "async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.question_key]\n api_url = await self.api_request_chain.apredict(\n question=question,\n api_docs=self.api_docs,\n callbacks=_run_manager.get_child(),\n )\n await _run_manager.on_text(\n api_url, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n api_response = await self.requests_wrapper.aget(api_url)\n await _run_manager.on_text(\n api_response, color=\"yellow\", end=\"\\n\", verbose=self.verbose\n )\n answer = await self.api_answer_chain.apredict(\n question=question,\n api_docs=self.api_docs,\n api_url=api_url,\n api_response=api_response,\n callbacks=_run_manager.get_child(),\n )\n return {self.output_key: answer}\n[docs] @classmethod\n def from_llm_and_api_docs(\n cls,\n llm: BaseLanguageModel,\n api_docs: str,\n headers: Optional[dict] = None,\n api_url_prompt: BasePromptTemplate = API_URL_PROMPT,\n api_response_prompt: BasePromptTemplate = API_RESPONSE_PROMPT,\n **kwargs: Any,\n ) -> APIChain:\n \"\"\"Load chain from just an LLM and the api docs.\"\"\"\n get_request_chain = LLMChain(llm=llm, prompt=api_url_prompt)\n requests_wrapper = TextRequestsWrapper(headers=headers)", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/base.html"}
+{"id": "5324cdfd83b2-3", "text": "requests_wrapper = TextRequestsWrapper(headers=headers)\n get_answer_chain = LLMChain(llm=llm, prompt=api_response_prompt)\n return cls(\n api_request_chain=get_request_chain,\n api_answer_chain=get_answer_chain,\n requests_wrapper=requests_wrapper,\n api_docs=api_docs,\n **kwargs,\n )\n @property\n def _chain_type(self) -> str:\n return \"api_chain\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/base.html"}
+{"id": "7ff2178f8c30-0", "text": "Source code for langchain.chains.api.openapi.chain\n\"\"\"Chain that makes API calls and summarizes the responses to answer a question.\"\"\"\nfrom __future__ import annotations\nimport json\nfrom typing import Any, Dict, List, NamedTuple, Optional, cast\nfrom pydantic import BaseModel, Field\nfrom requests import Response\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun, Callbacks\nfrom langchain.chains.api.openapi.requests_chain import APIRequesterChain\nfrom langchain.chains.api.openapi.response_chain import APIResponderChain\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.requests import Requests\nfrom langchain.tools.openapi.utils.api_models import APIOperation\nclass _ParamMapping(NamedTuple):\n \"\"\"Mapping from parameter name to parameter value.\"\"\"\n query_params: List[str]\n body_params: List[str]\n path_params: List[str]\n[docs]class OpenAPIEndpointChain(Chain, BaseModel):\n \"\"\"Chain interacts with an OpenAPI endpoint using natural language.\"\"\"\n api_request_chain: LLMChain\n api_response_chain: Optional[LLMChain]\n api_operation: APIOperation\n requests: Requests = Field(exclude=True, default_factory=Requests)\n param_mapping: _ParamMapping = Field(alias=\"param_mapping\")\n return_intermediate_steps: bool = False\n instructions_key: str = \"instructions\" #: :meta private:\n output_key: str = \"output\" #: :meta private:\n max_text_length: Optional[int] = Field(ge=0) #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.instructions_key]\n @property", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"}
+{"id": "7ff2178f8c30-1", "text": "\"\"\"\n return [self.instructions_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect output key.\n :meta private:\n \"\"\"\n if not self.return_intermediate_steps:\n return [self.output_key]\n else:\n return [self.output_key, \"intermediate_steps\"]\n def _construct_path(self, args: Dict[str, str]) -> str:\n \"\"\"Construct the path from the deserialized input.\"\"\"\n path = self.api_operation.base_url + self.api_operation.path\n for param in self.param_mapping.path_params:\n path = path.replace(f\"{{{param}}}\", str(args.pop(param, \"\")))\n return path\n def _extract_query_params(self, args: Dict[str, str]) -> Dict[str, str]:\n \"\"\"Extract the query params from the deserialized input.\"\"\"\n query_params = {}\n for param in self.param_mapping.query_params:\n if param in args:\n query_params[param] = args.pop(param)\n return query_params\n def _extract_body_params(self, args: Dict[str, str]) -> Optional[Dict[str, str]]:\n \"\"\"Extract the request body params from the deserialized input.\"\"\"\n body_params = None\n if self.param_mapping.body_params:\n body_params = {}\n for param in self.param_mapping.body_params:\n if param in args:\n body_params[param] = args.pop(param)\n return body_params\n[docs] def deserialize_json_input(self, serialized_args: str) -> dict:\n \"\"\"Use the serialized typescript dictionary.\n Resolve the path, query params dict, and optional requestBody dict.\n \"\"\"\n args: dict = json.loads(serialized_args)\n path = self._construct_path(args)", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"}
+{"id": "7ff2178f8c30-2", "text": "path = self._construct_path(args)\n body_params = self._extract_body_params(args)\n query_params = self._extract_query_params(args)\n return {\n \"url\": path,\n \"data\": body_params,\n \"params\": query_params,\n }\n def _get_output(self, output: str, intermediate_steps: dict) -> dict:\n \"\"\"Return the output from the API call.\"\"\"\n if self.return_intermediate_steps:\n return {\n self.output_key: output,\n \"intermediate_steps\": intermediate_steps,\n }\n else:\n return {self.output_key: output}\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n intermediate_steps = {}\n instructions = inputs[self.instructions_key]\n instructions = instructions[: self.max_text_length]\n _api_arguments = self.api_request_chain.predict_and_parse(\n instructions=instructions, callbacks=_run_manager.get_child()\n )\n api_arguments = cast(str, _api_arguments)\n intermediate_steps[\"request_args\"] = api_arguments\n _run_manager.on_text(\n api_arguments, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n if api_arguments.startswith(\"ERROR\"):\n return self._get_output(api_arguments, intermediate_steps)\n elif api_arguments.startswith(\"MESSAGE:\"):\n return self._get_output(\n api_arguments[len(\"MESSAGE:\") :], intermediate_steps\n )\n try:\n request_args = self.deserialize_json_input(api_arguments)\n method = getattr(self.requests, self.api_operation.method.value)", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"}
+{"id": "7ff2178f8c30-3", "text": "method = getattr(self.requests, self.api_operation.method.value)\n api_response: Response = method(**request_args)\n if api_response.status_code != 200:\n method_str = str(self.api_operation.method.value)\n response_text = (\n f\"{api_response.status_code}: {api_response.reason}\"\n + f\"\\nFor {method_str.upper()} {request_args['url']}\\n\"\n + f\"Called with args: {request_args['params']}\"\n )\n else:\n response_text = api_response.text\n except Exception as e:\n response_text = f\"Error with message {str(e)}\"\n response_text = response_text[: self.max_text_length]\n intermediate_steps[\"response_text\"] = response_text\n _run_manager.on_text(\n response_text, color=\"blue\", end=\"\\n\", verbose=self.verbose\n )\n if self.api_response_chain is not None:\n _answer = self.api_response_chain.predict_and_parse(\n response=response_text,\n instructions=instructions,\n callbacks=_run_manager.get_child(),\n )\n answer = cast(str, _answer)\n _run_manager.on_text(answer, color=\"yellow\", end=\"\\n\", verbose=self.verbose)\n return self._get_output(answer, intermediate_steps)\n else:\n return self._get_output(response_text, intermediate_steps)\n[docs] @classmethod\n def from_url_and_method(\n cls,\n spec_url: str,\n path: str,\n method: str,\n llm: BaseLanguageModel,\n requests: Optional[Requests] = None,\n return_intermediate_steps: bool = False,\n **kwargs: Any\n # TODO: Handle async\n ) -> \"OpenAPIEndpointChain\":", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"}
+{"id": "7ff2178f8c30-4", "text": "# TODO: Handle async\n ) -> \"OpenAPIEndpointChain\":\n \"\"\"Create an OpenAPIEndpoint from a spec at the specified url.\"\"\"\n operation = APIOperation.from_openapi_url(spec_url, path, method)\n return cls.from_api_operation(\n operation,\n requests=requests,\n llm=llm,\n return_intermediate_steps=return_intermediate_steps,\n **kwargs,\n )\n[docs] @classmethod\n def from_api_operation(\n cls,\n operation: APIOperation,\n llm: BaseLanguageModel,\n requests: Optional[Requests] = None,\n verbose: bool = False,\n return_intermediate_steps: bool = False,\n raw_response: bool = False,\n callbacks: Callbacks = None,\n **kwargs: Any\n # TODO: Handle async\n ) -> \"OpenAPIEndpointChain\":\n \"\"\"Create an OpenAPIEndpointChain from an operation and a spec.\"\"\"\n param_mapping = _ParamMapping(\n query_params=operation.query_params,\n body_params=operation.body_params,\n path_params=operation.path_params,\n )\n requests_chain = APIRequesterChain.from_llm_and_typescript(\n llm,\n typescript_definition=operation.to_typescript(),\n verbose=verbose,\n callbacks=callbacks,\n )\n if raw_response:\n response_chain = None\n else:\n response_chain = APIResponderChain.from_llm(\n llm, verbose=verbose, callbacks=callbacks\n )\n _requests = requests or Requests()\n return cls(\n api_request_chain=requests_chain,\n api_response_chain=response_chain,\n api_operation=operation,\n requests=_requests,\n param_mapping=param_mapping,", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"}
+{"id": "7ff2178f8c30-5", "text": "requests=_requests,\n param_mapping=param_mapping,\n verbose=verbose,\n return_intermediate_steps=return_intermediate_steps,\n callbacks=callbacks,\n **kwargs,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"}
+{"id": "c9dfcee453ee-0", "text": "Source code for langchain.chains.qa_with_sources.vector_db\n\"\"\"Question-answering with sources over a vector database.\"\"\"\nimport warnings\nfrom typing import Any, Dict, List\nfrom pydantic import Field, root_validator\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.qa_with_sources.base import BaseQAWithSourcesChain\nfrom langchain.docstore.document import Document\nfrom langchain.vectorstores.base import VectorStore\n[docs]class VectorDBQAWithSourcesChain(BaseQAWithSourcesChain):\n \"\"\"Question-answering with sources over a vector database.\"\"\"\n vectorstore: VectorStore = Field(exclude=True)\n \"\"\"Vector Database to connect to.\"\"\"\n k: int = 4\n \"\"\"Number of results to return from store\"\"\"\n reduce_k_below_max_tokens: bool = False\n \"\"\"Reduce the number of results to return from store based on tokens limit\"\"\"\n max_tokens_limit: int = 3375\n \"\"\"Restrict the docs to return from store based on tokens,\n enforced only for StuffDocumentChain and if reduce_k_below_max_tokens is to true\"\"\"\n search_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Extra search args.\"\"\"\n def _reduce_tokens_below_limit(self, docs: List[Document]) -> List[Document]:\n num_docs = len(docs)\n if self.reduce_k_below_max_tokens and isinstance(\n self.combine_documents_chain, StuffDocumentsChain\n ):\n tokens = [\n self.combine_documents_chain.llm_chain.llm.get_num_tokens(\n doc.page_content\n )\n for doc in docs\n ]\n token_count = sum(tokens[:num_docs])\n while token_count > self.max_tokens_limit:\n num_docs -= 1\n token_count -= tokens[num_docs]", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/vector_db.html"}
+{"id": "c9dfcee453ee-1", "text": "num_docs -= 1\n token_count -= tokens[num_docs]\n return docs[:num_docs]\n def _get_docs(self, inputs: Dict[str, Any]) -> List[Document]:\n question = inputs[self.question_key]\n docs = self.vectorstore.similarity_search(\n question, k=self.k, **self.search_kwargs\n )\n return self._reduce_tokens_below_limit(docs)\n async def _aget_docs(self, inputs: Dict[str, Any]) -> List[Document]:\n raise NotImplementedError(\"VectorDBQAWithSourcesChain does not support async\")\n @root_validator()\n def raise_deprecation(cls, values: Dict) -> Dict:\n warnings.warn(\n \"`VectorDBQAWithSourcesChain` is deprecated - \"\n \"please use `from langchain.chains import RetrievalQAWithSourcesChain`\"\n )\n return values\n @property\n def _chain_type(self) -> str:\n return \"vector_db_qa_with_sources_chain\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/vector_db.html"}
+{"id": "407c37287e7f-0", "text": "Source code for langchain.chains.qa_with_sources.base\n\"\"\"Question answering with sources over documents.\"\"\"\nfrom __future__ import annotations\nimport re\nfrom abc import ABC, abstractmethod\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.qa_with_sources.loading import load_qa_with_sources_chain\nfrom langchain.chains.qa_with_sources.map_reduce_prompt import (\n COMBINE_PROMPT,\n EXAMPLE_PROMPT,\n QUESTION_PROMPT,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.prompts.base import BasePromptTemplate\nclass BaseQAWithSourcesChain(Chain, ABC):\n \"\"\"Question answering with sources over documents.\"\"\"\n combine_documents_chain: BaseCombineDocumentsChain\n \"\"\"Chain to use to combine documents.\"\"\"\n question_key: str = \"question\" #: :meta private:\n input_docs_key: str = \"docs\" #: :meta private:\n answer_key: str = \"answer\" #: :meta private:\n sources_answer_key: str = \"sources\" #: :meta private:\n return_source_documents: bool = False\n \"\"\"Return the source documents.\"\"\"\n @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n document_prompt: BasePromptTemplate = EXAMPLE_PROMPT,", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html"}
+{"id": "407c37287e7f-1", "text": "document_prompt: BasePromptTemplate = EXAMPLE_PROMPT,\n question_prompt: BasePromptTemplate = QUESTION_PROMPT,\n combine_prompt: BasePromptTemplate = COMBINE_PROMPT,\n **kwargs: Any,\n ) -> BaseQAWithSourcesChain:\n \"\"\"Construct the chain from an LLM.\"\"\"\n llm_question_chain = LLMChain(llm=llm, prompt=question_prompt)\n llm_combine_chain = LLMChain(llm=llm, prompt=combine_prompt)\n combine_results_chain = StuffDocumentsChain(\n llm_chain=llm_combine_chain,\n document_prompt=document_prompt,\n document_variable_name=\"summaries\",\n )\n combine_document_chain = MapReduceDocumentsChain(\n llm_chain=llm_question_chain,\n combine_document_chain=combine_results_chain,\n document_variable_name=\"context\",\n )\n return cls(\n combine_documents_chain=combine_document_chain,\n **kwargs,\n )\n @classmethod\n def from_chain_type(\n cls,\n llm: BaseLanguageModel,\n chain_type: str = \"stuff\",\n chain_type_kwargs: Optional[dict] = None,\n **kwargs: Any,\n ) -> BaseQAWithSourcesChain:\n \"\"\"Load chain from chain type.\"\"\"\n _chain_kwargs = chain_type_kwargs or {}\n combine_document_chain = load_qa_with_sources_chain(\n llm, chain_type=chain_type, **_chain_kwargs\n )\n return cls(combine_documents_chain=combine_document_chain, **kwargs)\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html"}
+{"id": "407c37287e7f-2", "text": "def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.question_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n _output_keys = [self.answer_key, self.sources_answer_key]\n if self.return_source_documents:\n _output_keys = _output_keys + [\"source_documents\"]\n return _output_keys\n @root_validator(pre=True)\n def validate_naming(cls, values: Dict) -> Dict:\n \"\"\"Fix backwards compatability in naming.\"\"\"\n if \"combine_document_chain\" in values:\n values[\"combine_documents_chain\"] = values.pop(\"combine_document_chain\")\n return values\n @abstractmethod\n def _get_docs(self, inputs: Dict[str, Any]) -> List[Document]:\n \"\"\"Get docs to run questioning over.\"\"\"\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n docs = self._get_docs(inputs)\n answer = self.combine_documents_chain.run(\n input_documents=docs, callbacks=_run_manager.get_child(), **inputs\n )\n if re.search(r\"SOURCES:\\s\", answer):\n answer, sources = re.split(r\"SOURCES:\\s\", answer)\n else:\n sources = \"\"\n result: Dict[str, Any] = {\n self.answer_key: answer,\n self.sources_answer_key: sources,\n }\n if self.return_source_documents:\n result[\"source_documents\"] = docs", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html"}
+{"id": "407c37287e7f-3", "text": "}\n if self.return_source_documents:\n result[\"source_documents\"] = docs\n return result\n @abstractmethod\n async def _aget_docs(self, inputs: Dict[str, Any]) -> List[Document]:\n \"\"\"Get docs to run questioning over.\"\"\"\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n docs = await self._aget_docs(inputs)\n answer = await self.combine_documents_chain.arun(\n input_documents=docs, callbacks=_run_manager.get_child(), **inputs\n )\n if re.search(r\"SOURCES:\\s\", answer):\n answer, sources = re.split(r\"SOURCES:\\s\", answer)\n else:\n sources = \"\"\n result: Dict[str, Any] = {\n self.answer_key: answer,\n self.sources_answer_key: sources,\n }\n if self.return_source_documents:\n result[\"source_documents\"] = docs\n return result\n[docs]class QAWithSourcesChain(BaseQAWithSourcesChain):\n \"\"\"Question answering with sources over documents.\"\"\"\n input_docs_key: str = \"docs\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_docs_key, self.question_key]\n def _get_docs(self, inputs: Dict[str, Any]) -> List[Document]:\n return inputs.pop(self.input_docs_key)\n async def _aget_docs(self, inputs: Dict[str, Any]) -> List[Document]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html"}
+{"id": "407c37287e7f-4", "text": "return inputs.pop(self.input_docs_key)\n @property\n def _chain_type(self) -> str:\n return \"qa_with_sources_chain\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html"}
+{"id": "1ebe33b41017-0", "text": "Source code for langchain.chains.qa_with_sources.retrieval\n\"\"\"Question-answering with sources over an index.\"\"\"\nfrom typing import Any, Dict, List\nfrom pydantic import Field\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.qa_with_sources.base import BaseQAWithSourcesChain\nfrom langchain.docstore.document import Document\nfrom langchain.schema import BaseRetriever\n[docs]class RetrievalQAWithSourcesChain(BaseQAWithSourcesChain):\n \"\"\"Question-answering with sources over an index.\"\"\"\n retriever: BaseRetriever = Field(exclude=True)\n \"\"\"Index to connect to.\"\"\"\n reduce_k_below_max_tokens: bool = False\n \"\"\"Reduce the number of results to return from store based on tokens limit\"\"\"\n max_tokens_limit: int = 3375\n \"\"\"Restrict the docs to return from store based on tokens,\n enforced only for StuffDocumentChain and if reduce_k_below_max_tokens is to true\"\"\"\n def _reduce_tokens_below_limit(self, docs: List[Document]) -> List[Document]:\n num_docs = len(docs)\n if self.reduce_k_below_max_tokens and isinstance(\n self.combine_documents_chain, StuffDocumentsChain\n ):\n tokens = [\n self.combine_documents_chain.llm_chain.llm.get_num_tokens(\n doc.page_content\n )\n for doc in docs\n ]\n token_count = sum(tokens[:num_docs])\n while token_count > self.max_tokens_limit:\n num_docs -= 1\n token_count -= tokens[num_docs]\n return docs[:num_docs]\n def _get_docs(self, inputs: Dict[str, Any]) -> List[Document]:\n question = inputs[self.question_key]\n docs = self.retriever.get_relevant_documents(question)", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/retrieval.html"}
+{"id": "1ebe33b41017-1", "text": "docs = self.retriever.get_relevant_documents(question)\n return self._reduce_tokens_below_limit(docs)\n async def _aget_docs(self, inputs: Dict[str, Any]) -> List[Document]:\n question = inputs[self.question_key]\n docs = await self.retriever.aget_relevant_documents(question)\n return self._reduce_tokens_below_limit(docs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/retrieval.html"}
+{"id": "c25d7c62eb01-0", "text": "Source code for langchain.chains.flare.base\nfrom __future__ import annotations\nimport re\nfrom abc import abstractmethod\nfrom typing import Any, Dict, List, Optional, Sequence, Tuple\nimport numpy as np\nfrom pydantic import Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.chains.flare.prompts import (\n PROMPT,\n QUESTION_GENERATOR_PROMPT,\n FinishedOutputParser,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.llms import OpenAI\nfrom langchain.prompts import BasePromptTemplate\nfrom langchain.schema import BaseRetriever, Generation\nclass _ResponseChain(LLMChain):\n prompt: BasePromptTemplate = PROMPT\n @property\n def input_keys(self) -> List[str]:\n return self.prompt.input_variables\n def generate_tokens_and_log_probs(\n self,\n _input: Dict[str, Any],\n *,\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Tuple[Sequence[str], Sequence[float]]:\n llm_result = self.generate([_input], run_manager=run_manager)\n return self._extract_tokens_and_log_probs(llm_result.generations[0])\n @abstractmethod\n def _extract_tokens_and_log_probs(\n self, generations: List[Generation]\n ) -> Tuple[Sequence[str], Sequence[float]]:\n \"\"\"Extract tokens and log probs from response.\"\"\"\nclass _OpenAIResponseChain(_ResponseChain):\n llm: OpenAI = Field(\n default_factory=lambda: OpenAI(\n max_tokens=32, model_kwargs={\"logprobs\": 1}, temperature=0\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/flare/base.html"}
+{"id": "c25d7c62eb01-1", "text": ")\n )\n def _extract_tokens_and_log_probs(\n self, generations: List[Generation]\n ) -> Tuple[Sequence[str], Sequence[float]]:\n tokens = []\n log_probs = []\n for gen in generations:\n if gen.generation_info is None:\n raise ValueError\n tokens.extend(gen.generation_info[\"logprobs\"][\"tokens\"])\n log_probs.extend(gen.generation_info[\"logprobs\"][\"token_logprobs\"])\n return tokens, log_probs\nclass QuestionGeneratorChain(LLMChain):\n prompt: BasePromptTemplate = QUESTION_GENERATOR_PROMPT\n @property\n def input_keys(self) -> List[str]:\n return [\"user_input\", \"context\", \"response\"]\ndef _low_confidence_spans(\n tokens: Sequence[str],\n log_probs: Sequence[float],\n min_prob: float,\n min_token_gap: int,\n num_pad_tokens: int,\n) -> List[str]:\n _low_idx = np.where(np.exp(log_probs) < min_prob)[0]\n low_idx = [i for i in _low_idx if re.search(r\"\\w\", tokens[i])]\n if len(low_idx) == 0:\n return []\n spans = [[low_idx[0], low_idx[0] + num_pad_tokens + 1]]\n for i, idx in enumerate(low_idx[1:]):\n end = idx + num_pad_tokens + 1\n if idx - low_idx[i] < min_token_gap:\n spans[-1][1] = end\n else:\n spans.append([idx, end])\n return [\"\".join(tokens[start:end]) for start, end in spans]\n[docs]class FlareChain(Chain):\n question_generator_chain: QuestionGeneratorChain", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/flare/base.html"}
+{"id": "c25d7c62eb01-2", "text": "[docs]class FlareChain(Chain):\n question_generator_chain: QuestionGeneratorChain\n response_chain: _ResponseChain = Field(default_factory=_OpenAIResponseChain)\n output_parser: FinishedOutputParser = Field(default_factory=FinishedOutputParser)\n retriever: BaseRetriever\n min_prob: float = 0.2\n min_token_gap: int = 5\n num_pad_tokens: int = 2\n max_iter: int = 10\n start_with_retrieval: bool = True\n @property\n def input_keys(self) -> List[str]:\n return [\"user_input\"]\n @property\n def output_keys(self) -> List[str]:\n return [\"response\"]\n def _do_generation(\n self,\n questions: List[str],\n user_input: str,\n response: str,\n _run_manager: CallbackManagerForChainRun,\n ) -> Tuple[str, bool]:\n callbacks = _run_manager.get_child()\n docs = []\n for question in questions:\n docs.extend(self.retriever.get_relevant_documents(question))\n context = \"\\n\\n\".join(d.page_content for d in docs)\n result = self.response_chain.predict(\n user_input=user_input,\n context=context,\n response=response,\n callbacks=callbacks,\n )\n marginal, finished = self.output_parser.parse(result)\n return marginal, finished\n def _do_retrieval(\n self,\n low_confidence_spans: List[str],\n _run_manager: CallbackManagerForChainRun,\n user_input: str,\n response: str,\n initial_response: str,\n ) -> Tuple[str, bool]:\n question_gen_inputs = [\n {\n \"user_input\": user_input,", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/flare/base.html"}
+{"id": "c25d7c62eb01-3", "text": "question_gen_inputs = [\n {\n \"user_input\": user_input,\n \"current_response\": initial_response,\n \"uncertain_span\": span,\n }\n for span in low_confidence_spans\n ]\n callbacks = _run_manager.get_child()\n question_gen_outputs = self.question_generator_chain.apply(\n question_gen_inputs, callbacks=callbacks\n )\n questions = [\n output[self.question_generator_chain.output_keys[0]]\n for output in question_gen_outputs\n ]\n _run_manager.on_text(\n f\"Generated Questions: {questions}\", color=\"yellow\", end=\"\\n\"\n )\n return self._do_generation(questions, user_input, response, _run_manager)\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n user_input = inputs[self.input_keys[0]]\n response = \"\"\n for i in range(self.max_iter):\n _run_manager.on_text(\n f\"Current Response: {response}\", color=\"blue\", end=\"\\n\"\n )\n _input = {\"user_input\": user_input, \"context\": \"\", \"response\": response}\n tokens, log_probs = self.response_chain.generate_tokens_and_log_probs(\n _input, run_manager=_run_manager\n )\n low_confidence_spans = _low_confidence_spans(\n tokens,\n log_probs,\n self.min_prob,\n self.min_token_gap,\n self.num_pad_tokens,\n )\n initial_response = response.strip() + \" \" + \"\".join(tokens)", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/flare/base.html"}
+{"id": "c25d7c62eb01-4", "text": ")\n initial_response = response.strip() + \" \" + \"\".join(tokens)\n if not low_confidence_spans:\n response = initial_response\n final_response, finished = self.output_parser.parse(response)\n if finished:\n return {self.output_keys[0]: final_response}\n continue\n marginal, finished = self._do_retrieval(\n low_confidence_spans,\n _run_manager,\n user_input,\n response,\n initial_response,\n )\n response = response.strip() + \" \" + marginal\n if finished:\n break\n return {self.output_keys[0]: response}\n[docs] @classmethod\n def from_llm(\n cls, llm: BaseLanguageModel, max_generation_len: int = 32, **kwargs: Any\n ) -> FlareChain:\n question_gen_chain = QuestionGeneratorChain(llm=llm)\n response_llm = OpenAI(\n max_tokens=max_generation_len, model_kwargs={\"logprobs\": 1}, temperature=0\n )\n response_chain = _OpenAIResponseChain(llm=response_llm)\n return cls(\n question_generator_chain=question_gen_chain,\n response_chain=response_chain,\n **kwargs,\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/flare/base.html"}
+{"id": "85376e47619a-0", "text": "Source code for langchain.chains.retrieval_qa.base\n\"\"\"Chain for question-answering against a vector database.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom abc import abstractmethod\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.question_answering import load_qa_chain\nfrom langchain.chains.question_answering.stuff_prompt import PROMPT_SELECTOR\nfrom langchain.prompts import PromptTemplate\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.vectorstores.base import VectorStore\nclass BaseRetrievalQA(Chain):\n combine_documents_chain: BaseCombineDocumentsChain\n \"\"\"Chain to use to combine the documents.\"\"\"\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n return_source_documents: bool = False\n \"\"\"Return the source documents.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n allow_population_by_field_name = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"}
+{"id": "85376e47619a-1", "text": "def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n if self.return_source_documents:\n _output_keys = _output_keys + [\"source_documents\"]\n return _output_keys\n @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: Optional[PromptTemplate] = None,\n **kwargs: Any,\n ) -> BaseRetrievalQA:\n \"\"\"Initialize from LLM.\"\"\"\n _prompt = prompt or PROMPT_SELECTOR.get_prompt(llm)\n llm_chain = LLMChain(llm=llm, prompt=_prompt)\n document_prompt = PromptTemplate(\n input_variables=[\"page_content\"], template=\"Context:\\n{page_content}\"\n )\n combine_documents_chain = StuffDocumentsChain(\n llm_chain=llm_chain,\n document_variable_name=\"context\",\n document_prompt=document_prompt,\n )\n return cls(combine_documents_chain=combine_documents_chain, **kwargs)\n @classmethod\n def from_chain_type(\n cls,\n llm: BaseLanguageModel,\n chain_type: str = \"stuff\",\n chain_type_kwargs: Optional[dict] = None,\n **kwargs: Any,\n ) -> BaseRetrievalQA:\n \"\"\"Load chain from chain type.\"\"\"\n _chain_type_kwargs = chain_type_kwargs or {}\n combine_documents_chain = load_qa_chain(\n llm, chain_type=chain_type, **_chain_type_kwargs\n )\n return cls(combine_documents_chain=combine_documents_chain, **kwargs)\n @abstractmethod\n def _get_docs(self, question: str) -> List[Document]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"}
+{"id": "85376e47619a-2", "text": "def _get_docs(self, question: str) -> List[Document]:\n \"\"\"Get documents to do question answering over.\"\"\"\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Run get_relevant_text and llm on input query.\n If chain has 'return_source_documents' as 'True', returns\n the retrieved documents as well under the key 'source_documents'.\n Example:\n .. code-block:: python\n res = indexqa({'query': 'This is my query'})\n answer, docs = res['result'], res['source_documents']\n \"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.input_key]\n docs = self._get_docs(question)\n answer = self.combine_documents_chain.run(\n input_documents=docs, question=question, callbacks=_run_manager.get_child()\n )\n if self.return_source_documents:\n return {self.output_key: answer, \"source_documents\": docs}\n else:\n return {self.output_key: answer}\n @abstractmethod\n async def _aget_docs(self, question: str) -> List[Document]:\n \"\"\"Get documents to do question answering over.\"\"\"\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Run get_relevant_text and llm on input query.\n If chain has 'return_source_documents' as 'True', returns\n the retrieved documents as well under the key 'source_documents'.\n Example:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"}
+{"id": "85376e47619a-3", "text": "the retrieved documents as well under the key 'source_documents'.\n Example:\n .. code-block:: python\n res = indexqa({'query': 'This is my query'})\n answer, docs = res['result'], res['source_documents']\n \"\"\"\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.input_key]\n docs = await self._aget_docs(question)\n answer = await self.combine_documents_chain.arun(\n input_documents=docs, question=question, callbacks=_run_manager.get_child()\n )\n if self.return_source_documents:\n return {self.output_key: answer, \"source_documents\": docs}\n else:\n return {self.output_key: answer}\n[docs]class RetrievalQA(BaseRetrievalQA):\n \"\"\"Chain for question-answering against an index.\n Example:\n .. code-block:: python\n from langchain.llms import OpenAI\n from langchain.chains import RetrievalQA\n from langchain.faiss import FAISS\n from langchain.vectorstores.base import VectorStoreRetriever\n retriever = VectorStoreRetriever(vectorstore=FAISS(...))\n retrievalQA = RetrievalQA.from_llm(llm=OpenAI(), retriever=retriever)\n \"\"\"\n retriever: BaseRetriever = Field(exclude=True)\n def _get_docs(self, question: str) -> List[Document]:\n return self.retriever.get_relevant_documents(question)\n async def _aget_docs(self, question: str) -> List[Document]:\n return await self.retriever.aget_relevant_documents(question)\n @property\n def _chain_type(self) -> str:\n \"\"\"Return the chain type.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"}
+{"id": "85376e47619a-4", "text": "def _chain_type(self) -> str:\n \"\"\"Return the chain type.\"\"\"\n return \"retrieval_qa\"\n[docs]class VectorDBQA(BaseRetrievalQA):\n \"\"\"Chain for question-answering against a vector database.\"\"\"\n vectorstore: VectorStore = Field(exclude=True, alias=\"vectorstore\")\n \"\"\"Vector Database to connect to.\"\"\"\n k: int = 4\n \"\"\"Number of documents to query for.\"\"\"\n search_type: str = \"similarity\"\n \"\"\"Search type to use over vectorstore. `similarity` or `mmr`.\"\"\"\n search_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Extra search args.\"\"\"\n @root_validator()\n def raise_deprecation(cls, values: Dict) -> Dict:\n warnings.warn(\n \"`VectorDBQA` is deprecated - \"\n \"please use `from langchain.chains import RetrievalQA`\"\n )\n return values\n @root_validator()\n def validate_search_type(cls, values: Dict) -> Dict:\n \"\"\"Validate search type.\"\"\"\n if \"search_type\" in values:\n search_type = values[\"search_type\"]\n if search_type not in (\"similarity\", \"mmr\"):\n raise ValueError(f\"search_type of {search_type} not allowed.\")\n return values\n def _get_docs(self, question: str) -> List[Document]:\n if self.search_type == \"similarity\":\n docs = self.vectorstore.similarity_search(\n question, k=self.k, **self.search_kwargs\n )\n elif self.search_type == \"mmr\":\n docs = self.vectorstore.max_marginal_relevance_search(\n question, k=self.k, **self.search_kwargs\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"}
+{"id": "85376e47619a-5", "text": "question, k=self.k, **self.search_kwargs\n )\n else:\n raise ValueError(f\"search_type of {self.search_type} not allowed.\")\n return docs\n async def _aget_docs(self, question: str) -> List[Document]:\n raise NotImplementedError(\"VectorDBQA does not support async\")\n @property\n def _chain_type(self) -> str:\n \"\"\"Return the chain type.\"\"\"\n return \"vector_db_qa\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"}
+{"id": "39ed26610433-0", "text": "Source code for langchain.chains.conversation.base\n\"\"\"Chain that carries on a conversation and calls an LLM.\"\"\"\nfrom typing import Dict, List\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.chains.conversation.prompt import PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.memory.buffer import ConversationBufferMemory\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.schema import BaseMemory\n[docs]class ConversationChain(LLMChain):\n \"\"\"Chain to have a conversation and load context from memory.\n Example:\n .. code-block:: python\n from langchain import ConversationChain, OpenAI\n conversation = ConversationChain(llm=OpenAI())\n \"\"\"\n memory: BaseMemory = Field(default_factory=ConversationBufferMemory)\n \"\"\"Default memory store.\"\"\"\n prompt: BasePromptTemplate = PROMPT\n \"\"\"Default conversation prompt to use.\"\"\"\n input_key: str = \"input\" #: :meta private:\n output_key: str = \"response\" #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Use this since so some prompt vars come from history.\"\"\"\n return [self.input_key]\n @root_validator()\n def validate_prompt_input_variables(cls, values: Dict) -> Dict:\n \"\"\"Validate that prompt input variables are consistent.\"\"\"\n memory_keys = values[\"memory\"].memory_variables\n input_key = values[\"input_key\"]\n if input_key in memory_keys:\n raise ValueError(\n f\"The input key {input_key} was also found in the memory keys \"", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/conversation/base.html"}
+{"id": "39ed26610433-1", "text": "f\"The input key {input_key} was also found in the memory keys \"\n f\"({memory_keys}) - please provide keys that don't overlap.\"\n )\n prompt_variables = values[\"prompt\"].input_variables\n expected_keys = memory_keys + [input_key]\n if set(expected_keys) != set(prompt_variables):\n raise ValueError(\n \"Got unexpected prompt input variables. The prompt expects \"\n f\"{prompt_variables}, but got {memory_keys} as inputs from \"\n f\"memory, and {input_key} as the normal input key.\"\n )\n return values\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/conversation/base.html"}
+{"id": "b10c666cd675-0", "text": "Source code for langchain.chains.llm_bash.base\n\"\"\"Chain that interprets a prompt and executes bash code to perform bash operations.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport warnings\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.llm_bash.prompt import PROMPT\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.schema import OutputParserException\nfrom langchain.utilities.bash import BashProcess\nlogger = logging.getLogger(__name__)\n[docs]class LLMBashChain(Chain):\n \"\"\"Chain that interprets a prompt and executes bash code to perform bash operations.\n Example:\n .. code-block:: python\n from langchain import LLMBashChain, OpenAI\n llm_bash = LLMBashChain.from_llm(OpenAI())\n \"\"\"\n llm_chain: LLMChain\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated] LLM wrapper to use.\"\"\"\n input_key: str = \"question\" #: :meta private:\n output_key: str = \"answer\" #: :meta private:\n prompt: BasePromptTemplate = PROMPT\n \"\"\"[Deprecated]\"\"\"\n bash_process: BashProcess = Field(default_factory=BashProcess) #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_bash/base.html"}
+{"id": "b10c666cd675-1", "text": "def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating an LLMBashChain with an llm is deprecated. \"\n \"Please instantiate with llm_chain or using the from_llm class method.\"\n )\n if \"llm_chain\" not in values and values[\"llm\"] is not None:\n prompt = values.get(\"prompt\", PROMPT)\n values[\"llm_chain\"] = LLMChain(llm=values[\"llm\"], prompt=prompt)\n return values\n @root_validator\n def validate_prompt(cls, values: Dict) -> Dict:\n if values[\"llm_chain\"].prompt.output_parser is None:\n raise ValueError(\n \"The prompt used by llm_chain is expected to have an output_parser.\"\n )\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n _run_manager.on_text(inputs[self.input_key], verbose=self.verbose)\n t = self.llm_chain.predict(\n question=inputs[self.input_key], callbacks=_run_manager.get_child()\n )\n _run_manager.on_text(t, color=\"green\", verbose=self.verbose)", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_bash/base.html"}
+{"id": "b10c666cd675-2", "text": ")\n _run_manager.on_text(t, color=\"green\", verbose=self.verbose)\n t = t.strip()\n try:\n parser = self.llm_chain.prompt.output_parser\n command_list = parser.parse(t) # type: ignore[union-attr]\n except OutputParserException as e:\n _run_manager.on_chain_error(e, verbose=self.verbose)\n raise e\n if self.verbose:\n _run_manager.on_text(\"\\nCode: \", verbose=self.verbose)\n _run_manager.on_text(\n str(command_list), color=\"yellow\", verbose=self.verbose\n )\n output = self.bash_process.run(command_list)\n _run_manager.on_text(\"\\nAnswer: \", verbose=self.verbose)\n _run_manager.on_text(output, color=\"yellow\", verbose=self.verbose)\n return {self.output_key: output}\n @property\n def _chain_type(self) -> str:\n return \"llm_bash_chain\"\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: BasePromptTemplate = PROMPT,\n **kwargs: Any,\n ) -> LLMBashChain:\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n return cls(llm_chain=llm_chain, **kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_bash/base.html"}
+{"id": "50d43427de4a-0", "text": "Source code for langchain.chains.llm_summarization_checker.base\n\"\"\"Chain for summarization with self-verification.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.sequential import SequentialChain\nfrom langchain.prompts.prompt import PromptTemplate\nPROMPTS_DIR = Path(__file__).parent / \"prompts\"\nCREATE_ASSERTIONS_PROMPT = PromptTemplate.from_file(\n PROMPTS_DIR / \"create_facts.txt\", [\"summary\"]\n)\nCHECK_ASSERTIONS_PROMPT = PromptTemplate.from_file(\n PROMPTS_DIR / \"check_facts.txt\", [\"assertions\"]\n)\nREVISED_SUMMARY_PROMPT = PromptTemplate.from_file(\n PROMPTS_DIR / \"revise_summary.txt\", [\"checked_assertions\", \"summary\"]\n)\nARE_ALL_TRUE_PROMPT = PromptTemplate.from_file(\n PROMPTS_DIR / \"are_all_true_prompt.txt\", [\"checked_assertions\"]\n)\ndef _load_sequential_chain(\n llm: BaseLanguageModel,\n create_assertions_prompt: PromptTemplate,\n check_assertions_prompt: PromptTemplate,\n revised_summary_prompt: PromptTemplate,\n are_all_true_prompt: PromptTemplate,\n verbose: bool = False,\n) -> SequentialChain:\n chain = SequentialChain(\n chains=[\n LLMChain(\n llm=llm,\n prompt=create_assertions_prompt,\n output_key=\"assertions\",\n verbose=verbose,\n ),\n LLMChain(", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_summarization_checker/base.html"}
+{"id": "50d43427de4a-1", "text": "verbose=verbose,\n ),\n LLMChain(\n llm=llm,\n prompt=check_assertions_prompt,\n output_key=\"checked_assertions\",\n verbose=verbose,\n ),\n LLMChain(\n llm=llm,\n prompt=revised_summary_prompt,\n output_key=\"revised_summary\",\n verbose=verbose,\n ),\n LLMChain(\n llm=llm,\n output_key=\"all_true\",\n prompt=are_all_true_prompt,\n verbose=verbose,\n ),\n ],\n input_variables=[\"summary\"],\n output_variables=[\"all_true\", \"revised_summary\"],\n verbose=verbose,\n )\n return chain\n[docs]class LLMSummarizationCheckerChain(Chain):\n \"\"\"Chain for question-answering with self-verification.\n Example:\n .. code-block:: python\n from langchain import OpenAI, LLMSummarizationCheckerChain\n llm = OpenAI(temperature=0.0)\n checker_chain = LLMSummarizationCheckerChain.from_llm(llm)\n \"\"\"\n sequential_chain: SequentialChain\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated] LLM wrapper to use.\"\"\"\n create_assertions_prompt: PromptTemplate = CREATE_ASSERTIONS_PROMPT\n \"\"\"[Deprecated]\"\"\"\n check_assertions_prompt: PromptTemplate = CHECK_ASSERTIONS_PROMPT\n \"\"\"[Deprecated]\"\"\"\n revised_summary_prompt: PromptTemplate = REVISED_SUMMARY_PROMPT\n \"\"\"[Deprecated]\"\"\"\n are_all_true_prompt: PromptTemplate = ARE_ALL_TRUE_PROMPT\n \"\"\"[Deprecated]\"\"\"\n input_key: str = \"query\" #: :meta private:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_summarization_checker/base.html"}
+{"id": "50d43427de4a-2", "text": "input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n max_checks: int = 2\n \"\"\"Maximum number of times to check the assertions. Default to double-checking.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating an LLMSummarizationCheckerChain with an llm is \"\n \"deprecated. Please instantiate with\"\n \" sequential_chain argument or using the from_llm class method.\"\n )\n if \"sequential_chain\" not in values and values[\"llm\"] is not None:\n values[\"sequential_chain\"] = _load_sequential_chain(\n values[\"llm\"],\n values.get(\"create_assertions_prompt\", CREATE_ASSERTIONS_PROMPT),\n values.get(\"check_assertions_prompt\", CHECK_ASSERTIONS_PROMPT),\n values.get(\"revised_summary_prompt\", REVISED_SUMMARY_PROMPT),\n values.get(\"are_all_true_prompt\", ARE_ALL_TRUE_PROMPT),\n verbose=values.get(\"verbose\", False),\n )\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the singular input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the singular output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, Any],", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_summarization_checker/base.html"}
+{"id": "50d43427de4a-3", "text": "def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n all_true = False\n count = 0\n output = None\n original_input = inputs[self.input_key]\n chain_input = original_input\n while not all_true and count < self.max_checks:\n output = self.sequential_chain(\n {\"summary\": chain_input}, callbacks=_run_manager.get_child()\n )\n count += 1\n if output[\"all_true\"].strip() == \"True\":\n break\n if self.verbose:\n print(output[\"revised_summary\"])\n chain_input = output[\"revised_summary\"]\n if not output:\n raise ValueError(\"No output from chain\")\n return {self.output_key: output[\"revised_summary\"].strip()}\n @property\n def _chain_type(self) -> str:\n return \"llm_summarization_checker_chain\"\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n create_assertions_prompt: PromptTemplate = CREATE_ASSERTIONS_PROMPT,\n check_assertions_prompt: PromptTemplate = CHECK_ASSERTIONS_PROMPT,\n revised_summary_prompt: PromptTemplate = REVISED_SUMMARY_PROMPT,\n are_all_true_prompt: PromptTemplate = ARE_ALL_TRUE_PROMPT,\n verbose: bool = False,\n **kwargs: Any,\n ) -> LLMSummarizationCheckerChain:\n chain = _load_sequential_chain(\n llm,\n create_assertions_prompt,\n check_assertions_prompt,\n revised_summary_prompt,", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_summarization_checker/base.html"}
+{"id": "50d43427de4a-4", "text": "create_assertions_prompt,\n check_assertions_prompt,\n revised_summary_prompt,\n are_all_true_prompt,\n verbose=verbose,\n )\n return cls(sequential_chain=chain, verbose=verbose, **kwargs)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_summarization_checker/base.html"}
+{"id": "075462df6d33-0", "text": "Source code for langchain.chains.combine_documents.base\n\"\"\"Base interface for chains combining documents.\"\"\"\nfrom abc import ABC, abstractmethod\nfrom typing import Any, Dict, List, Optional, Tuple\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.docstore.document import Document\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter, TextSplitter\ndef format_document(doc: Document, prompt: BasePromptTemplate) -> str:\n \"\"\"Format a document into a string based on a prompt template.\"\"\"\n base_info = {\"page_content\": doc.page_content}\n base_info.update(doc.metadata)\n missing_metadata = set(prompt.input_variables).difference(base_info)\n if len(missing_metadata) > 0:\n required_metadata = [\n iv for iv in prompt.input_variables if iv != \"page_content\"\n ]\n raise ValueError(\n f\"Document prompt requires documents to have metadata variables: \"\n f\"{required_metadata}. Received document with missing metadata: \"\n f\"{list(missing_metadata)}.\"\n )\n document_info = {k: base_info[k] for k in prompt.input_variables}\n return prompt.format(**document_info)\nclass BaseCombineDocumentsChain(Chain, ABC):\n \"\"\"Base interface for chains combining documents.\"\"\"\n input_key: str = \"input_documents\" #: :meta private:\n output_key: str = \"output_text\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/base.html"}
+{"id": "075462df6d33-1", "text": "\"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def prompt_length(self, docs: List[Document], **kwargs: Any) -> Optional[int]:\n \"\"\"Return the prompt length given the documents passed in.\n Returns None if the method does not depend on the prompt length.\n \"\"\"\n return None\n @abstractmethod\n def combine_docs(self, docs: List[Document], **kwargs: Any) -> Tuple[str, dict]:\n \"\"\"Combine documents into a single string.\"\"\"\n @abstractmethod\n async def acombine_docs(\n self, docs: List[Document], **kwargs: Any\n ) -> Tuple[str, dict]:\n \"\"\"Combine documents into a single string asynchronously.\"\"\"\n def _call(\n self,\n inputs: Dict[str, List[Document]],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n docs = inputs[self.input_key]\n # Other keys are assumed to be needed for LLM prediction\n other_keys = {k: v for k, v in inputs.items() if k != self.input_key}\n output, extra_return_dict = self.combine_docs(\n docs, callbacks=_run_manager.get_child(), **other_keys\n )\n extra_return_dict[self.output_key] = output\n return extra_return_dict\n async def _acall(\n self,\n inputs: Dict[str, List[Document]],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/base.html"}
+{"id": "075462df6d33-2", "text": ") -> Dict[str, str]:\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n docs = inputs[self.input_key]\n # Other keys are assumed to be needed for LLM prediction\n other_keys = {k: v for k, v in inputs.items() if k != self.input_key}\n output, extra_return_dict = await self.acombine_docs(\n docs, callbacks=_run_manager.get_child(), **other_keys\n )\n extra_return_dict[self.output_key] = output\n return extra_return_dict\n[docs]class AnalyzeDocumentChain(Chain):\n \"\"\"Chain that splits documents, then analyzes it in pieces.\"\"\"\n input_key: str = \"input_document\" #: :meta private:\n text_splitter: TextSplitter = Field(default_factory=RecursiveCharacterTextSplitter)\n combine_docs_chain: BaseCombineDocumentsChain\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return self.combine_docs_chain.output_keys\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n document = inputs[self.input_key]\n docs = self.text_splitter.create_documents([document])\n # Other keys are assumed to be needed for LLM prediction\n other_keys: Dict = {k: v for k, v in inputs.items() if k != self.input_key}", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/base.html"}
+{"id": "075462df6d33-3", "text": "other_keys[self.combine_docs_chain.input_key] = docs\n return self.combine_docs_chain(\n other_keys, return_only_outputs=True, callbacks=_run_manager.get_child()\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/base.html"}
+{"id": "8a9dca50824d-0", "text": "Source code for langchain.chains.constitutional_ai.base\n\"\"\"Chain for applying constitutional principles to the outputs of another chain.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.constitutional_ai.models import ConstitutionalPrinciple\nfrom langchain.chains.constitutional_ai.principles import PRINCIPLES\nfrom langchain.chains.constitutional_ai.prompts import CRITIQUE_PROMPT, REVISION_PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts.base import BasePromptTemplate\n[docs]class ConstitutionalChain(Chain):\n \"\"\"Chain for applying constitutional principles.\n Example:\n .. code-block:: python\n from langchain.llms import OpenAI\n from langchain.chains import LLMChain, ConstitutionalChain\n from langchain.chains.constitutional_ai.models \\\n import ConstitutionalPrinciple\n llm = OpenAI()\n qa_prompt = PromptTemplate(\n template=\"Q: {question} A:\",\n input_variables=[\"question\"],\n )\n qa_chain = LLMChain(llm=llm, prompt=qa_prompt)\n constitutional_chain = ConstitutionalChain.from_llm(\n llm=llm,\n chain=qa_chain,\n constitutional_principles=[\n ConstitutionalPrinciple(\n critique_request=\"Tell if this answer is good.\",\n revision_request=\"Give a better answer.\",\n )\n ],\n )\n constitutional_chain.run(question=\"What is the meaning of life?\")\n \"\"\"\n chain: LLMChain\n constitutional_principles: List[ConstitutionalPrinciple]\n critique_chain: LLMChain", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/constitutional_ai/base.html"}
+{"id": "8a9dca50824d-1", "text": "critique_chain: LLMChain\n revision_chain: LLMChain\n return_intermediate_steps: bool = False\n[docs] @classmethod\n def get_principles(\n cls, names: Optional[List[str]] = None\n ) -> List[ConstitutionalPrinciple]:\n if names is None:\n return list(PRINCIPLES.values())\n else:\n return [PRINCIPLES[name] for name in names]\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n chain: LLMChain,\n critique_prompt: BasePromptTemplate = CRITIQUE_PROMPT,\n revision_prompt: BasePromptTemplate = REVISION_PROMPT,\n **kwargs: Any,\n ) -> \"ConstitutionalChain\":\n \"\"\"Create a chain from an LLM.\"\"\"\n critique_chain = LLMChain(llm=llm, prompt=critique_prompt)\n revision_chain = LLMChain(llm=llm, prompt=revision_prompt)\n return cls(\n chain=chain,\n critique_chain=critique_chain,\n revision_chain=revision_chain,\n **kwargs,\n )\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Defines the input keys.\"\"\"\n return self.chain.input_keys\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Defines the output keys.\"\"\"\n if self.return_intermediate_steps:\n return [\"output\", \"critiques_and_revisions\", \"initial_output\"]\n return [\"output\"]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/constitutional_ai/base.html"}
+{"id": "8a9dca50824d-2", "text": ") -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n response = self.chain.run(\n **inputs,\n callbacks=_run_manager.get_child(),\n )\n initial_response = response\n input_prompt = self.chain.prompt.format(**inputs)\n _run_manager.on_text(\n text=\"Initial response: \" + response + \"\\n\\n\",\n verbose=self.verbose,\n color=\"yellow\",\n )\n critiques_and_revisions = []\n for constitutional_principle in self.constitutional_principles:\n # Do critique\n raw_critique = self.critique_chain.run(\n input_prompt=input_prompt,\n output_from_model=response,\n critique_request=constitutional_principle.critique_request,\n callbacks=_run_manager.get_child(),\n )\n critique = self._parse_critique(\n output_string=raw_critique,\n ).strip()\n # if the critique contains \"No critique needed\", then we're done\n # in this case, initial_output is the same as output,\n # but we'll keep it for consistency\n if \"no critique needed\" in critique.lower():\n critiques_and_revisions.append((critique, \"\"))\n continue\n # Do revision\n revision = self.revision_chain.run(\n input_prompt=input_prompt,\n output_from_model=response,\n critique_request=constitutional_principle.critique_request,\n critique=critique,\n revision_request=constitutional_principle.revision_request,\n callbacks=_run_manager.get_child(),\n ).strip()\n response = revision\n critiques_and_revisions.append((critique, revision))\n _run_manager.on_text(", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/constitutional_ai/base.html"}
+{"id": "8a9dca50824d-3", "text": "_run_manager.on_text(\n text=f\"Applying {constitutional_principle.name}...\" + \"\\n\\n\",\n verbose=self.verbose,\n color=\"green\",\n )\n _run_manager.on_text(\n text=\"Critique: \" + critique + \"\\n\\n\",\n verbose=self.verbose,\n color=\"blue\",\n )\n _run_manager.on_text(\n text=\"Updated response: \" + revision + \"\\n\\n\",\n verbose=self.verbose,\n color=\"yellow\",\n )\n final_output: Dict[str, Any] = {\"output\": response}\n if self.return_intermediate_steps:\n final_output[\"initial_output\"] = initial_response\n final_output[\"critiques_and_revisions\"] = critiques_and_revisions\n return final_output\n @staticmethod\n def _parse_critique(output_string: str) -> str:\n if \"Revision request:\" not in output_string:\n return output_string\n output_string = output_string.split(\"Revision request:\")[0]\n if \"\\n\\n\" in output_string:\n output_string = output_string.split(\"\\n\\n\")[0]\n return output_string\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/chains/constitutional_ai/base.html"}
+{"id": "07ed722d47af-0", "text": "Source code for langchain.memory.token_buffer\nfrom typing import Any, Dict, List\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.schema import BaseMessage, get_buffer_string\n[docs]class ConversationTokenBufferMemory(BaseChatMemory):\n \"\"\"Buffer for storing conversation memory.\"\"\"\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n llm: BaseLanguageModel\n memory_key: str = \"history\"\n max_token_limit: int = 2000\n @property\n def buffer(self) -> List[BaseMessage]:\n \"\"\"String buffer of memory.\"\"\"\n return self.chat_memory.messages\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Return history buffer.\"\"\"\n buffer: Any = self.buffer\n if self.return_messages:\n final_buffer: Any = buffer\n else:\n final_buffer = get_buffer_string(\n buffer,\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n return {self.memory_key: final_buffer}\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer. Pruned.\"\"\"\n super().save_context(inputs, outputs)\n # Prune buffer if it exceeds max token limit\n buffer = self.chat_memory.messages\n curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)\n if curr_buffer_length > self.max_token_limit:", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/token_buffer.html"}
+{"id": "07ed722d47af-1", "text": "if curr_buffer_length > self.max_token_limit:\n pruned_memory = []\n while curr_buffer_length > self.max_token_limit:\n pruned_memory.append(buffer.pop(0))\n curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/token_buffer.html"}
+{"id": "5acc4a098776-0", "text": "Source code for langchain.memory.entity\nimport logging\nfrom abc import ABC, abstractmethod\nfrom itertools import islice\nfrom typing import Any, Dict, Iterable, List, Optional\nfrom pydantic import BaseModel, Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains.llm import LLMChain\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.memory.prompt import (\n ENTITY_EXTRACTION_PROMPT,\n ENTITY_SUMMARIZATION_PROMPT,\n)\nfrom langchain.memory.utils import get_prompt_input_key\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.schema import BaseMessage, get_buffer_string\nlogger = logging.getLogger(__name__)\nclass BaseEntityStore(BaseModel, ABC):\n @abstractmethod\n def get(self, key: str, default: Optional[str] = None) -> Optional[str]:\n \"\"\"Get entity value from store.\"\"\"\n pass\n @abstractmethod\n def set(self, key: str, value: Optional[str]) -> None:\n \"\"\"Set entity value in store.\"\"\"\n pass\n @abstractmethod\n def delete(self, key: str) -> None:\n \"\"\"Delete entity value from store.\"\"\"\n pass\n @abstractmethod\n def exists(self, key: str) -> bool:\n \"\"\"Check if entity exists in store.\"\"\"\n pass\n @abstractmethod\n def clear(self) -> None:\n \"\"\"Delete all entities from store.\"\"\"\n pass\n[docs]class InMemoryEntityStore(BaseEntityStore):\n \"\"\"Basic in-memory entity store.\"\"\"\n store: Dict[str, Optional[str]] = {}\n[docs] def get(self, key: str, default: Optional[str] = None) -> Optional[str]:\n return self.store.get(key, default)", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/entity.html"}
+{"id": "5acc4a098776-1", "text": "return self.store.get(key, default)\n[docs] def set(self, key: str, value: Optional[str]) -> None:\n self.store[key] = value\n[docs] def delete(self, key: str) -> None:\n del self.store[key]\n[docs] def exists(self, key: str) -> bool:\n return key in self.store\n[docs] def clear(self) -> None:\n return self.store.clear()\n[docs]class RedisEntityStore(BaseEntityStore):\n \"\"\"Redis-backed Entity store. Entities get a TTL of 1 day by default, and\n that TTL is extended by 3 days every time the entity is read back.\n \"\"\"\n redis_client: Any\n session_id: str = \"default\"\n key_prefix: str = \"memory_store\"\n ttl: Optional[int] = 60 * 60 * 24\n recall_ttl: Optional[int] = 60 * 60 * 24 * 3\n def __init__(\n self,\n session_id: str = \"default\",\n url: str = \"redis://localhost:6379/0\",\n key_prefix: str = \"memory_store\",\n ttl: Optional[int] = 60 * 60 * 24,\n recall_ttl: Optional[int] = 60 * 60 * 24 * 3,\n *args: Any,\n **kwargs: Any,\n ):\n try:\n import redis\n except ImportError:\n raise ImportError(\n \"Could not import redis python package. \"\n \"Please install it with `pip install redis`.\"\n )\n super().__init__(*args, **kwargs)\n try:\n self.redis_client = redis.Redis.from_url(url=url, decode_responses=True)", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/entity.html"}
+{"id": "5acc4a098776-2", "text": "self.redis_client = redis.Redis.from_url(url=url, decode_responses=True)\n except redis.exceptions.ConnectionError as error:\n logger.error(error)\n self.session_id = session_id\n self.key_prefix = key_prefix\n self.ttl = ttl\n self.recall_ttl = recall_ttl or ttl\n @property\n def full_key_prefix(self) -> str:\n return f\"{self.key_prefix}:{self.session_id}\"\n[docs] def get(self, key: str, default: Optional[str] = None) -> Optional[str]:\n res = (\n self.redis_client.getex(f\"{self.full_key_prefix}:{key}\", ex=self.recall_ttl)\n or default\n or \"\"\n )\n logger.debug(f\"REDIS MEM get '{self.full_key_prefix}:{key}': '{res}'\")\n return res\n[docs] def set(self, key: str, value: Optional[str]) -> None:\n if not value:\n return self.delete(key)\n self.redis_client.set(f\"{self.full_key_prefix}:{key}\", value, ex=self.ttl)\n logger.debug(\n f\"REDIS MEM set '{self.full_key_prefix}:{key}': '{value}' EX {self.ttl}\"\n )\n[docs] def delete(self, key: str) -> None:\n self.redis_client.delete(f\"{self.full_key_prefix}:{key}\")\n[docs] def exists(self, key: str) -> bool:\n return self.redis_client.exists(f\"{self.full_key_prefix}:{key}\") == 1\n[docs] def clear(self) -> None:\n # iterate a list in batches of size batch_size\n def batched(iterable: Iterable[Any], batch_size: int) -> Iterable[Any]:\n iterator = iter(iterable)", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/entity.html"}
+{"id": "5acc4a098776-3", "text": "iterator = iter(iterable)\n while batch := list(islice(iterator, batch_size)):\n yield batch\n for keybatch in batched(\n self.redis_client.scan_iter(f\"{self.full_key_prefix}:*\"), 500\n ):\n self.redis_client.delete(*keybatch)\n[docs]class SQLiteEntityStore(BaseEntityStore):\n \"\"\"SQLite-backed Entity store\"\"\"\n session_id: str = \"default\"\n table_name: str = \"memory_store\"\n def __init__(\n self,\n session_id: str = \"default\",\n db_file: str = \"entities.db\",\n table_name: str = \"memory_store\",\n *args: Any,\n **kwargs: Any,\n ):\n try:\n import sqlite3\n except ImportError:\n raise ImportError(\n \"Could not import sqlite3 python package. \"\n \"Please install it with `pip install sqlite3`.\"\n )\n super().__init__(*args, **kwargs)\n self.conn = sqlite3.connect(db_file)\n self.session_id = session_id\n self.table_name = table_name\n self._create_table_if_not_exists()\n @property\n def full_table_name(self) -> str:\n return f\"{self.table_name}_{self.session_id}\"\n def _create_table_if_not_exists(self) -> None:\n create_table_query = f\"\"\"\n CREATE TABLE IF NOT EXISTS {self.full_table_name} (\n key TEXT PRIMARY KEY,\n value TEXT\n )\n \"\"\"\n with self.conn:\n self.conn.execute(create_table_query)\n[docs] def get(self, key: str, default: Optional[str] = None) -> Optional[str]:\n query = f\"\"\"\n SELECT value", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/entity.html"}
+{"id": "5acc4a098776-4", "text": "query = f\"\"\"\n SELECT value\n FROM {self.full_table_name}\n WHERE key = ?\n \"\"\"\n cursor = self.conn.execute(query, (key,))\n result = cursor.fetchone()\n if result is not None:\n value = result[0]\n return value\n return default\n[docs] def set(self, key: str, value: Optional[str]) -> None:\n if not value:\n return self.delete(key)\n query = f\"\"\"\n INSERT OR REPLACE INTO {self.full_table_name} (key, value)\n VALUES (?, ?)\n \"\"\"\n with self.conn:\n self.conn.execute(query, (key, value))\n[docs] def delete(self, key: str) -> None:\n query = f\"\"\"\n DELETE FROM {self.full_table_name}\n WHERE key = ?\n \"\"\"\n with self.conn:\n self.conn.execute(query, (key,))\n[docs] def exists(self, key: str) -> bool:\n query = f\"\"\"\n SELECT 1\n FROM {self.full_table_name}\n WHERE key = ?\n LIMIT 1\n \"\"\"\n cursor = self.conn.execute(query, (key,))\n result = cursor.fetchone()\n return result is not None\n[docs] def clear(self) -> None:\n query = f\"\"\"\n DELETE FROM {self.full_table_name}\n \"\"\"\n with self.conn:\n self.conn.execute(query)\n[docs]class ConversationEntityMemory(BaseChatMemory):\n \"\"\"Entity extractor & summarizer to memory.\"\"\"\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n llm: BaseLanguageModel", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/entity.html"}
+{"id": "5acc4a098776-5", "text": "ai_prefix: str = \"AI\"\n llm: BaseLanguageModel\n entity_extraction_prompt: BasePromptTemplate = ENTITY_EXTRACTION_PROMPT\n entity_summarization_prompt: BasePromptTemplate = ENTITY_SUMMARIZATION_PROMPT\n entity_cache: List[str] = []\n k: int = 3\n chat_history_key: str = \"history\"\n entity_store: BaseEntityStore = Field(default_factory=InMemoryEntityStore)\n @property\n def buffer(self) -> List[BaseMessage]:\n return self.chat_memory.messages\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [\"entities\", self.chat_history_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Return history buffer.\"\"\"\n chain = LLMChain(llm=self.llm, prompt=self.entity_extraction_prompt)\n if self.input_key is None:\n prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)\n else:\n prompt_input_key = self.input_key\n buffer_string = get_buffer_string(\n self.buffer[-self.k * 2 :],\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n output = chain.predict(\n history=buffer_string,\n input=inputs[prompt_input_key],\n )\n if output.strip() == \"NONE\":\n entities = []\n else:\n entities = [w.strip() for w in output.split(\",\")]\n entity_summaries = {}\n for entity in entities:\n entity_summaries[entity] = self.entity_store.get(entity, \"\")\n self.entity_cache = entities\n if self.return_messages:", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/entity.html"}
+{"id": "5acc4a098776-6", "text": "self.entity_cache = entities\n if self.return_messages:\n buffer: Any = self.buffer[-self.k * 2 :]\n else:\n buffer = buffer_string\n return {\n self.chat_history_key: buffer,\n \"entities\": entity_summaries,\n }\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer.\"\"\"\n super().save_context(inputs, outputs)\n if self.input_key is None:\n prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)\n else:\n prompt_input_key = self.input_key\n buffer_string = get_buffer_string(\n self.buffer[-self.k * 2 :],\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n input_data = inputs[prompt_input_key]\n chain = LLMChain(llm=self.llm, prompt=self.entity_summarization_prompt)\n for entity in self.entity_cache:\n existing_summary = self.entity_store.get(entity, \"\")\n output = chain.predict(\n summary=existing_summary,\n entity=entity,\n history=buffer_string,\n input=input_data,\n )\n self.entity_store.set(entity, output.strip())\n[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n self.chat_memory.clear()\n self.entity_cache.clear()\n self.entity_store.clear()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/entity.html"}
+{"id": "d716902bc319-0", "text": "Source code for langchain.memory.vectorstore\n\"\"\"Class for a VectorStore-backed memory object.\"\"\"\nfrom typing import Any, Dict, List, Optional, Union\nfrom pydantic import Field\nfrom langchain.memory.chat_memory import BaseMemory\nfrom langchain.memory.utils import get_prompt_input_key\nfrom langchain.schema import Document\nfrom langchain.vectorstores.base import VectorStoreRetriever\n[docs]class VectorStoreRetrieverMemory(BaseMemory):\n \"\"\"Class for a VectorStore-backed memory object.\"\"\"\n retriever: VectorStoreRetriever = Field(exclude=True)\n \"\"\"VectorStoreRetriever object to connect to.\"\"\"\n memory_key: str = \"history\" #: :meta private:\n \"\"\"Key name to locate the memories in the result of load_memory_variables.\"\"\"\n input_key: Optional[str] = None\n \"\"\"Key name to index the inputs to load_memory_variables.\"\"\"\n return_docs: bool = False\n \"\"\"Whether or not to return the result of querying the database directly.\"\"\"\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"The list of keys emitted from the load_memory_variables method.\"\"\"\n return [self.memory_key]\n def _get_prompt_input_key(self, inputs: Dict[str, Any]) -> str:\n \"\"\"Get the input key for the prompt.\"\"\"\n if self.input_key is None:\n return get_prompt_input_key(inputs, self.memory_variables)\n return self.input_key\n[docs] def load_memory_variables(\n self, inputs: Dict[str, Any]\n ) -> Dict[str, Union[List[Document], str]]:\n \"\"\"Return history buffer.\"\"\"\n input_key = self._get_prompt_input_key(inputs)\n query = inputs[input_key]\n docs = self.retriever.get_relevant_documents(query)", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/vectorstore.html"}
+{"id": "d716902bc319-1", "text": "docs = self.retriever.get_relevant_documents(query)\n result: Union[List[Document], str]\n if not self.return_docs:\n result = \"\\n\".join([doc.page_content for doc in docs])\n else:\n result = docs\n return {self.memory_key: result}\n def _form_documents(\n self, inputs: Dict[str, Any], outputs: Dict[str, str]\n ) -> List[Document]:\n \"\"\"Format context from this conversation to buffer.\"\"\"\n # Each document should only include the current turn, not the chat history\n filtered_inputs = {k: v for k, v in inputs.items() if k != self.memory_key}\n texts = [\n f\"{k}: {v}\"\n for k, v in list(filtered_inputs.items()) + list(outputs.items())\n ]\n page_content = \"\\n\".join(texts)\n return [Document(page_content=page_content)]\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer.\"\"\"\n documents = self._form_documents(inputs, outputs)\n self.retriever.add_documents(documents)\n[docs] def clear(self) -> None:\n \"\"\"Nothing to clear.\"\"\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/vectorstore.html"}
+{"id": "a5d0d8815534-0", "text": "Source code for langchain.memory.buffer\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import root_validator\nfrom langchain.memory.chat_memory import BaseChatMemory, BaseMemory\nfrom langchain.memory.utils import get_prompt_input_key\nfrom langchain.schema import get_buffer_string\n[docs]class ConversationBufferMemory(BaseChatMemory):\n \"\"\"Buffer for storing conversation memory.\"\"\"\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n memory_key: str = \"history\" #: :meta private:\n @property\n def buffer(self) -> Any:\n \"\"\"String buffer of memory.\"\"\"\n if self.return_messages:\n return self.chat_memory.messages\n else:\n return get_buffer_string(\n self.chat_memory.messages,\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Return history buffer.\"\"\"\n return {self.memory_key: self.buffer}\n[docs]class ConversationStringBufferMemory(BaseMemory):\n \"\"\"Buffer for storing conversation memory.\"\"\"\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n \"\"\"Prefix to use for AI generated responses.\"\"\"\n buffer: str = \"\"\n output_key: Optional[str] = None\n input_key: Optional[str] = None\n memory_key: str = \"history\" #: :meta private:\n @root_validator()\n def validate_chains(cls, values: Dict) -> Dict:", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/buffer.html"}
+{"id": "a5d0d8815534-1", "text": "def validate_chains(cls, values: Dict) -> Dict:\n \"\"\"Validate that return messages is not True.\"\"\"\n if values.get(\"return_messages\", False):\n raise ValueError(\n \"return_messages must be False for ConversationStringBufferMemory\"\n )\n return values\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n \"\"\"Return history buffer.\"\"\"\n return {self.memory_key: self.buffer}\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer.\"\"\"\n if self.input_key is None:\n prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)\n else:\n prompt_input_key = self.input_key\n if self.output_key is None:\n if len(outputs) != 1:\n raise ValueError(f\"One output key expected, got {outputs.keys()}\")\n output_key = list(outputs.keys())[0]\n else:\n output_key = self.output_key\n human = f\"{self.human_prefix}: \" + inputs[prompt_input_key]\n ai = f\"{self.ai_prefix}: \" + outputs[output_key]\n self.buffer += \"\\n\" + \"\\n\".join([human, ai])\n[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n self.buffer = \"\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/buffer.html"}
+{"id": "2688281252ae-0", "text": "Source code for langchain.memory.summary\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Type\nfrom pydantic import BaseModel, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains.llm import LLMChain\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.memory.prompt import SUMMARY_PROMPT\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n SystemMessage,\n get_buffer_string,\n)\nclass SummarizerMixin(BaseModel):\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n llm: BaseLanguageModel\n prompt: BasePromptTemplate = SUMMARY_PROMPT\n summary_message_cls: Type[BaseMessage] = SystemMessage\n def predict_new_summary(\n self, messages: List[BaseMessage], existing_summary: str\n ) -> str:\n new_lines = get_buffer_string(\n messages,\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n chain = LLMChain(llm=self.llm, prompt=self.prompt)\n return chain.predict(summary=existing_summary, new_lines=new_lines)\n[docs]class ConversationSummaryMemory(BaseChatMemory, SummarizerMixin):\n \"\"\"Conversation summarizer to memory.\"\"\"\n buffer: str = \"\"\n memory_key: str = \"history\" #: :meta private:\n[docs] @classmethod\n def from_messages(\n cls,\n llm: BaseLanguageModel,\n chat_memory: BaseChatMessageHistory,\n *,\n summarize_step: int = 2,\n **kwargs: Any,\n ) -> ConversationSummaryMemory:", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/summary.html"}
+{"id": "2688281252ae-1", "text": "**kwargs: Any,\n ) -> ConversationSummaryMemory:\n obj = cls(llm=llm, chat_memory=chat_memory, **kwargs)\n for i in range(0, len(obj.chat_memory.messages), summarize_step):\n obj.buffer = obj.predict_new_summary(\n obj.chat_memory.messages[i : i + summarize_step], obj.buffer\n )\n return obj\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Return history buffer.\"\"\"\n if self.return_messages:\n buffer: Any = [self.summary_message_cls(content=self.buffer)]\n else:\n buffer = self.buffer\n return {self.memory_key: buffer}\n @root_validator()\n def validate_prompt_input_variables(cls, values: Dict) -> Dict:\n \"\"\"Validate that prompt input variables are consistent.\"\"\"\n prompt_variables = values[\"prompt\"].input_variables\n expected_keys = {\"summary\", \"new_lines\"}\n if expected_keys != set(prompt_variables):\n raise ValueError(\n \"Got unexpected prompt input variables. The prompt expects \"\n f\"{prompt_variables}, but it should have {expected_keys}.\"\n )\n return values\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer.\"\"\"\n super().save_context(inputs, outputs)\n self.buffer = self.predict_new_summary(\n self.chat_memory.messages[-2:], self.buffer\n )\n[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/summary.html"}
+{"id": "2688281252ae-2", "text": "[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n super().clear()\n self.buffer = \"\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/summary.html"}
+{"id": "9934c661e53f-0", "text": "Source code for langchain.memory.combined\nimport warnings\nfrom typing import Any, Dict, List, Set\nfrom pydantic import validator\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.schema import BaseMemory\n[docs]class CombinedMemory(BaseMemory):\n \"\"\"Class for combining multiple memories' data together.\"\"\"\n memories: List[BaseMemory]\n \"\"\"For tracking all the memories that should be accessed.\"\"\"\n @validator(\"memories\")\n def check_repeated_memory_variable(\n cls, value: List[BaseMemory]\n ) -> List[BaseMemory]:\n all_variables: Set[str] = set()\n for val in value:\n overlap = all_variables.intersection(val.memory_variables)\n if overlap:\n raise ValueError(\n f\"The same variables {overlap} are found in multiple\"\n \"memory object, which is not allowed by CombinedMemory.\"\n )\n all_variables |= set(val.memory_variables)\n return value\n @validator(\"memories\")\n def check_input_key(cls, value: List[BaseMemory]) -> List[BaseMemory]:\n \"\"\"Check that if memories are of type BaseChatMemory that input keys exist.\"\"\"\n for val in value:\n if isinstance(val, BaseChatMemory):\n if val.input_key is None:\n warnings.warn(\n \"When using CombinedMemory, \"\n \"input keys should be so the input is known. \"\n f\" Was not set on {val}\"\n )\n return value\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"All the memory variables that this instance provides.\"\"\"\n \"\"\"Collected from the all the linked memories.\"\"\"\n memory_variables = []\n for memory in self.memories:\n memory_variables.extend(memory.memory_variables)", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/combined.html"}
+{"id": "9934c661e53f-1", "text": "for memory in self.memories:\n memory_variables.extend(memory.memory_variables)\n return memory_variables\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n \"\"\"Load all vars from sub-memories.\"\"\"\n memory_data: Dict[str, Any] = {}\n # Collect vars from all sub-memories\n for memory in self.memories:\n data = memory.load_memory_variables(inputs)\n memory_data = {\n **memory_data,\n **data,\n }\n return memory_data\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this session for every memory.\"\"\"\n # Save context for all sub-memories\n for memory in self.memories:\n memory.save_context(inputs, outputs)\n[docs] def clear(self) -> None:\n \"\"\"Clear context from this session for every memory.\"\"\"\n for memory in self.memories:\n memory.clear()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/combined.html"}
+{"id": "7859f8bcaff1-0", "text": "Source code for langchain.memory.buffer_window\nfrom typing import Any, Dict, List\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.schema import BaseMessage, get_buffer_string\n[docs]class ConversationBufferWindowMemory(BaseChatMemory):\n \"\"\"Buffer for storing conversation memory.\"\"\"\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n memory_key: str = \"history\" #: :meta private:\n k: int = 5\n @property\n def buffer(self) -> List[BaseMessage]:\n \"\"\"String buffer of memory.\"\"\"\n return self.chat_memory.messages\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n \"\"\"Return history buffer.\"\"\"\n buffer: Any = self.buffer[-self.k * 2 :] if self.k > 0 else []\n if not self.return_messages:\n buffer = get_buffer_string(\n buffer,\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n return {self.memory_key: buffer}\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/buffer_window.html"}
+{"id": "87a574cff794-0", "text": "Source code for langchain.memory.simple\nfrom typing import Any, Dict, List\nfrom langchain.schema import BaseMemory\n[docs]class SimpleMemory(BaseMemory):\n \"\"\"Simple memory for storing context or other bits of information that shouldn't\n ever change between prompts.\n \"\"\"\n memories: Dict[str, Any] = dict()\n @property\n def memory_variables(self) -> List[str]:\n return list(self.memories.keys())\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n return self.memories\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Nothing should be saved or changed, my memory is set in stone.\"\"\"\n pass\n[docs] def clear(self) -> None:\n \"\"\"Nothing to clear, got a memory like a vault.\"\"\"\n pass\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/simple.html"}
+{"id": "97744e46ff89-0", "text": "Source code for langchain.memory.readonly\nfrom typing import Any, Dict, List\nfrom langchain.schema import BaseMemory\n[docs]class ReadOnlySharedMemory(BaseMemory):\n \"\"\"A memory wrapper that is read-only and cannot be changed.\"\"\"\n memory: BaseMemory\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Return memory variables.\"\"\"\n return self.memory.memory_variables\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n \"\"\"Load memory variables from memory.\"\"\"\n return self.memory.load_memory_variables(inputs)\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Nothing should be saved or changed\"\"\"\n pass\n[docs] def clear(self) -> None:\n \"\"\"Nothing to clear, got a memory like a vault.\"\"\"\n pass\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/readonly.html"}
+{"id": "7d4364b37c40-0", "text": "Source code for langchain.memory.kg\nfrom typing import Any, Dict, List, Type, Union\nfrom pydantic import Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains.llm import LLMChain\nfrom langchain.graphs import NetworkxEntityGraph\nfrom langchain.graphs.networkx_graph import KnowledgeTriple, get_entities, parse_triples\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.memory.prompt import (\n ENTITY_EXTRACTION_PROMPT,\n KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT,\n)\nfrom langchain.memory.utils import get_prompt_input_key\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.schema import (\n BaseMessage,\n SystemMessage,\n get_buffer_string,\n)\n[docs]class ConversationKGMemory(BaseChatMemory):\n \"\"\"Knowledge graph memory for storing conversation memory.\n Integrates with external knowledge graph to store and retrieve\n information about knowledge triples in the conversation.\n \"\"\"\n k: int = 2\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n kg: NetworkxEntityGraph = Field(default_factory=NetworkxEntityGraph)\n knowledge_extraction_prompt: BasePromptTemplate = KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT\n entity_extraction_prompt: BasePromptTemplate = ENTITY_EXTRACTION_PROMPT\n llm: BaseLanguageModel\n summary_message_cls: Type[BaseMessage] = SystemMessage\n \"\"\"Number of previous utterances to include in the context.\"\"\"\n memory_key: str = \"history\" #: :meta private:\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Return history buffer.\"\"\"\n entities = self._get_current_entities(inputs)\n summary_strings = []", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/kg.html"}
+{"id": "7d4364b37c40-1", "text": "entities = self._get_current_entities(inputs)\n summary_strings = []\n for entity in entities:\n knowledge = self.kg.get_entity_knowledge(entity)\n if knowledge:\n summary = f\"On {entity}: {'. '.join(knowledge)}.\"\n summary_strings.append(summary)\n context: Union[str, List]\n if not summary_strings:\n context = [] if self.return_messages else \"\"\n elif self.return_messages:\n context = [\n self.summary_message_cls(content=text) for text in summary_strings\n ]\n else:\n context = \"\\n\".join(summary_strings)\n return {self.memory_key: context}\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n def _get_prompt_input_key(self, inputs: Dict[str, Any]) -> str:\n \"\"\"Get the input key for the prompt.\"\"\"\n if self.input_key is None:\n return get_prompt_input_key(inputs, self.memory_variables)\n return self.input_key\n def _get_prompt_output_key(self, outputs: Dict[str, Any]) -> str:\n \"\"\"Get the output key for the prompt.\"\"\"\n if self.output_key is None:\n if len(outputs) != 1:\n raise ValueError(f\"One output key expected, got {outputs.keys()}\")\n return list(outputs.keys())[0]\n return self.output_key\n[docs] def get_current_entities(self, input_string: str) -> List[str]:\n chain = LLMChain(llm=self.llm, prompt=self.entity_extraction_prompt)\n buffer_string = get_buffer_string(\n self.chat_memory.messages[-self.k * 2 :],\n human_prefix=self.human_prefix,", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/kg.html"}
+{"id": "7d4364b37c40-2", "text": "human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n output = chain.predict(\n history=buffer_string,\n input=input_string,\n )\n return get_entities(output)\n def _get_current_entities(self, inputs: Dict[str, Any]) -> List[str]:\n \"\"\"Get the current entities in the conversation.\"\"\"\n prompt_input_key = self._get_prompt_input_key(inputs)\n return self.get_current_entities(inputs[prompt_input_key])\n[docs] def get_knowledge_triplets(self, input_string: str) -> List[KnowledgeTriple]:\n chain = LLMChain(llm=self.llm, prompt=self.knowledge_extraction_prompt)\n buffer_string = get_buffer_string(\n self.chat_memory.messages[-self.k * 2 :],\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n output = chain.predict(\n history=buffer_string,\n input=input_string,\n verbose=True,\n )\n knowledge = parse_triples(output)\n return knowledge\n def _get_and_update_kg(self, inputs: Dict[str, Any]) -> None:\n \"\"\"Get and update knowledge graph from the conversation history.\"\"\"\n prompt_input_key = self._get_prompt_input_key(inputs)\n knowledge = self.get_knowledge_triplets(inputs[prompt_input_key])\n for triple in knowledge:\n self.kg.add_triple(triple)\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer.\"\"\"\n super().save_context(inputs, outputs)\n self._get_and_update_kg(inputs)\n[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/kg.html"}
+{"id": "7d4364b37c40-3", "text": "[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n super().clear()\n self.kg.clear()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/kg.html"}
+{"id": "c03b8a97050b-0", "text": "Source code for langchain.memory.summary_buffer\nfrom typing import Any, Dict, List\nfrom pydantic import root_validator\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.memory.summary import SummarizerMixin\nfrom langchain.schema import BaseMessage, get_buffer_string\n[docs]class ConversationSummaryBufferMemory(BaseChatMemory, SummarizerMixin):\n \"\"\"Buffer with summarizer for storing conversation memory.\"\"\"\n max_token_limit: int = 2000\n moving_summary_buffer: str = \"\"\n memory_key: str = \"history\"\n @property\n def buffer(self) -> List[BaseMessage]:\n return self.chat_memory.messages\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Return history buffer.\"\"\"\n buffer = self.buffer\n if self.moving_summary_buffer != \"\":\n first_messages: List[BaseMessage] = [\n self.summary_message_cls(content=self.moving_summary_buffer)\n ]\n buffer = first_messages + buffer\n if self.return_messages:\n final_buffer: Any = buffer\n else:\n final_buffer = get_buffer_string(\n buffer, human_prefix=self.human_prefix, ai_prefix=self.ai_prefix\n )\n return {self.memory_key: final_buffer}\n @root_validator()\n def validate_prompt_input_variables(cls, values: Dict) -> Dict:\n \"\"\"Validate that prompt input variables are consistent.\"\"\"\n prompt_variables = values[\"prompt\"].input_variables\n expected_keys = {\"summary\", \"new_lines\"}\n if expected_keys != set(prompt_variables):\n raise ValueError(", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/summary_buffer.html"}
+{"id": "c03b8a97050b-1", "text": "if expected_keys != set(prompt_variables):\n raise ValueError(\n \"Got unexpected prompt input variables. The prompt expects \"\n f\"{prompt_variables}, but it should have {expected_keys}.\"\n )\n return values\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer.\"\"\"\n super().save_context(inputs, outputs)\n self.prune()\n[docs] def prune(self) -> None:\n \"\"\"Prune buffer if it exceeds max token limit\"\"\"\n buffer = self.chat_memory.messages\n curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)\n if curr_buffer_length > self.max_token_limit:\n pruned_memory = []\n while curr_buffer_length > self.max_token_limit:\n pruned_memory.append(buffer.pop(0))\n curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)\n self.moving_summary_buffer = self.predict_new_summary(\n pruned_memory, self.moving_summary_buffer\n )\n[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n super().clear()\n self.moving_summary_buffer = \"\"\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/summary_buffer.html"}
+{"id": "e7676da47855-0", "text": "Source code for langchain.memory.chat_message_histories.redis\nimport json\nimport logging\nfrom typing import List, Optional\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n _message_to_dict,\n messages_from_dict,\n)\nlogger = logging.getLogger(__name__)\n[docs]class RedisChatMessageHistory(BaseChatMessageHistory):\n def __init__(\n self,\n session_id: str,\n url: str = \"redis://localhost:6379/0\",\n key_prefix: str = \"message_store:\",\n ttl: Optional[int] = None,\n ):\n try:\n import redis\n except ImportError:\n raise ImportError(\n \"Could not import redis python package. \"\n \"Please install it with `pip install redis`.\"\n )\n try:\n self.redis_client = redis.Redis.from_url(url=url)\n except redis.exceptions.ConnectionError as error:\n logger.error(error)\n self.session_id = session_id\n self.key_prefix = key_prefix\n self.ttl = ttl\n @property\n def key(self) -> str:\n \"\"\"Construct the record key to use\"\"\"\n return self.key_prefix + self.session_id\n @property\n def messages(self) -> List[BaseMessage]: # type: ignore\n \"\"\"Retrieve the messages from Redis\"\"\"\n _items = self.redis_client.lrange(self.key, 0, -1)\n items = [json.loads(m.decode(\"utf-8\")) for m in _items[::-1]]\n messages = messages_from_dict(items)\n return messages\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Append the message to the record in Redis\"\"\"", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/redis.html"}
+{"id": "e7676da47855-1", "text": "\"\"\"Append the message to the record in Redis\"\"\"\n self.redis_client.lpush(self.key, json.dumps(_message_to_dict(message)))\n if self.ttl:\n self.redis_client.expire(self.key, self.ttl)\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from Redis\"\"\"\n self.redis_client.delete(self.key)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/redis.html"}
+{"id": "03b9178d7d2a-0", "text": "Source code for langchain.memory.chat_message_histories.in_memory\nfrom typing import List\nfrom pydantic import BaseModel\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n)\n[docs]class ChatMessageHistory(BaseChatMessageHistory, BaseModel):\n messages: List[BaseMessage] = []\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Add a self-created message to the store\"\"\"\n self.messages.append(message)\n[docs] def clear(self) -> None:\n self.messages = []\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/in_memory.html"}
+{"id": "eae6068a09dc-0", "text": "Source code for langchain.memory.chat_message_histories.cassandra\nimport json\nimport logging\nfrom typing import List\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n _message_to_dict,\n messages_from_dict,\n)\nlogger = logging.getLogger(__name__)\nDEFAULT_KEYSPACE_NAME = \"chat_history\"\nDEFAULT_TABLE_NAME = \"message_store\"\nDEFAULT_USERNAME = \"cassandra\"\nDEFAULT_PASSWORD = \"cassandra\"\nDEFAULT_PORT = 9042\n[docs]class CassandraChatMessageHistory(BaseChatMessageHistory):\n \"\"\"Chat message history that stores history in Cassandra.\n Args:\n contact_points: list of ips to connect to Cassandra cluster\n session_id: arbitrary key that is used to store the messages\n of a single chat session.\n port: port to connect to Cassandra cluster\n username: username to connect to Cassandra cluster\n password: password to connect to Cassandra cluster\n keyspace_name: name of the keyspace to use\n table_name: name of the table to use\n \"\"\"\n def __init__(\n self,\n contact_points: List[str],\n session_id: str,\n port: int = DEFAULT_PORT,\n username: str = DEFAULT_USERNAME,\n password: str = DEFAULT_PASSWORD,\n keyspace_name: str = DEFAULT_KEYSPACE_NAME,\n table_name: str = DEFAULT_TABLE_NAME,\n ):\n self.contact_points = contact_points\n self.session_id = session_id\n self.port = port\n self.username = username\n self.password = password\n self.keyspace_name = keyspace_name\n self.table_name = table_name\n try:\n from cassandra import (\n AuthenticationFailed,\n OperationTimedOut,\n UnresolvableContactPoints,\n )", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cassandra.html"}
+{"id": "eae6068a09dc-1", "text": "OperationTimedOut,\n UnresolvableContactPoints,\n )\n from cassandra.cluster import Cluster, PlainTextAuthProvider\n except ImportError:\n raise ValueError(\n \"Could not import cassandra-driver python package. \"\n \"Please install it with `pip install cassandra-driver`.\"\n )\n self.cluster: Cluster = Cluster(\n contact_points,\n port=port,\n auth_provider=PlainTextAuthProvider(\n username=self.username, password=self.password\n ),\n )\n try:\n self.session = self.cluster.connect()\n except (\n AuthenticationFailed,\n UnresolvableContactPoints,\n OperationTimedOut,\n ) as error:\n logger.error(\n \"Unable to establish connection with \\\n cassandra chat message history database\"\n )\n raise error\n self._prepare_cassandra()\n def _prepare_cassandra(self) -> None:\n \"\"\"Create the keyspace and table if they don't exist yet\"\"\"\n from cassandra import OperationTimedOut, Unavailable\n try:\n self.session.execute(\n f\"\"\"CREATE KEYSPACE IF NOT EXISTS \n {self.keyspace_name} WITH REPLICATION = \n {{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }};\"\"\"\n )\n except (OperationTimedOut, Unavailable) as error:\n logger.error(\n f\"Unable to create cassandra \\\n chat message history keyspace: {self.keyspace_name}.\"\n )\n raise error\n self.session.set_keyspace(self.keyspace_name)\n try:\n self.session.execute(\n f\"\"\"CREATE TABLE IF NOT EXISTS \n {self.table_name} (id UUID, session_id varchar,", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cassandra.html"}
+{"id": "eae6068a09dc-2", "text": "{self.table_name} (id UUID, session_id varchar, \n history text, PRIMARY KEY ((session_id), id) );\"\"\"\n )\n except (OperationTimedOut, Unavailable) as error:\n logger.error(\n f\"Unable to create cassandra \\\n chat message history table: {self.table_name}\"\n )\n raise error\n @property\n def messages(self) -> List[BaseMessage]: # type: ignore\n \"\"\"Retrieve the messages from Cassandra\"\"\"\n from cassandra import ReadFailure, ReadTimeout, Unavailable\n try:\n rows = self.session.execute(\n f\"\"\"SELECT * FROM {self.table_name}\n WHERE session_id = '{self.session_id}' ;\"\"\"\n )\n except (Unavailable, ReadTimeout, ReadFailure) as error:\n logger.error(\"Unable to Retreive chat history messages from cassadra\")\n raise error\n if rows:\n items = [json.loads(row.history) for row in rows]\n else:\n items = []\n messages = messages_from_dict(items)\n return messages\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Append the message to the record in Cassandra\"\"\"\n import uuid\n from cassandra import Unavailable, WriteFailure, WriteTimeout\n try:\n self.session.execute(\n \"\"\"INSERT INTO message_store\n (id, session_id, history) VALUES (%s, %s, %s);\"\"\",\n (uuid.uuid4(), self.session_id, json.dumps(_message_to_dict(message))),\n )\n except (Unavailable, WriteTimeout, WriteFailure) as error:\n logger.error(\"Unable to write chat history messages to cassandra\")\n raise error", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cassandra.html"}
+{"id": "eae6068a09dc-3", "text": "logger.error(\"Unable to write chat history messages to cassandra\")\n raise error\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from Cassandra\"\"\"\n from cassandra import OperationTimedOut, Unavailable\n try:\n self.session.execute(\n f\"DELETE FROM {self.table_name} WHERE session_id = '{self.session_id}';\"\n )\n except (Unavailable, OperationTimedOut) as error:\n logger.error(\"Unable to clear chat history messages from cassandra\")\n raise error\n def __del__(self) -> None:\n if self.session:\n self.session.shutdown()\n if self.cluster:\n self.cluster.shutdown()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cassandra.html"}
+{"id": "a1c2c68264be-0", "text": "Source code for langchain.memory.chat_message_histories.file\nimport json\nimport logging\nfrom pathlib import Path\nfrom typing import List\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n messages_from_dict,\n messages_to_dict,\n)\nlogger = logging.getLogger(__name__)\n[docs]class FileChatMessageHistory(BaseChatMessageHistory):\n \"\"\"\n Chat message history that stores history in a local file.\n Args:\n file_path: path of the local file to store the messages.\n \"\"\"\n def __init__(self, file_path: str):\n self.file_path = Path(file_path)\n if not self.file_path.exists():\n self.file_path.touch()\n self.file_path.write_text(json.dumps([]))\n @property\n def messages(self) -> List[BaseMessage]: # type: ignore\n \"\"\"Retrieve the messages from the local file\"\"\"\n items = json.loads(self.file_path.read_text())\n messages = messages_from_dict(items)\n return messages\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Append the message to the record in the local file\"\"\"\n messages = messages_to_dict(self.messages)\n messages.append(messages_to_dict([message])[0])\n self.file_path.write_text(json.dumps(messages))\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from the local file\"\"\"\n self.file_path.write_text(json.dumps([]))\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/file.html"}
+{"id": "18233249c401-0", "text": "Source code for langchain.memory.chat_message_histories.mongodb\nimport json\nimport logging\nfrom typing import List\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n _message_to_dict,\n messages_from_dict,\n)\nlogger = logging.getLogger(__name__)\nDEFAULT_DBNAME = \"chat_history\"\nDEFAULT_COLLECTION_NAME = \"message_store\"\n[docs]class MongoDBChatMessageHistory(BaseChatMessageHistory):\n \"\"\"Chat message history that stores history in MongoDB.\n Args:\n connection_string: connection string to connect to MongoDB\n session_id: arbitrary key that is used to store the messages\n of a single chat session.\n database_name: name of the database to use\n collection_name: name of the collection to use\n \"\"\"\n def __init__(\n self,\n connection_string: str,\n session_id: str,\n database_name: str = DEFAULT_DBNAME,\n collection_name: str = DEFAULT_COLLECTION_NAME,\n ):\n from pymongo import MongoClient, errors\n self.connection_string = connection_string\n self.session_id = session_id\n self.database_name = database_name\n self.collection_name = collection_name\n try:\n self.client: MongoClient = MongoClient(connection_string)\n except errors.ConnectionFailure as error:\n logger.error(error)\n self.db = self.client[database_name]\n self.collection = self.db[collection_name]\n self.collection.create_index(\"SessionId\")\n @property\n def messages(self) -> List[BaseMessage]: # type: ignore\n \"\"\"Retrieve the messages from MongoDB\"\"\"\n from pymongo import errors\n try:\n cursor = self.collection.find({\"SessionId\": self.session_id})\n except errors.OperationFailure as error:\n logger.error(error)\n if cursor:", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/mongodb.html"}
+{"id": "18233249c401-1", "text": "except errors.OperationFailure as error:\n logger.error(error)\n if cursor:\n items = [json.loads(document[\"History\"]) for document in cursor]\n else:\n items = []\n messages = messages_from_dict(items)\n return messages\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Append the message to the record in MongoDB\"\"\"\n from pymongo import errors\n try:\n self.collection.insert_one(\n {\n \"SessionId\": self.session_id,\n \"History\": json.dumps(_message_to_dict(message)),\n }\n )\n except errors.WriteError as err:\n logger.error(err)\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from MongoDB\"\"\"\n from pymongo import errors\n try:\n self.collection.delete_many({\"SessionId\": self.session_id})\n except errors.WriteError as err:\n logger.error(err)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/mongodb.html"}
+{"id": "4fa91a1ffffa-0", "text": "Source code for langchain.memory.chat_message_histories.momento\nfrom __future__ import annotations\nimport json\nfrom datetime import timedelta\nfrom typing import TYPE_CHECKING, Any, Optional\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n _message_to_dict,\n messages_from_dict,\n)\nfrom langchain.utils import get_from_env\nif TYPE_CHECKING:\n import momento\ndef _ensure_cache_exists(cache_client: momento.CacheClient, cache_name: str) -> None:\n \"\"\"Create cache if it doesn't exist.\n Raises:\n SdkException: Momento service or network error\n Exception: Unexpected response\n \"\"\"\n from momento.responses import CreateCache\n create_cache_response = cache_client.create_cache(cache_name)\n if isinstance(create_cache_response, CreateCache.Success) or isinstance(\n create_cache_response, CreateCache.CacheAlreadyExists\n ):\n return None\n elif isinstance(create_cache_response, CreateCache.Error):\n raise create_cache_response.inner_exception\n else:\n raise Exception(f\"Unexpected response cache creation: {create_cache_response}\")\n[docs]class MomentoChatMessageHistory(BaseChatMessageHistory):\n \"\"\"Chat message history cache that uses Momento as a backend.\n See https://gomomento.com/\"\"\"\n def __init__(\n self,\n session_id: str,\n cache_client: momento.CacheClient,\n cache_name: str,\n *,\n key_prefix: str = \"message_store:\",\n ttl: Optional[timedelta] = None,\n ensure_cache_exists: bool = True,\n ):\n \"\"\"Instantiate a chat message history cache that uses Momento as a backend.\n Note: to instantiate the cache client passed to MomentoChatMessageHistory,", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/momento.html"}
+{"id": "4fa91a1ffffa-1", "text": "Note: to instantiate the cache client passed to MomentoChatMessageHistory,\n you must have a Momento account at https://gomomento.com/.\n Args:\n session_id (str): The session ID to use for this chat session.\n cache_client (CacheClient): The Momento cache client.\n cache_name (str): The name of the cache to use to store the messages.\n key_prefix (str, optional): The prefix to apply to the cache key.\n Defaults to \"message_store:\".\n ttl (Optional[timedelta], optional): The TTL to use for the messages.\n Defaults to None, ie the default TTL of the cache will be used.\n ensure_cache_exists (bool, optional): Create the cache if it doesn't exist.\n Defaults to True.\n Raises:\n ImportError: Momento python package is not installed.\n TypeError: cache_client is not of type momento.CacheClientObject\n \"\"\"\n try:\n from momento import CacheClient\n from momento.requests import CollectionTtl\n except ImportError:\n raise ImportError(\n \"Could not import momento python package. \"\n \"Please install it with `pip install momento`.\"\n )\n if not isinstance(cache_client, CacheClient):\n raise TypeError(\"cache_client must be a momento.CacheClient object.\")\n if ensure_cache_exists:\n _ensure_cache_exists(cache_client, cache_name)\n self.key = key_prefix + session_id\n self.cache_client = cache_client\n self.cache_name = cache_name\n if ttl is not None:\n self.ttl = CollectionTtl.of(ttl)\n else:\n self.ttl = CollectionTtl.from_cache_ttl()\n[docs] @classmethod\n def from_client_params(\n cls,\n session_id: str,", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/momento.html"}
+{"id": "4fa91a1ffffa-2", "text": "def from_client_params(\n cls,\n session_id: str,\n cache_name: str,\n ttl: timedelta,\n *,\n configuration: Optional[momento.config.Configuration] = None,\n auth_token: Optional[str] = None,\n **kwargs: Any,\n ) -> MomentoChatMessageHistory:\n \"\"\"Construct cache from CacheClient parameters.\"\"\"\n try:\n from momento import CacheClient, Configurations, CredentialProvider\n except ImportError:\n raise ImportError(\n \"Could not import momento python package. \"\n \"Please install it with `pip install momento`.\"\n )\n if configuration is None:\n configuration = Configurations.Laptop.v1()\n auth_token = auth_token or get_from_env(\"auth_token\", \"MOMENTO_AUTH_TOKEN\")\n credentials = CredentialProvider.from_string(auth_token)\n cache_client = CacheClient(configuration, credentials, default_ttl=ttl)\n return cls(session_id, cache_client, cache_name, ttl=ttl, **kwargs)\n @property\n def messages(self) -> list[BaseMessage]: # type: ignore[override]\n \"\"\"Retrieve the messages from Momento.\n Raises:\n SdkException: Momento service or network error\n Exception: Unexpected response\n Returns:\n list[BaseMessage]: List of cached messages\n \"\"\"\n from momento.responses import CacheListFetch\n fetch_response = self.cache_client.list_fetch(self.cache_name, self.key)\n if isinstance(fetch_response, CacheListFetch.Hit):\n items = [json.loads(m) for m in fetch_response.value_list_string]\n return messages_from_dict(items)\n elif isinstance(fetch_response, CacheListFetch.Miss):\n return []\n elif isinstance(fetch_response, CacheListFetch.Error):", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/momento.html"}
+{"id": "4fa91a1ffffa-3", "text": "return []\n elif isinstance(fetch_response, CacheListFetch.Error):\n raise fetch_response.inner_exception\n else:\n raise Exception(f\"Unexpected response: {fetch_response}\")\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Store a message in the cache.\n Args:\n message (BaseMessage): The message object to store.\n Raises:\n SdkException: Momento service or network error.\n Exception: Unexpected response.\n \"\"\"\n from momento.responses import CacheListPushBack\n item = json.dumps(_message_to_dict(message))\n push_response = self.cache_client.list_push_back(\n self.cache_name, self.key, item, ttl=self.ttl\n )\n if isinstance(push_response, CacheListPushBack.Success):\n return None\n elif isinstance(push_response, CacheListPushBack.Error):\n raise push_response.inner_exception\n else:\n raise Exception(f\"Unexpected response: {push_response}\")\n[docs] def clear(self) -> None:\n \"\"\"Remove the session's messages from the cache.\n Raises:\n SdkException: Momento service or network error.\n Exception: Unexpected response.\n \"\"\"\n from momento.responses import CacheDelete\n delete_response = self.cache_client.delete(self.cache_name, self.key)\n if isinstance(delete_response, CacheDelete.Success):\n return None\n elif isinstance(delete_response, CacheDelete.Error):\n raise delete_response.inner_exception\n else:\n raise Exception(f\"Unexpected response: {delete_response}\")\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/momento.html"}
+{"id": "302daf0307b7-0", "text": "Source code for langchain.memory.chat_message_histories.cosmos_db\n\"\"\"Azure CosmosDB Memory History.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom types import TracebackType\nfrom typing import TYPE_CHECKING, Any, List, Optional, Type\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n messages_from_dict,\n messages_to_dict,\n)\nlogger = logging.getLogger(__name__)\nif TYPE_CHECKING:\n from azure.cosmos import ContainerProxy\n[docs]class CosmosDBChatMessageHistory(BaseChatMessageHistory):\n \"\"\"Chat history backed by Azure CosmosDB.\"\"\"\n def __init__(\n self,\n cosmos_endpoint: str,\n cosmos_database: str,\n cosmos_container: str,\n session_id: str,\n user_id: str,\n credential: Any = None,\n connection_string: Optional[str] = None,\n ttl: Optional[int] = None,\n cosmos_client_kwargs: Optional[dict] = None,\n ):\n \"\"\"\n Initializes a new instance of the CosmosDBChatMessageHistory class.\n Make sure to call prepare_cosmos or use the context manager to make\n sure your database is ready.\n Either a credential or a connection string must be provided.\n :param cosmos_endpoint: The connection endpoint for the Azure Cosmos DB account.\n :param cosmos_database: The name of the database to use.\n :param cosmos_container: The name of the container to use.\n :param session_id: The session ID to use, can be overwritten while loading.\n :param user_id: The user ID to use, can be overwritten while loading.\n :param credential: The credential to use to authenticate to Azure Cosmos DB.\n :param connection_string: The connection string to use to authenticate.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cosmos_db.html"}
+{"id": "302daf0307b7-1", "text": ":param connection_string: The connection string to use to authenticate.\n :param ttl: The time to live (in seconds) to use for documents in the container.\n :param cosmos_client_kwargs: Additional kwargs to pass to the CosmosClient.\n \"\"\"\n self.cosmos_endpoint = cosmos_endpoint\n self.cosmos_database = cosmos_database\n self.cosmos_container = cosmos_container\n self.credential = credential\n self.conn_string = connection_string\n self.session_id = session_id\n self.user_id = user_id\n self.ttl = ttl\n self.messages: List[BaseMessage] = []\n try:\n from azure.cosmos import ( # pylint: disable=import-outside-toplevel # noqa: E501\n CosmosClient,\n )\n except ImportError as exc:\n raise ImportError(\n \"You must install the azure-cosmos package to use the CosmosDBChatMessageHistory.\" # noqa: E501\n ) from exc\n if self.credential:\n self._client = CosmosClient(\n url=self.cosmos_endpoint,\n credential=self.credential,\n **cosmos_client_kwargs or {},\n )\n elif self.conn_string:\n self._client = CosmosClient.from_connection_string(\n conn_str=self.conn_string,\n **cosmos_client_kwargs or {},\n )\n else:\n raise ValueError(\"Either a connection string or a credential must be set.\")\n self._container: Optional[ContainerProxy] = None\n[docs] def prepare_cosmos(self) -> None:\n \"\"\"Prepare the CosmosDB client.\n Use this function or the context manager to make sure your database is ready.\n \"\"\"\n try:\n from azure.cosmos import ( # pylint: disable=import-outside-toplevel # noqa: E501", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cosmos_db.html"}
+{"id": "302daf0307b7-2", "text": "PartitionKey,\n )\n except ImportError as exc:\n raise ImportError(\n \"You must install the azure-cosmos package to use the CosmosDBChatMessageHistory.\" # noqa: E501\n ) from exc\n database = self._client.create_database_if_not_exists(self.cosmos_database)\n self._container = database.create_container_if_not_exists(\n self.cosmos_container,\n partition_key=PartitionKey(\"/user_id\"),\n default_ttl=self.ttl,\n )\n self.load_messages()\n def __enter__(self) -> \"CosmosDBChatMessageHistory\":\n \"\"\"Context manager entry point.\"\"\"\n self._client.__enter__()\n self.prepare_cosmos()\n return self\n def __exit__(\n self,\n exc_type: Optional[Type[BaseException]],\n exc_val: Optional[BaseException],\n traceback: Optional[TracebackType],\n ) -> None:\n \"\"\"Context manager exit\"\"\"\n self.upsert_messages()\n self._client.__exit__(exc_type, exc_val, traceback)\n[docs] def load_messages(self) -> None:\n \"\"\"Retrieve the messages from Cosmos\"\"\"\n if not self._container:\n raise ValueError(\"Container not initialized\")\n try:\n from azure.cosmos.exceptions import ( # pylint: disable=import-outside-toplevel # noqa: E501\n CosmosHttpResponseError,\n )\n except ImportError as exc:\n raise ImportError(\n \"You must install the azure-cosmos package to use the CosmosDBChatMessageHistory.\" # noqa: E501\n ) from exc\n try:\n item = self._container.read_item(\n item=self.session_id, partition_key=self.user_id\n )\n except CosmosHttpResponseError:", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cosmos_db.html"}
+{"id": "302daf0307b7-3", "text": ")\n except CosmosHttpResponseError:\n logger.info(\"no session found\")\n return\n if \"messages\" in item and len(item[\"messages\"]) > 0:\n self.messages = messages_from_dict(item[\"messages\"])\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Add a self-created message to the store\"\"\"\n self.messages.append(message)\n self.upsert_messages()\n[docs] def upsert_messages(self) -> None:\n \"\"\"Update the cosmosdb item.\"\"\"\n if not self._container:\n raise ValueError(\"Container not initialized\")\n self._container.upsert_item(\n body={\n \"id\": self.session_id,\n \"user_id\": self.user_id,\n \"messages\": messages_to_dict(self.messages),\n }\n )\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from this memory and cosmos.\"\"\"\n self.messages = []\n if self._container:\n self._container.delete_item(\n item=self.session_id, partition_key=self.user_id\n )\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cosmos_db.html"}
+{"id": "d8e18f006e0a-0", "text": "Source code for langchain.memory.chat_message_histories.dynamodb\nimport logging\nfrom typing import List, Optional\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n _message_to_dict,\n messages_from_dict,\n messages_to_dict,\n)\nlogger = logging.getLogger(__name__)\n[docs]class DynamoDBChatMessageHistory(BaseChatMessageHistory):\n \"\"\"Chat message history that stores history in AWS DynamoDB.\n This class expects that a DynamoDB table with name `table_name`\n and a partition Key of `SessionId` is present.\n Args:\n table_name: name of the DynamoDB table\n session_id: arbitrary key that is used to store the messages\n of a single chat session.\n endpoint_url: URL of the AWS endpoint to connect to. This argument\n is optional and useful for test purposes, like using Localstack.\n If you plan to use AWS cloud service, you normally don't have to\n worry about setting the endpoint_url.\n \"\"\"\n def __init__(\n self, table_name: str, session_id: str, endpoint_url: Optional[str] = None\n ):\n import boto3\n if endpoint_url:\n client = boto3.resource(\"dynamodb\", endpoint_url=endpoint_url)\n else:\n client = boto3.resource(\"dynamodb\")\n self.table = client.Table(table_name)\n self.session_id = session_id\n @property\n def messages(self) -> List[BaseMessage]: # type: ignore\n \"\"\"Retrieve the messages from DynamoDB\"\"\"\n from botocore.exceptions import ClientError\n try:\n response = self.table.get_item(Key={\"SessionId\": self.session_id})\n except ClientError as error:", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/dynamodb.html"}
+{"id": "d8e18f006e0a-1", "text": "except ClientError as error:\n if error.response[\"Error\"][\"Code\"] == \"ResourceNotFoundException\":\n logger.warning(\"No record found with session id: %s\", self.session_id)\n else:\n logger.error(error)\n if response and \"Item\" in response:\n items = response[\"Item\"][\"History\"]\n else:\n items = []\n messages = messages_from_dict(items)\n return messages\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Append the message to the record in DynamoDB\"\"\"\n from botocore.exceptions import ClientError\n messages = messages_to_dict(self.messages)\n _message = _message_to_dict(message)\n messages.append(_message)\n try:\n self.table.put_item(\n Item={\"SessionId\": self.session_id, \"History\": messages}\n )\n except ClientError as err:\n logger.error(err)\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from DynamoDB\"\"\"\n from botocore.exceptions import ClientError\n try:\n self.table.delete_item(Key={\"SessionId\": self.session_id})\n except ClientError as err:\n logger.error(err)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/dynamodb.html"}
+{"id": "1788e5617a56-0", "text": "Source code for langchain.memory.chat_message_histories.postgres\nimport json\nimport logging\nfrom typing import List\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n _message_to_dict,\n messages_from_dict,\n)\nlogger = logging.getLogger(__name__)\nDEFAULT_CONNECTION_STRING = \"postgresql://postgres:mypassword@localhost/chat_history\"\n[docs]class PostgresChatMessageHistory(BaseChatMessageHistory):\n def __init__(\n self,\n session_id: str,\n connection_string: str = DEFAULT_CONNECTION_STRING,\n table_name: str = \"message_store\",\n ):\n import psycopg\n from psycopg.rows import dict_row\n try:\n self.connection = psycopg.connect(connection_string)\n self.cursor = self.connection.cursor(row_factory=dict_row)\n except psycopg.OperationalError as error:\n logger.error(error)\n self.session_id = session_id\n self.table_name = table_name\n self._create_table_if_not_exists()\n def _create_table_if_not_exists(self) -> None:\n create_table_query = f\"\"\"CREATE TABLE IF NOT EXISTS {self.table_name} (\n id SERIAL PRIMARY KEY,\n session_id TEXT NOT NULL,\n message JSONB NOT NULL\n );\"\"\"\n self.cursor.execute(create_table_query)\n self.connection.commit()\n @property\n def messages(self) -> List[BaseMessage]: # type: ignore\n \"\"\"Retrieve the messages from PostgreSQL\"\"\"\n query = f\"SELECT message FROM {self.table_name} WHERE session_id = %s;\"\n self.cursor.execute(query, (self.session_id,))\n items = [record[\"message\"] for record in self.cursor.fetchall()]\n messages = messages_from_dict(items)\n return messages", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/postgres.html"}
+{"id": "1788e5617a56-1", "text": "messages = messages_from_dict(items)\n return messages\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Append the message to the record in PostgreSQL\"\"\"\n from psycopg import sql\n query = sql.SQL(\"INSERT INTO {} (session_id, message) VALUES (%s, %s);\").format(\n sql.Identifier(self.table_name)\n )\n self.cursor.execute(\n query, (self.session_id, json.dumps(_message_to_dict(message)))\n )\n self.connection.commit()\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from PostgreSQL\"\"\"\n query = f\"DELETE FROM {self.table_name} WHERE session_id = %s;\"\n self.cursor.execute(query, (self.session_id,))\n self.connection.commit()\n def __del__(self) -> None:\n if self.cursor:\n self.cursor.close()\n if self.connection:\n self.connection.close()\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/postgres.html"}
+{"id": "29d93a7a718e-0", "text": ".rst\n.pdf\nIndexes\nIndexes#\nIndexes refer to ways to structure documents so that LLMs can best interact with them.\nLangChain has a number of modules that help you load, structure, store, and retrieve documents.\nDocstore\nText Splitter\nDocument Loaders\nVector Stores\nRetrievers\nDocument Compressors\nDocument Transformers\nprevious\nEmbeddings\nnext\nDocstore\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/indexes.html"}
+{"id": "69af32dbe667-0", "text": ".rst\n.pdf\nModels\nModels#\nLangChain provides interfaces and integrations for a number of different types of models.\nLLMs\nChat Models\nEmbeddings\nprevious\nAPI References\nnext\nChat Models\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/models.html"}
+{"id": "45336e00c0a5-0", "text": ".rst\n.pdf\nPrompts\nPrompts#\nThe reference guides here all relate to objects for working with Prompts.\nPromptTemplates\nExample Selector\nOutput Parsers\nprevious\nHow to serialize prompts\nnext\nPromptTemplates\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/prompts.html"}
+{"id": "7d66d15c4122-0", "text": ".rst\n.pdf\nAgents\nAgents#\nReference guide for Agents and associated abstractions.\nAgents\nTools\nAgent Toolkits\nprevious\nMemory\nnext\nAgents\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/agents.html"}
+{"id": "2cfcb2537b9f-0", "text": ".md\n.pdf\nInstallation\n Contents \nOfficial Releases\nInstalling from source\nInstallation#\nOfficial Releases#\nLangChain is available on PyPi, so to it is easily installable with:\npip install langchain\nThat will install the bare minimum requirements of LangChain.\nA lot of the value of LangChain comes when integrating it with various model providers, datastores, etc.\nBy default, the dependencies needed to do that are NOT installed.\nHowever, there are two other ways to install LangChain that do bring in those dependencies.\nTo install modules needed for the common LLM providers, run:\npip install langchain[llms]\nTo install all modules needed for all integrations, run:\npip install langchain[all]\nNote that if you are using zsh, you\u2019ll need to quote square brackets when passing them as an argument to a command, for example:\npip install 'langchain[all]'\nInstalling from source#\nIf you want to install from source, you can do so by cloning the repo and running:\npip install -e .\nprevious\nSQL Question Answering Benchmarking: Chinook\nnext\nAPI References\n Contents\n \nOfficial Releases\nInstalling from source\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/installation.html"}
+{"id": "290e16503ce6-0", "text": ".rst\n.pdf\nMemory\nMemory#\nclass langchain.memory.CassandraChatMessageHistory(contact_points: List[str], session_id: str, port: int = 9042, username: str = 'cassandra', password: str = 'cassandra', keyspace_name: str = 'chat_history', table_name: str = 'message_store')[source]#\nChat message history that stores history in Cassandra.\nParameters\ncontact_points \u2013 list of ips to connect to Cassandra cluster\nsession_id \u2013 arbitrary key that is used to store the messages\nof a single chat session.\nport \u2013 port to connect to Cassandra cluster\nusername \u2013 username to connect to Cassandra cluster\npassword \u2013 password to connect to Cassandra cluster\nkeyspace_name \u2013 name of the keyspace to use\ntable_name \u2013 name of the table to use\nadd_message(message: langchain.schema.BaseMessage) \u2192 None[source]#\nAppend the message to the record in Cassandra\nclear() \u2192 None[source]#\nClear session memory from Cassandra\nproperty messages: List[langchain.schema.BaseMessage]#\nRetrieve the messages from Cassandra\npydantic model langchain.memory.ChatMessageHistory[source]#\nfield messages: List[langchain.schema.BaseMessage] = []#\nadd_message(message: langchain.schema.BaseMessage) \u2192 None[source]#\nAdd a self-created message to the store\nclear() \u2192 None[source]#\nRemove all messages from the store\npydantic model langchain.memory.CombinedMemory[source]#\nClass for combining multiple memories\u2019 data together.\nValidators\ncheck_input_key \u00bb memories\ncheck_repeated_memory_variable \u00bb memories\nfield memories: List[langchain.schema.BaseMemory] [Required]#\nFor tracking all the memories that should be accessed.\nclear() \u2192 None[source]#\nClear context from this session for every memory.", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-1", "text": "clear() \u2192 None[source]#\nClear context from this session for every memory.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, str][source]#\nLoad all vars from sub-memories.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]#\nSave context from this session for every memory.\nproperty memory_variables: List[str]#\nAll the memory variables that this instance provides.\npydantic model langchain.memory.ConversationBufferMemory[source]#\nBuffer for storing conversation memory.\nfield ai_prefix: str = 'AI'#\nfield human_prefix: str = 'Human'#\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, Any][source]#\nReturn history buffer.\nproperty buffer: Any#\nString buffer of memory.\npydantic model langchain.memory.ConversationBufferWindowMemory[source]#\nBuffer for storing conversation memory.\nfield ai_prefix: str = 'AI'#\nfield human_prefix: str = 'Human'#\nfield k: int = 5#\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, str][source]#\nReturn history buffer.\nproperty buffer: List[langchain.schema.BaseMessage]#\nString buffer of memory.\npydantic model langchain.memory.ConversationEntityMemory[source]#\nEntity extractor & summarizer to memory.\nfield ai_prefix: str = 'AI'#\nfield chat_history_key: str = 'history'#\nfield entity_cache: List[str] = []#", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-2", "text": "field entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\\n\\nThe conversation history is provided just in case of a coreference (e.g. \"What do you know about him\" where \"him\" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\\n\\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\\nOutput: Langchain\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-3", "text": "line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\\'m working with Person #2.\\nOutput: Langchain, Person #2\\nEND OF EXAMPLE\\n\\nConversation history (for reference only):\\n{history}\\nLast line of conversation (for extraction):\\nHuman: {input}\\n\\nOutput:', template_format='f-string', validate_template=True)#", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-4", "text": "field entity_store: langchain.memory.entity.BaseEntityStore [Optional]#\nfield entity_summarization_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the \"Entity\" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.\\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\\n\\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\\n\\nFull conversation history (for context):\\n{history}\\n\\nEntity to summarize:\\n{entity}\\n\\nExisting summary of {entity}:\\n{summary}\\n\\nLast line of conversation:\\nHuman: {input}\\nUpdated summary:', template_format='f-string', validate_template=True)#\nfield human_prefix: str = 'Human'#\nfield k: int = 3#\nfield llm: langchain.base_language.BaseLanguageModel [Required]#\nclear() \u2192 None[source]#\nClear memory contents.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, Any][source]#\nReturn history buffer.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]#\nSave context from this conversation to buffer.\nproperty buffer: List[langchain.schema.BaseMessage]#\npydantic model langchain.memory.ConversationKGMemory[source]#\nKnowledge graph memory for storing conversation memory.", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-5", "text": "Knowledge graph memory for storing conversation memory.\nIntegrates with external knowledge graph to store and retrieve\ninformation about knowledge triples in the conversation.\nfield ai_prefix: str = 'AI'#", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-6", "text": "field entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\\n\\nThe conversation history is provided just in case of a coreference (e.g. \"What do you know about him\" where \"him\" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\\n\\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\\nOutput: Langchain\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-7", "text": "line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\\'m working with Person #2.\\nOutput: Langchain, Person #2\\nEND OF EXAMPLE\\n\\nConversation history (for reference only):\\n{history}\\nLast line of conversation (for extraction):\\nHuman: {input}\\n\\nOutput:', template_format='f-string', validate_template=True)#", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-8", "text": "field human_prefix: str = 'Human'#\nfield k: int = 2#\nfield kg: langchain.graphs.networkx_graph.NetworkxEntityGraph [Optional]#", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-9", "text": "field knowledge_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template=\"You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: Did you hear aliens landed in Area 51?\\nAI: No, I didn't hear that. What do you know about Area 51?\\nPerson #1: It's a secret military base in Nevada.\\nAI: What do you know about Nevada?\\nLast line of conversation:\\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\\n\\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: Hello.\\nAI: Hi! How are you?\\nPerson #1: I'm good. How are you?\\nAI: I'm good too.\\nLast line of conversation:\\nPerson #1: I'm going to the store.\\n\\nOutput: NONE\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: What do you know about Descartes?\\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-10", "text": "Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\\nLast line of conversation:\\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\\nEND OF EXAMPLE\\n\\nConversation history (for reference only):\\n{history}\\nLast line of conversation (for extraction):\\nHuman: {input}\\n\\nOutput:\", template_format='f-string', validate_template=True)#", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-11", "text": "field llm: langchain.base_language.BaseLanguageModel [Required]#\nfield summary_message_cls: Type[langchain.schema.BaseMessage] = #\nNumber of previous utterances to include in the context.\nclear() \u2192 None[source]#\nClear memory contents.\nget_current_entities(input_string: str) \u2192 List[str][source]#\nget_knowledge_triplets(input_string: str) \u2192 List[langchain.graphs.networkx_graph.KnowledgeTriple][source]#\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, Any][source]#\nReturn history buffer.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]#\nSave context from this conversation to buffer.\npydantic model langchain.memory.ConversationStringBufferMemory[source]#\nBuffer for storing conversation memory.\nfield ai_prefix: str = 'AI'#\nPrefix to use for AI generated responses.\nfield buffer: str = ''#\nfield human_prefix: str = 'Human'#\nfield input_key: Optional[str] = None#\nfield output_key: Optional[str] = None#\nclear() \u2192 None[source]#\nClear memory contents.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, str][source]#\nReturn history buffer.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]#\nSave context from this conversation to buffer.\nproperty memory_variables: List[str]#\nWill always return list of memory variables.\n:meta private:\npydantic model langchain.memory.ConversationSummaryBufferMemory[source]#\nBuffer with summarizer for storing conversation memory.\nfield max_token_limit: int = 2000#\nfield memory_key: str = 'history'#\nfield moving_summary_buffer: str = ''#", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-12", "text": "field memory_key: str = 'history'#\nfield moving_summary_buffer: str = ''#\nclear() \u2192 None[source]#\nClear memory contents.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, Any][source]#\nReturn history buffer.\nprune() \u2192 None[source]#\nPrune buffer if it exceeds max token limit\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]#\nSave context from this conversation to buffer.\nproperty buffer: List[langchain.schema.BaseMessage]#\npydantic model langchain.memory.ConversationSummaryMemory[source]#\nConversation summarizer to memory.\nfield buffer: str = ''#\nclear() \u2192 None[source]#\nClear memory contents.\nclassmethod from_messages(llm: langchain.base_language.BaseLanguageModel, chat_memory: langchain.schema.BaseChatMessageHistory, *, summarize_step: int = 2, **kwargs: Any) \u2192 langchain.memory.summary.ConversationSummaryMemory[source]#\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, Any][source]#\nReturn history buffer.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]#\nSave context from this conversation to buffer.\npydantic model langchain.memory.ConversationTokenBufferMemory[source]#\nBuffer for storing conversation memory.\nfield ai_prefix: str = 'AI'#\nfield human_prefix: str = 'Human'#\nfield llm: langchain.base_language.BaseLanguageModel [Required]#\nfield max_token_limit: int = 2000#\nfield memory_key: str = 'history'#\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, Any][source]#\nReturn history buffer.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-13", "text": "Save context from this conversation to buffer. Pruned.\nproperty buffer: List[langchain.schema.BaseMessage]#\nString buffer of memory.\nclass langchain.memory.CosmosDBChatMessageHistory(cosmos_endpoint: str, cosmos_database: str, cosmos_container: str, session_id: str, user_id: str, credential: Any = None, connection_string: Optional[str] = None, ttl: Optional[int] = None, cosmos_client_kwargs: Optional[dict] = None)[source]#\nChat history backed by Azure CosmosDB.\nadd_message(message: langchain.schema.BaseMessage) \u2192 None[source]#\nAdd a self-created message to the store\nclear() \u2192 None[source]#\nClear session memory from this memory and cosmos.\nload_messages() \u2192 None[source]#\nRetrieve the messages from Cosmos\nprepare_cosmos() \u2192 None[source]#\nPrepare the CosmosDB client.\nUse this function or the context manager to make sure your database is ready.\nupsert_messages() \u2192 None[source]#\nUpdate the cosmosdb item.\nclass langchain.memory.DynamoDBChatMessageHistory(table_name: str, session_id: str, endpoint_url: Optional[str] = None)[source]#\nChat message history that stores history in AWS DynamoDB.\nThis class expects that a DynamoDB table with name table_name\nand a partition Key of SessionId is present.\nParameters\ntable_name \u2013 name of the DynamoDB table\nsession_id \u2013 arbitrary key that is used to store the messages\nof a single chat session.\nendpoint_url \u2013 URL of the AWS endpoint to connect to. This argument\nis optional and useful for test purposes, like using Localstack.\nIf you plan to use AWS cloud service, you normally don\u2019t have to\nworry about setting the endpoint_url.\nadd_message(message: langchain.schema.BaseMessage) \u2192 None[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-14", "text": "add_message(message: langchain.schema.BaseMessage) \u2192 None[source]#\nAppend the message to the record in DynamoDB\nclear() \u2192 None[source]#\nClear session memory from DynamoDB\nproperty messages: List[langchain.schema.BaseMessage]#\nRetrieve the messages from DynamoDB\nclass langchain.memory.FileChatMessageHistory(file_path: str)[source]#\nChat message history that stores history in a local file.\nParameters\nfile_path \u2013 path of the local file to store the messages.\nadd_message(message: langchain.schema.BaseMessage) \u2192 None[source]#\nAppend the message to the record in the local file\nclear() \u2192 None[source]#\nClear session memory from the local file\nproperty messages: List[langchain.schema.BaseMessage]#\nRetrieve the messages from the local file\npydantic model langchain.memory.InMemoryEntityStore[source]#\nBasic in-memory entity store.\nfield store: Dict[str, Optional[str]] = {}#\nclear() \u2192 None[source]#\nDelete all entities from store.\ndelete(key: str) \u2192 None[source]#\nDelete entity value from store.\nexists(key: str) \u2192 bool[source]#\nCheck if entity exists in store.\nget(key: str, default: Optional[str] = None) \u2192 Optional[str][source]#\nGet entity value from store.\nset(key: str, value: Optional[str]) \u2192 None[source]#\nSet entity value in store.\nclass langchain.memory.MomentoChatMessageHistory(session_id: str, cache_client: momento.CacheClient, cache_name: str, *, key_prefix: str = 'message_store:', ttl: Optional[timedelta] = None, ensure_cache_exists: bool = True)[source]#\nChat message history cache that uses Momento as a backend.\nSee https://gomomento.com/", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-15", "text": "See https://gomomento.com/\nadd_message(message: langchain.schema.BaseMessage) \u2192 None[source]#\nStore a message in the cache.\nParameters\nmessage (BaseMessage) \u2013 The message object to store.\nRaises\nSdkException \u2013 Momento service or network error.\nException \u2013 Unexpected response.\nclear() \u2192 None[source]#\nRemove the session\u2019s messages from the cache.\nRaises\nSdkException \u2013 Momento service or network error.\nException \u2013 Unexpected response.\nclassmethod from_client_params(session_id: str, cache_name: str, ttl: timedelta, *, configuration: Optional[momento.config.Configuration] = None, auth_token: Optional[str] = None, **kwargs: Any) \u2192 MomentoChatMessageHistory[source]#\nConstruct cache from CacheClient parameters.\nproperty messages: list[langchain.schema.BaseMessage]#\nRetrieve the messages from Momento.\nRaises\nSdkException \u2013 Momento service or network error\nException \u2013 Unexpected response\nReturns\nList of cached messages\nReturn type\nlist[BaseMessage]\nclass langchain.memory.MongoDBChatMessageHistory(connection_string: str, session_id: str, database_name: str = 'chat_history', collection_name: str = 'message_store')[source]#\nChat message history that stores history in MongoDB.\nParameters\nconnection_string \u2013 connection string to connect to MongoDB\nsession_id \u2013 arbitrary key that is used to store the messages\nof a single chat session.\ndatabase_name \u2013 name of the database to use\ncollection_name \u2013 name of the collection to use\nadd_message(message: langchain.schema.BaseMessage) \u2192 None[source]#\nAppend the message to the record in MongoDB\nclear() \u2192 None[source]#\nClear session memory from MongoDB\nproperty messages: List[langchain.schema.BaseMessage]#\nRetrieve the messages from MongoDB", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-16", "text": "property messages: List[langchain.schema.BaseMessage]#\nRetrieve the messages from MongoDB\nclass langchain.memory.PostgresChatMessageHistory(session_id: str, connection_string: str = 'postgresql://postgres:mypassword@localhost/chat_history', table_name: str = 'message_store')[source]#\nadd_message(message: langchain.schema.BaseMessage) \u2192 None[source]#\nAppend the message to the record in PostgreSQL\nclear() \u2192 None[source]#\nClear session memory from PostgreSQL\nproperty messages: List[langchain.schema.BaseMessage]#\nRetrieve the messages from PostgreSQL\npydantic model langchain.memory.ReadOnlySharedMemory[source]#\nA memory wrapper that is read-only and cannot be changed.\nfield memory: langchain.schema.BaseMemory [Required]#\nclear() \u2192 None[source]#\nNothing to clear, got a memory like a vault.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, str][source]#\nLoad memory variables from memory.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]#\nNothing should be saved or changed\nproperty memory_variables: List[str]#\nReturn memory variables.\nclass langchain.memory.RedisChatMessageHistory(session_id: str, url: str = 'redis://localhost:6379/0', key_prefix: str = 'message_store:', ttl: Optional[int] = None)[source]#\nadd_message(message: langchain.schema.BaseMessage) \u2192 None[source]#\nAppend the message to the record in Redis\nclear() \u2192 None[source]#\nClear session memory from Redis\nproperty key: str#\nConstruct the record key to use\nproperty messages: List[langchain.schema.BaseMessage]#\nRetrieve the messages from Redis\npydantic model langchain.memory.RedisEntityStore[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-17", "text": "Retrieve the messages from Redis\npydantic model langchain.memory.RedisEntityStore[source]#\nRedis-backed Entity store. Entities get a TTL of 1 day by default, and\nthat TTL is extended by 3 days every time the entity is read back.\nfield key_prefix: str = 'memory_store'#\nfield recall_ttl: Optional[int] = 259200#\nfield redis_client: Any = None#\nfield session_id: str = 'default'#\nfield ttl: Optional[int] = 86400#\nclear() \u2192 None[source]#\nDelete all entities from store.\ndelete(key: str) \u2192 None[source]#\nDelete entity value from store.\nexists(key: str) \u2192 bool[source]#\nCheck if entity exists in store.\nget(key: str, default: Optional[str] = None) \u2192 Optional[str][source]#\nGet entity value from store.\nset(key: str, value: Optional[str]) \u2192 None[source]#\nSet entity value in store.\nproperty full_key_prefix: str#\npydantic model langchain.memory.SQLiteEntityStore[source]#\nSQLite-backed Entity store\nfield session_id: str = 'default'#\nfield table_name: str = 'memory_store'#\nclear() \u2192 None[source]#\nDelete all entities from store.\ndelete(key: str) \u2192 None[source]#\nDelete entity value from store.\nexists(key: str) \u2192 bool[source]#\nCheck if entity exists in store.\nget(key: str, default: Optional[str] = None) \u2192 Optional[str][source]#\nGet entity value from store.\nset(key: str, value: Optional[str]) \u2192 None[source]#\nSet entity value in store.\nproperty full_table_name: str#\npydantic model langchain.memory.SimpleMemory[source]#\nSimple memory for storing context or other bits of information that shouldn\u2019t", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-18", "text": "Simple memory for storing context or other bits of information that shouldn\u2019t\never change between prompts.\nfield memories: Dict[str, Any] = {}#\nclear() \u2192 None[source]#\nNothing to clear, got a memory like a vault.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, str][source]#\nReturn key-value pairs given the text input to the chain.\nIf None, return all memories\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]#\nNothing should be saved or changed, my memory is set in stone.\nproperty memory_variables: List[str]#\nInput keys this memory class will load dynamically.\npydantic model langchain.memory.VectorStoreRetrieverMemory[source]#\nClass for a VectorStore-backed memory object.\nfield input_key: Optional[str] = None#\nKey name to index the inputs to load_memory_variables.\nfield memory_key: str = 'history'#\nKey name to locate the memories in the result of load_memory_variables.\nfield retriever: langchain.vectorstores.base.VectorStoreRetriever [Required]#\nVectorStoreRetriever object to connect to.\nfield return_docs: bool = False#\nWhether or not to return the result of querying the database directly.\nclear() \u2192 None[source]#\nNothing to clear.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, Union[List[langchain.schema.Document], str]][source]#\nReturn history buffer.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]#\nSave context from this conversation to buffer.\nproperty memory_variables: List[str]#\nThe list of keys emitted from the load_memory_variables method.\nprevious\nDocument Transformers\nnext\nAgents\nBy Harrison Chase", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "290e16503ce6-19", "text": "previous\nDocument Transformers\nnext\nAgents\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/memory.html"}
+{"id": "15bd39437f78-0", "text": ".rst\n.pdf\nUtilities\nUtilities#\nGeneral utilities.\npydantic model langchain.utilities.ApifyWrapper[source]#\nWrapper around Apify.\nTo use, you should have the apify-client python package installed,\nand the environment variable APIFY_API_TOKEN set with your API key, or pass\napify_api_token as a named parameter to the constructor.\nfield apify_client: Any = None#\nfield apify_client_async: Any = None#\nasync acall_actor(actor_id: str, run_input: Dict, dataset_mapping_function: Callable[[Dict], langchain.schema.Document], *, build: Optional[str] = None, memory_mbytes: Optional[int] = None, timeout_secs: Optional[int] = None) \u2192 langchain.document_loaders.apify_dataset.ApifyDatasetLoader[source]#\nRun an Actor on the Apify platform and wait for results to be ready.\nParameters\nactor_id (str) \u2013 The ID or name of the Actor on the Apify platform.\nrun_input (Dict) \u2013 The input object of the Actor that you\u2019re trying to run.\ndataset_mapping_function (Callable) \u2013 A function that takes a single\ndictionary (an Apify dataset item) and converts it to\nan instance of the Document class.\nbuild (str, optional) \u2013 Optionally specifies the actor build to run.\nIt can be either a build tag or build number.\nmemory_mbytes (int, optional) \u2013 Optional memory limit for the run,\nin megabytes.\ntimeout_secs (int, optional) \u2013 Optional timeout for the run, in seconds.\nReturns\nA loader that will fetch the records from theActor run\u2019s default dataset.\nReturn type\nApifyDatasetLoader", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "15bd39437f78-1", "text": "Return type\nApifyDatasetLoader\ncall_actor(actor_id: str, run_input: Dict, dataset_mapping_function: Callable[[Dict], langchain.schema.Document], *, build: Optional[str] = None, memory_mbytes: Optional[int] = None, timeout_secs: Optional[int] = None) \u2192 langchain.document_loaders.apify_dataset.ApifyDatasetLoader[source]#\nRun an Actor on the Apify platform and wait for results to be ready.\nParameters\nactor_id (str) \u2013 The ID or name of the Actor on the Apify platform.\nrun_input (Dict) \u2013 The input object of the Actor that you\u2019re trying to run.\ndataset_mapping_function (Callable) \u2013 A function that takes a single\ndictionary (an Apify dataset item) and converts it to an\ninstance of the Document class.\nbuild (str, optional) \u2013 Optionally specifies the actor build to run.\nIt can be either a build tag or build number.\nmemory_mbytes (int, optional) \u2013 Optional memory limit for the run,\nin megabytes.\ntimeout_secs (int, optional) \u2013 Optional timeout for the run, in seconds.\nReturns\nA loader that will fetch the records from theActor run\u2019s default dataset.\nReturn type\nApifyDatasetLoader\npydantic model langchain.utilities.ArxivAPIWrapper[source]#\nWrapper around ArxivAPI.\nTo use, you should have the arxiv python package installed.\nhttps://lukasschwab.me/arxiv.py/index.html\nThis wrapper will use the Arxiv API to conduct searches and\nfetch document summaries. By default, it will return the document summaries\nof the top-k results.\nIt limits the Document content by doc_content_chars_max.\nSet doc_content_chars_max=None if you don\u2019t want to limit the content size.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "15bd39437f78-2", "text": "Set doc_content_chars_max=None if you don\u2019t want to limit the content size.\nParameters\ntop_k_results \u2013 number of the top-scored document used for the arxiv tool\nARXIV_MAX_QUERY_LENGTH \u2013 the cut limit on the query used for the arxiv tool.\nload_max_docs \u2013 a limit to the number of loaded documents\nload_all_available_meta \u2013 \nif True: the metadata of the loaded Documents gets all available meta info(see https://lukasschwab.me/arxiv.py/index.html#Result),\nif False: the metadata gets only the most informative fields.\nfield arxiv_exceptions: Any = None#\nfield doc_content_chars_max: int = 4000#\nfield load_all_available_meta: bool = False#\nfield load_max_docs: int = 100#\nfield top_k_results: int = 3#\nload(query: str) \u2192 List[langchain.schema.Document][source]#\nRun Arxiv search and get the article texts plus the article meta information.\nSee https://lukasschwab.me/arxiv.py/index.html#Search\nReturns: a list of documents with the document.page_content in text format\nrun(query: str) \u2192 str[source]#\nRun Arxiv search and get the article meta information.\nSee https://lukasschwab.me/arxiv.py/index.html#Search\nSee https://lukasschwab.me/arxiv.py/index.html#Result\nIt uses only the most informative fields of article meta information.\nclass langchain.utilities.BashProcess(strip_newlines: bool = False, return_err_output: bool = False, persistent: bool = False)[source]#\nExecutes bash commands and returns the output.\nprocess_output(output: str, command: str) \u2192 str[source]#\nrun(commands: Union[str, List[str]]) \u2192 str[source]#\nRun commands and return final output.", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "15bd39437f78-3", "text": "Run commands and return final output.\npydantic model langchain.utilities.BingSearchAPIWrapper[source]#\nWrapper for Bing Search API.\nIn order to set this up, follow instructions at:\nhttps://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e\nfield bing_search_url: str [Required]#\nfield bing_subscription_key: str [Required]#\nfield k: int = 10#\nresults(query: str, num_results: int) \u2192 List[Dict][source]#\nRun query through BingSearch and return metadata.\nParameters\nquery \u2013 The query to search for.\nnum_results \u2013 The number of results to return.\nReturns\nsnippet - The description of the result.\ntitle - The title of the result.\nlink - The link to the result.\nReturn type\nA list of dictionaries with the following keys\nrun(query: str) \u2192 str[source]#\nRun query through BingSearch and parse result.\npydantic model langchain.utilities.DuckDuckGoSearchAPIWrapper[source]#\nWrapper for DuckDuckGo Search API.\nFree and does not require any setup\nfield k: int = 10#\nfield max_results: int = 5#\nfield region: Optional[str] = 'wt-wt'#\nfield safesearch: str = 'moderate'#\nfield time: Optional[str] = 'y'#\nget_snippets(query: str) \u2192 List[str][source]#\nRun query through DuckDuckGo and return concatenated results.\nresults(query: str, num_results: int) \u2192 List[Dict[str, str]][source]#\nRun query through DuckDuckGo and return metadata.\nParameters\nquery \u2013 The query to search for.\nnum_results \u2013 The number of results to return.\nReturns", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "15bd39437f78-4", "text": "num_results \u2013 The number of results to return.\nReturns\nsnippet - The description of the result.\ntitle - The title of the result.\nlink - The link to the result.\nReturn type\nA list of dictionaries with the following keys\nrun(query: str) \u2192 str[source]#\npydantic model langchain.utilities.GooglePlacesAPIWrapper[source]#\nWrapper around Google Places API.\nTo use, you should have the googlemaps python package installed,an API key for the google maps platform,\nand the enviroment variable \u2018\u2019GPLACES_API_KEY\u2019\u2019\nset with your API key , or pass \u2018gplaces_api_key\u2019\nas a named parameter to the constructor.\nBy default, this will return the all the results on the input query.You can use the top_k_results argument to limit the number of results.\nExample\nfrom langchain import GooglePlacesAPIWrapper\ngplaceapi = GooglePlacesAPIWrapper()\nfield gplaces_api_key: Optional[str] = None#\nfield top_k_results: Optional[int] = None#\nfetch_place_details(place_id: str) \u2192 Optional[str][source]#\nformat_place_details(place_details: Dict[str, Any]) \u2192 Optional[str][source]#\nrun(query: str) \u2192 str[source]#\nRun Places search and get k number of places that exists that match.\npydantic model langchain.utilities.GoogleSearchAPIWrapper[source]#\nWrapper for Google Search API.\nAdapted from: Instructions adapted from https://stackoverflow.com/questions/\n37083058/\nprogrammatically-searching-google-in-python-using-custom-search\nTODO: DOCS for using it\n1. Install google-api-python-client\n- If you don\u2019t already have a Google account, sign up.\n- If you have never created a Google APIs Console project,\nread the Managing Projects page and create a project in the Google API Console.", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "15bd39437f78-5", "text": "read the Managing Projects page and create a project in the Google API Console.\n- Install the library using pip install google-api-python-client\nThe current version of the library is 2.70.0 at this time\n2. To create an API key:\n- Navigate to the APIs & Services\u2192Credentials panel in Cloud Console.\n- Select Create credentials, then select API key from the drop-down menu.\n- The API key created dialog box displays your newly created key.\n- You now have an API_KEY\n3. Setup Custom Search Engine so you can search the entire web\n- Create a custom search engine in this link.\n- In Sites to search, add any valid URL (i.e. www.stackoverflow.com).\n- That\u2019s all you have to fill up, the rest doesn\u2019t matter.\nIn the left-side menu, click Edit search engine \u2192 {your search engine name}\n\u2192 Setup Set Search the entire web to ON. Remove the URL you added from\nthe list of Sites to search.\n- Under Search engine ID you\u2019ll find the search-engine-ID.\n4. Enable the Custom Search API\n- Navigate to the APIs & Services\u2192Dashboard panel in Cloud Console.\n- Click Enable APIs and Services.\n- Search for Custom Search API and click on it.\n- Click Enable.\nURL for it: https://console.cloud.google.com/apis/library/customsearch.googleapis\n.com\nfield google_api_key: Optional[str] = None#\nfield google_cse_id: Optional[str] = None#\nfield k: int = 10#\nfield siterestrict: bool = False#\nresults(query: str, num_results: int) \u2192 List[Dict][source]#\nRun query through GoogleSearch and return metadata.\nParameters\nquery \u2013 The query to search for.\nnum_results \u2013 The number of results to return.\nReturns\nsnippet - The description of the result.", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "15bd39437f78-6", "text": "Returns\nsnippet - The description of the result.\ntitle - The title of the result.\nlink - The link to the result.\nReturn type\nA list of dictionaries with the following keys\nrun(query: str) \u2192 str[source]#\nRun query through GoogleSearch and parse result.\npydantic model langchain.utilities.GoogleSerperAPIWrapper[source]#\nWrapper around the Serper.dev Google Search API.\nYou can create a free API key at https://serper.dev.\nTo use, you should have the environment variable SERPER_API_KEY\nset with your API key, or pass serper_api_key as a named parameter\nto the constructor.\nExample\nfrom langchain import GoogleSerperAPIWrapper\ngoogle_serper = GoogleSerperAPIWrapper()\nfield aiosession: Optional[aiohttp.client.ClientSession] = None#\nfield gl: str = 'us'#\nfield hl: str = 'en'#\nfield k: int = 10#\nfield serper_api_key: Optional[str] = None#\nfield tbs: Optional[str] = None#\nfield type: Literal['news', 'search', 'places', 'images'] = 'search'#\nasync aresults(query: str, **kwargs: Any) \u2192 Dict[source]#\nRun query through GoogleSearch.\nasync arun(query: str, **kwargs: Any) \u2192 str[source]#\nRun query through GoogleSearch and parse result async.\nresults(query: str, **kwargs: Any) \u2192 Dict[source]#\nRun query through GoogleSearch.\nrun(query: str, **kwargs: Any) \u2192 str[source]#\nRun query through GoogleSearch and parse result.\npydantic model langchain.utilities.GraphQLAPIWrapper[source]#\nWrapper around GraphQL API.\nTo use, you should have the gql python package installed.", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "15bd39437f78-7", "text": "Wrapper around GraphQL API.\nTo use, you should have the gql python package installed.\nThis wrapper will use the GraphQL API to conduct queries.\nfield custom_headers: Optional[Dict[str, str]] = None#\nfield graphql_endpoint: str [Required]#\nrun(query: str) \u2192 str[source]#\nRun a GraphQL query and get the results.\npydantic model langchain.utilities.LambdaWrapper[source]#\nWrapper for AWS Lambda SDK.\nDocs for using:\npip install boto3\nCreate a lambda function using the AWS Console or CLI\nRun aws configure and enter your AWS credentials\nfield awslambda_tool_description: Optional[str] = None#\nfield awslambda_tool_name: Optional[str] = None#\nfield function_name: Optional[str] = None#\nrun(query: str) \u2192 str[source]#\nInvoke Lambda function and parse result.\npydantic model langchain.utilities.MetaphorSearchAPIWrapper[source]#\nWrapper for Metaphor Search API.\nfield k: int = 10#\nfield metaphor_api_key: str [Required]#\nresults(query: str, num_results: int) \u2192 List[Dict][source]#\nRun query through Metaphor Search and return metadata.\nParameters\nquery \u2013 The query to search for.\nnum_results \u2013 The number of results to return.\nReturns\ntitle - The title of the\nurl - The url\nauthor - Author of the content, if applicable. Otherwise, None.\ndate_created - Estimated date created,\nin YYYY-MM-DD format. Otherwise, None.\nReturn type\nA list of dictionaries with the following keys\nasync results_async(query: str, num_results: int) \u2192 List[Dict][source]#\nGet results from the Metaphor Search API asynchronously.\npydantic model langchain.utilities.OpenWeatherMapAPIWrapper[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "15bd39437f78-8", "text": "pydantic model langchain.utilities.OpenWeatherMapAPIWrapper[source]#\nWrapper for OpenWeatherMap API using PyOWM.\nDocs for using:\nGo to OpenWeatherMap and sign up for an API key\nSave your API KEY into OPENWEATHERMAP_API_KEY env variable\npip install pyowm\nfield openweathermap_api_key: Optional[str] = None#\nfield owm: Any = None#\nrun(location: str) \u2192 str[source]#\nGet the current weather information for a specified location.\npydantic model langchain.utilities.PowerBIDataset[source]#\nCreate PowerBI engine from dataset ID and credential or token.\nUse either the credential or a supplied token to authenticate.\nIf both are supplied the credential is used to generate a token.\nThe impersonated_user_name is the UPN of a user to be impersonated.\nIf the model is not RLS enabled, this will be ignored.\nValidators\nfix_table_names \u00bb table_names\ntoken_or_credential_present \u00bb all fields\nfield aiosession: Optional[aiohttp.ClientSession] = None#\nfield credential: Optional[TokenCredential] = None#\nfield dataset_id: str [Required]#\nfield group_id: Optional[str] = None#\nfield impersonated_user_name: Optional[str] = None#\nfield sample_rows_in_table_info: int = 1#\nConstraints\nexclusiveMinimum = 0\nmaximum = 10\nfield schemas: Dict[str, str] [Optional]#\nfield table_names: List[str] [Required]#\nfield token: Optional[str] = None#\nasync aget_table_info(table_names: Optional[Union[List[str], str]] = None) \u2192 str[source]#\nGet information about specified tables.\nasync arun(command: str) \u2192 Any[source]#\nExecute a DAX command and return the result asynchronously.", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "15bd39437f78-9", "text": "Execute a DAX command and return the result asynchronously.\nget_schemas() \u2192 str[source]#\nGet the available schema\u2019s.\nget_table_info(table_names: Optional[Union[List[str], str]] = None) \u2192 str[source]#\nGet information about specified tables.\nget_table_names() \u2192 Iterable[str][source]#\nGet names of tables available.\nrun(command: str) \u2192 Any[source]#\nExecute a DAX command and return a json representing the results.\nproperty headers: Dict[str, str]#\nGet the token.\nproperty request_url: str#\nGet the request url.\nproperty table_info: str#\nInformation about all tables in the database.\npydantic model langchain.utilities.PubMedAPIWrapper[source]#\nWrapper around PubMed API.\nThis wrapper will use the PubMed API to conduct searches and fetch\ndocument summaries. By default, it will return the document summaries\nof the top-k results of an input search.\nParameters\ntop_k_results \u2013 number of the top-scored document used for the PubMed tool\nload_max_docs \u2013 a limit to the number of loaded documents\nload_all_available_meta \u2013 \nif True: the metadata of the loaded Documents gets all available meta info(see https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch)\nif False: the metadata gets only the most informative fields.\nfield doc_content_chars_max: int = 2000#\nfield email: str = 'your_email@example.com'#\nfield load_all_available_meta: bool = False#\nfield load_max_docs: int = 25#\nfield top_k_results: int = 3#\nload(query: str) \u2192 List[dict][source]#\nSearch PubMed for documents matching the query.\nReturn a list of dictionaries containing the document metadata.", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "15bd39437f78-10", "text": "Search PubMed for documents matching the query.\nReturn a list of dictionaries containing the document metadata.\nload_docs(query: str) \u2192 List[langchain.schema.Document][source]#\nretrieve_article(uid: str, webenv: str) \u2192 dict[source]#\nrun(query: str) \u2192 str[source]#\nRun PubMed search and get the article meta information.\nSee https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch\nIt uses only the most informative fields of article meta information.\npydantic model langchain.utilities.PythonREPL[source]#\nSimulates a standalone Python REPL.\nfield globals: Optional[Dict] [Optional] (alias '_globals')#\nfield locals: Optional[Dict] [Optional] (alias '_locals')#\nrun(command: str) \u2192 str[source]#\nRun command with own globals/locals and returns anything printed.\npydantic model langchain.utilities.SearxSearchWrapper[source]#\nWrapper for Searx API.\nTo use you need to provide the searx host by passing the named parameter\nsearx_host or exporting the environment variable SEARX_HOST.\nIn some situations you might want to disable SSL verification, for example\nif you are running searx locally. You can do this by passing the named parameter\nunsecure. You can also pass the host url scheme as http to disable SSL.\nExample\nfrom langchain.utilities import SearxSearchWrapper\nsearx = SearxSearchWrapper(searx_host=\"http://localhost:8888\")\nExample with SSL disabled:from langchain.utilities import SearxSearchWrapper\n# note the unsecure parameter is not needed if you pass the url scheme as\n# http\nsearx = SearxSearchWrapper(searx_host=\"http://localhost:8888\",\n unsecure=True)\nValidators", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "15bd39437f78-11", "text": "unsecure=True)\nValidators\ndisable_ssl_warnings \u00bb unsecure\nvalidate_params \u00bb all fields\nfield aiosession: Optional[Any] = None#\nfield categories: Optional[List[str]] = []#\nfield engines: Optional[List[str]] = []#\nfield headers: Optional[dict] = None#\nfield k: int = 10#\nfield params: dict [Optional]#\nfield query_suffix: Optional[str] = ''#\nfield searx_host: str = ''#\nfield unsecure: bool = False#\nasync aresults(query: str, num_results: int, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) \u2192 List[Dict][source]#\nAsynchronously query with json results.\nUses aiohttp. See results for more info.\nasync arun(query: str, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) \u2192 str[source]#\nAsynchronously version of run.\nresults(query: str, num_results: int, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) \u2192 List[Dict][source]#\nRun query through Searx API and returns the results with metadata.\nParameters\nquery \u2013 The query to search for.\nquery_suffix \u2013 Extra suffix appended to the query.\nnum_results \u2013 Limit the number of results to return.\nengines \u2013 List of engines to use for the query.\ncategories \u2013 List of categories to use for the query.\n**kwargs \u2013 extra parameters to pass to the searx API.\nReturns\n{snippet: The description of the result.\ntitle: The title of the result.\nlink: The link to the result.", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "15bd39437f78-12", "text": "title: The title of the result.\nlink: The link to the result.\nengines: The engines used for the result.\ncategory: Searx category of the result.\n}\nReturn type\nDict with the following keys\nrun(query: str, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) \u2192 str[source]#\nRun query through Searx API and parse results.\nYou can pass any other params to the searx query API.\nParameters\nquery \u2013 The query to search for.\nquery_suffix \u2013 Extra suffix appended to the query.\nengines \u2013 List of engines to use for the query.\ncategories \u2013 List of categories to use for the query.\n**kwargs \u2013 extra parameters to pass to the searx API.\nReturns\nThe result of the query.\nReturn type\nstr\nRaises\nValueError \u2013 If an error occured with the query.\nExample\nThis will make a query to the qwant engine:\nfrom langchain.utilities import SearxSearchWrapper\nsearx = SearxSearchWrapper(searx_host=\"http://my.searx.host\")\nsearx.run(\"what is the weather in France ?\", engine=\"qwant\")\n# the same result can be achieved using the `!` syntax of searx\n# to select the engine using `query_suffix`\nsearx.run(\"what is the weather in France ?\", query_suffix=\"!qwant\")\npydantic model langchain.utilities.SerpAPIWrapper[source]#\nWrapper around SerpAPI.\nTo use, you should have the google-search-results python package installed,\nand the environment variable SERPAPI_API_KEY set with your API key, or pass\nserpapi_api_key as a named parameter to the constructor.\nExample", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "15bd39437f78-13", "text": "serpapi_api_key as a named parameter to the constructor.\nExample\nfrom langchain import SerpAPIWrapper\nserpapi = SerpAPIWrapper()\nfield aiosession: Optional[aiohttp.client.ClientSession] = None#\nfield params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}#\nfield serpapi_api_key: Optional[str] = None#\nasync aresults(query: str) \u2192 dict[source]#\nUse aiohttp to run query through SerpAPI and return the results async.\nasync arun(query: str, **kwargs: Any) \u2192 str[source]#\nRun query through SerpAPI and parse result async.\nget_params(query: str) \u2192 Dict[str, str][source]#\nGet parameters for SerpAPI.\nresults(query: str) \u2192 dict[source]#\nRun query through SerpAPI and return the raw result.\nrun(query: str, **kwargs: Any) \u2192 str[source]#\nRun query through SerpAPI and parse result.\nclass langchain.utilities.SparkSQL(spark_session: Optional[SparkSession] = None, catalog: Optional[str] = None, schema: Optional[str] = None, ignore_tables: Optional[List[str]] = None, include_tables: Optional[List[str]] = None, sample_rows_in_table_info: int = 3)[source]#\nclassmethod from_uri(database_uri: str, engine_args: Optional[dict] = None, **kwargs: Any) \u2192 langchain.utilities.spark_sql.SparkSQL[source]#\nCreating a remote Spark Session via Spark connect.\nFor example: SparkSQL.from_uri(\u201csc://localhost:15002\u201d)\nget_table_info(table_names: Optional[List[str]] = None) \u2192 str[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "15bd39437f78-14", "text": "get_table_info(table_names: Optional[List[str]] = None) \u2192 str[source]#\nget_table_info_no_throw(table_names: Optional[List[str]] = None) \u2192 str[source]#\nGet information about specified tables.\nFollows best practices as specified in: Rajkumar et al, 2022\n(https://arxiv.org/abs/2204.00498)\nIf sample_rows_in_table_info, the specified number of sample rows will be\nappended to each table description. This can increase performance as\ndemonstrated in the paper.\nget_usable_table_names() \u2192 Iterable[str][source]#\nGet names of tables available.\nrun(command: str, fetch: str = 'all') \u2192 str[source]#\nrun_no_throw(command: str, fetch: str = 'all') \u2192 str[source]#\nExecute a SQL command and return a string representing the results.\nIf the statement returns rows, a string of the results is returned.\nIf the statement returns no rows, an empty string is returned.\nIf the statement throws an error, the error message is returned.\npydantic model langchain.utilities.TextRequestsWrapper[source]#\nLightweight wrapper around requests library.\nThe main purpose of this wrapper is to always return a text output.\nfield aiosession: Optional[aiohttp.client.ClientSession] = None#\nfield headers: Optional[Dict[str, str]] = None#\nasync adelete(url: str, **kwargs: Any) \u2192 str[source]#\nDELETE the URL and return the text asynchronously.\nasync aget(url: str, **kwargs: Any) \u2192 str[source]#\nGET the URL and return the text asynchronously.\nasync apatch(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 str[source]#\nPATCH the URL and return the text asynchronously.", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "15bd39437f78-15", "text": "PATCH the URL and return the text asynchronously.\nasync apost(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 str[source]#\nPOST to the URL and return the text asynchronously.\nasync aput(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 str[source]#\nPUT the URL and return the text asynchronously.\ndelete(url: str, **kwargs: Any) \u2192 str[source]#\nDELETE the URL and return the text.\nget(url: str, **kwargs: Any) \u2192 str[source]#\nGET the URL and return the text.\npatch(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 str[source]#\nPATCH the URL and return the text.\npost(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 str[source]#\nPOST to the URL and return the text.\nput(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 str[source]#\nPUT the URL and return the text.\nproperty requests: langchain.requests.Requests#\npydantic model langchain.utilities.TwilioAPIWrapper[source]#\nSms Client using Twilio.\nTo use, you should have the twilio python package installed,\nand the environment variables TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN, and\nTWILIO_FROM_NUMBER, or pass account_sid, auth_token, and from_number as\nnamed parameters to the constructor.\nExample\nfrom langchain.utilities.twilio import TwilioAPIWrapper\ntwilio = TwilioAPIWrapper(\n account_sid=\"ACxxx\",\n auth_token=\"xxx\",\n from_number=\"+10123456789\"\n)\ntwilio.run('test', '+12484345508')\nfield account_sid: Optional[str] = None#\nTwilio account string identifier.", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "15bd39437f78-16", "text": "field account_sid: Optional[str] = None#\nTwilio account string identifier.\nfield auth_token: Optional[str] = None#\nTwilio auth token.\nfield from_number: Optional[str] = None#\nA Twilio phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164)\nformat, an\n[alphanumeric sender ID](https://www.twilio.com/docs/sms/send-messages#use-an-alphanumeric-sender-id),\nor a [Channel Endpoint address](https://www.twilio.com/docs/sms/channels#channel-addresses)\nthat is enabled for the type of message you want to send. Phone numbers or\n[short codes](https://www.twilio.com/docs/sms/api/short-code) purchased from\nTwilio also work here. You cannot, for example, spoof messages from a private\ncell phone number. If you are using messaging_service_sid, this parameter\nmust be empty.\nrun(body: str, to: str) \u2192 str[source]#\nRun body through Twilio and respond with message sid.\nParameters\nbody \u2013 The text of the message you want to send. Can be up to 1,600\ncharacters in length.\nto \u2013 The destination phone number in\n[E.164](https://www.twilio.com/docs/glossary/what-e164) format for\nSMS/MMS or\n[Channel user address](https://www.twilio.com/docs/sms/channels#channel-addresses)\nfor other 3rd-party channels.\npydantic model langchain.utilities.WikipediaAPIWrapper[source]#\nWrapper around WikipediaAPI.\nTo use, you should have the wikipedia python package installed.\nThis wrapper will use the Wikipedia API to conduct searches and\nfetch page summaries. By default, it will return the page summaries\nof the top-k results.", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "15bd39437f78-17", "text": "of the top-k results.\nIt limits the Document content by doc_content_chars_max.\nfield doc_content_chars_max: int = 4000#\nfield lang: str = 'en'#\nfield load_all_available_meta: bool = False#\nfield top_k_results: int = 3#\nload(query: str) \u2192 List[langchain.schema.Document][source]#\nRun Wikipedia search and get the article text plus the meta information.\nSee\nReturns: a list of documents.\nrun(query: str) \u2192 str[source]#\nRun Wikipedia search and get page summaries.\npydantic model langchain.utilities.WolframAlphaAPIWrapper[source]#\nWrapper for Wolfram Alpha.\nDocs for using:\nGo to wolfram alpha and sign up for a developer account\nCreate an app and get your APP ID\nSave your APP ID into WOLFRAM_ALPHA_APPID env variable\npip install wolframalpha\nfield wolfram_alpha_appid: Optional[str] = None#\nrun(query: str) \u2192 str[source]#\nRun query through WolframAlpha and parse result.\nprevious\nAgent Toolkits\nnext\nExperimental Modules\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/utilities.html"}
+{"id": "6c4c047a51a9-0", "text": ".rst\n.pdf\nSearxNG Search\n Contents \nQuick Start\nSearching\nEngine Parameters\nSearch Tips\nSearxNG Search#\nUtility for using SearxNG meta search API.\nSearxNG is a privacy-friendly free metasearch engine that aggregates results from\nmultiple search engines and databases and\nsupports the OpenSearch\nspecification.\nMore details on the installation instructions here.\nFor the search API refer to https://docs.searxng.org/dev/search_api.html\nQuick Start#\nIn order to use this utility you need to provide the searx host. This can be done\nby passing the named parameter searx_host\nor exporting the environment variable SEARX_HOST.\nNote: this is the only required parameter.\nThen create a searx search instance like this:\nfrom langchain.utilities import SearxSearchWrapper\n# when the host starts with `http` SSL is disabled and the connection\n# is assumed to be on a private network\nsearx_host='http://self.hosted'\nsearch = SearxSearchWrapper(searx_host=searx_host)\nYou can now use the search instance to query the searx API.\nSearching#\nUse the run() and\nresults() methods to query the searx API.\nOther methods are available for convenience.\nSearxResults is a convenience wrapper around the raw json result.\nExample usage of the run method to make a search:\ns.run(query=\"what is the best search engine?\")\nEngine Parameters#\nYou can pass any accepted searx search API parameters to the\nSearxSearchWrapper instance.\nIn the following example we are using the\nengines and the language parameters:\n# assuming the searx host is set as above or exported as an env variable", "source": "https://python.langchain.com/en/latest/reference/modules/searx_search.html"}
+{"id": "6c4c047a51a9-1", "text": "# assuming the searx host is set as above or exported as an env variable\ns = SearxSearchWrapper(engines=['google', 'bing'],\n language='es')\nSearch Tips#\nSearx offers a special\nsearch syntax\nthat can also be used instead of passing engine parameters.\nFor example the following query:\ns = SearxSearchWrapper(\"langchain library\", engines=['github'])\n# can also be written as:\ns = SearxSearchWrapper(\"langchain library !github\")\n# or even:\ns = SearxSearchWrapper(\"langchain library !gh\")\nIn some situations you might want to pass an extra string to the search query.\nFor example when the run() method is called by an agent. The search suffix can\nalso be used as a way to pass extra parameters to searx or the underlying search\nengines.\n# select the github engine and pass the search suffix\ns = SearchWrapper(\"langchain library\", query_suffix=\"!gh\")\ns = SearchWrapper(\"langchain library\")\n# select github the conventional google search syntax\ns.run(\"large language models\", query_suffix=\"site:github.com\")\nNOTE: A search suffix can be defined on both the instance and the method level.\nThe resulting query will be the concatenation of the two with the former taking\nprecedence.\nSee SearxNG Configured Engines and\nSearxNG Search Syntax\nfor more details.\nNotes\nThis wrapper is based on the SearxNG fork searxng/searxng which is\nbetter maintained than the original Searx project and offers more features.\nPublic searxNG instances often use a rate limiter for API usage, so you might want to\nuse a self hosted instance and disable the rate limiter.", "source": "https://python.langchain.com/en/latest/reference/modules/searx_search.html"}
+{"id": "6c4c047a51a9-2", "text": "use a self hosted instance and disable the rate limiter.\nIf you are self-hosting an instance you can customize the rate limiter for your\nown network as described here.\nFor a list of public SearxNG instances see https://searx.space/\nclass langchain.utilities.searx_search.SearxResults(data: str)[source]#\nDict like wrapper around search api results.\nproperty answers: Any#\nHelper accessor on the json result.\npydantic model langchain.utilities.searx_search.SearxSearchWrapper[source]#\nWrapper for Searx API.\nTo use you need to provide the searx host by passing the named parameter\nsearx_host or exporting the environment variable SEARX_HOST.\nIn some situations you might want to disable SSL verification, for example\nif you are running searx locally. You can do this by passing the named parameter\nunsecure. You can also pass the host url scheme as http to disable SSL.\nExample\nfrom langchain.utilities import SearxSearchWrapper\nsearx = SearxSearchWrapper(searx_host=\"http://localhost:8888\")\nExample with SSL disabled:from langchain.utilities import SearxSearchWrapper\n# note the unsecure parameter is not needed if you pass the url scheme as\n# http\nsearx = SearxSearchWrapper(searx_host=\"http://localhost:8888\",\n unsecure=True)\nValidators\ndisable_ssl_warnings \u00bb unsecure\nvalidate_params \u00bb all fields\nfield aiosession: Optional[Any] = None#\nfield categories: Optional[List[str]] = []#\nfield engines: Optional[List[str]] = []#\nfield headers: Optional[dict] = None#\nfield k: int = 10#\nfield params: dict [Optional]#\nfield query_suffix: Optional[str] = ''#", "source": "https://python.langchain.com/en/latest/reference/modules/searx_search.html"}
+{"id": "6c4c047a51a9-3", "text": "field params: dict [Optional]#\nfield query_suffix: Optional[str] = ''#\nfield searx_host: str = ''#\nfield unsecure: bool = False#\nasync aresults(query: str, num_results: int, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) \u2192 List[Dict][source]#\nAsynchronously query with json results.\nUses aiohttp. See results for more info.\nasync arun(query: str, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) \u2192 str[source]#\nAsynchronously version of run.\nresults(query: str, num_results: int, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) \u2192 List[Dict][source]#\nRun query through Searx API and returns the results with metadata.\nParameters\nquery \u2013 The query to search for.\nquery_suffix \u2013 Extra suffix appended to the query.\nnum_results \u2013 Limit the number of results to return.\nengines \u2013 List of engines to use for the query.\ncategories \u2013 List of categories to use for the query.\n**kwargs \u2013 extra parameters to pass to the searx API.\nReturns\n{snippet: The description of the result.\ntitle: The title of the result.\nlink: The link to the result.\nengines: The engines used for the result.\ncategory: Searx category of the result.\n}\nReturn type\nDict with the following keys\nrun(query: str, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) \u2192 str[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/searx_search.html"}
+{"id": "6c4c047a51a9-4", "text": "Run query through Searx API and parse results.\nYou can pass any other params to the searx query API.\nParameters\nquery \u2013 The query to search for.\nquery_suffix \u2013 Extra suffix appended to the query.\nengines \u2013 List of engines to use for the query.\ncategories \u2013 List of categories to use for the query.\n**kwargs \u2013 extra parameters to pass to the searx API.\nReturns\nThe result of the query.\nReturn type\nstr\nRaises\nValueError \u2013 If an error occured with the query.\nExample\nThis will make a query to the qwant engine:\nfrom langchain.utilities import SearxSearchWrapper\nsearx = SearxSearchWrapper(searx_host=\"http://my.searx.host\")\nsearx.run(\"what is the weather in France ?\", engine=\"qwant\")\n# the same result can be achieved using the `!` syntax of searx\n# to select the engine using `query_suffix`\nsearx.run(\"what is the weather in France ?\", query_suffix=\"!qwant\")\n Contents\n \nQuick Start\nSearching\nEngine Parameters\nSearch Tips\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/searx_search.html"}
+{"id": "7487ca26a34d-0", "text": ".rst\n.pdf\nEmbeddings\nEmbeddings#\nWrappers around embedding modules.\npydantic model langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding[source]#\nWrapper for Aleph Alpha\u2019s Asymmetric Embeddings\nAA provides you with an endpoint to embed a document and a query.\nThe models were optimized to make the embeddings of documents and\nthe query for a document as similar as possible.\nTo learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/\nExample\nfrom aleph_alpha import AlephAlphaAsymmetricSemanticEmbedding\nembeddings = AlephAlphaSymmetricSemanticEmbedding()\ndocument = \"This is a content of the document\"\nquery = \"What is the content of the document?\"\ndoc_result = embeddings.embed_documents([document])\nquery_result = embeddings.embed_query(query)\nfield aleph_alpha_api_key: Optional[str] = None#\nAPI key for Aleph Alpha API.\nfield compress_to_size: Optional[int] = 128#\nShould the returned embeddings come back as an original 5120-dim vector,\nor should it be compressed to 128-dim.\nfield contextual_control_threshold: Optional[int] = None#\nAttention control parameters only apply to those tokens that have\nexplicitly been set in the request.\nfield control_log_additive: Optional[bool] = True#\nApply controls on prompt items by adding the log(control_factor)\nto attention scores.\nfield hosting: Optional[str] = 'https://api.aleph-alpha.com'#\nOptional parameter that specifies which datacenters may process the request.\nfield model: Optional[str] = 'luminous-base'#\nModel name to use.\nfield normalize: Optional[bool] = True#\nShould returned embeddings be normalized\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]#", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-1", "text": "embed_documents(texts: List[str]) \u2192 List[List[float]][source]#\nCall out to Aleph Alpha\u2019s asymmetric Document endpoint.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]#\nCall out to Aleph Alpha\u2019s asymmetric, query embedding endpoint\n:param text: The text to embed.\nReturns\nEmbeddings for the text.\npydantic model langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding[source]#\nThe symmetric version of the Aleph Alpha\u2019s semantic embeddings.\nThe main difference is that here, both the documents and\nqueries are embedded with a SemanticRepresentation.Symmetric\n.. rubric:: Example\nfrom aleph_alpha import AlephAlphaSymmetricSemanticEmbedding\nembeddings = AlephAlphaAsymmetricSemanticEmbedding()\ntext = \"This is a test text\"\ndoc_result = embeddings.embed_documents([text])\nquery_result = embeddings.embed_query(text)\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]#\nCall out to Aleph Alpha\u2019s Document endpoint.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]#\nCall out to Aleph Alpha\u2019s asymmetric, query embedding endpoint\n:param text: The text to embed.\nReturns\nEmbeddings for the text.\npydantic model langchain.embeddings.BedrockEmbeddings[source]#\nEmbeddings provider to invoke Bedrock embedding models.\nTo authenticate, the AWS client uses the following methods to\nautomatically load credentials:\nhttps://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nIf a specific credential profile should be used, you must pass", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-2", "text": "If a specific credential profile should be used, you must pass\nthe name of the profile from the ~/.aws/credentials file that is to be used.\nMake sure the credentials / roles used have the required policies to\naccess the Bedrock service.\nfield credentials_profile_name: Optional[str] = None#\nThe name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\nhas either access keys or role information specified.\nIf not specified, the default credential profile or, if on an EC2 instance,\ncredentials from IMDS will be used.\nSee: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nfield model_id: str = 'amazon.titan-e1t-medium'#\nId of the model to call, e.g., amazon.titan-e1t-medium, this is\nequivalent to the modelId property in the list-foundation-models api\nfield model_kwargs: Optional[Dict] = None#\nKey word arguments to pass to the model.\nfield region_name: Optional[str] = None#\nThe aws region e.g., us-west-2. Fallsback to AWS_DEFAULT_REGION env variable\nor region specified in ~/.aws/config in case it is not provided here.\nembed_documents(texts: List[str], chunk_size: int = 1) \u2192 List[List[float]][source]#\nCompute doc embeddings using a Bedrock model.\nParameters\ntexts \u2013 The list of texts to embed.\nchunk_size \u2013 Bedrock currently only allows single string\ninputs, so chunk size is always 1. This input is here\nonly for compatibility with the embeddings interface.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]#\nCompute query embeddings using a Bedrock model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-3", "text": "Parameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\npydantic model langchain.embeddings.CohereEmbeddings[source]#\nWrapper around Cohere embedding models.\nTo use, you should have the cohere python package installed, and the\nenvironment variable COHERE_API_KEY set with your API key or pass it\nas a named parameter to the constructor.\nExample\nfrom langchain.embeddings import CohereEmbeddings\ncohere = CohereEmbeddings(\n model=\"embed-english-light-v2.0\", cohere_api_key=\"my-api-key\"\n)\nfield model: str = 'embed-english-v2.0'#\nModel name to use.\nfield truncate: Optional[str] = None#\nTruncate embeddings that are too long from start or end (\u201cNONE\u201d|\u201dSTART\u201d|\u201dEND\u201d)\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]#\nCall out to Cohere\u2019s embedding endpoint.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]#\nCall out to Cohere\u2019s embedding endpoint.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\npydantic model langchain.embeddings.DeepInfraEmbeddings[source]#\nWrapper around Deep Infra\u2019s embedding inference service.\nTo use, you should have the\nenvironment variable DEEPINFRA_API_TOKEN set with your API token, or pass\nit as a named parameter to the constructor.\nThere are multiple embeddings models available,\nsee https://deepinfra.com/models?type=embeddings.\nExample\nfrom langchain.embeddings import DeepInfraEmbeddings\ndeepinfra_emb = DeepInfraEmbeddings(", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-4", "text": "deepinfra_emb = DeepInfraEmbeddings(\n model_id=\"sentence-transformers/clip-ViT-B-32\",\n deepinfra_api_token=\"my-api-key\"\n)\nr1 = deepinfra_emb.embed_documents(\n [\n \"Alpha is the first letter of Greek alphabet\",\n \"Beta is the second letter of Greek alphabet\",\n ]\n)\nr2 = deepinfra_emb.embed_query(\n \"What is the second letter of Greek alphabet\"\n)\nfield embed_instruction: str = 'passage: '#\nInstruction used to embed documents.\nfield model_id: str = 'sentence-transformers/clip-ViT-B-32'#\nEmbeddings model to use.\nfield model_kwargs: Optional[dict] = None#\nOther model keyword args\nfield normalize: bool = False#\nwhether to normalize the computed embeddings\nfield query_instruction: str = 'query: '#\nInstruction used to embed the query.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]#\nEmbed documents using a Deep Infra deployed embedding model.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]#\nEmbed a query using a Deep Infra deployed embedding model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nclass langchain.embeddings.ElasticsearchEmbeddings(client: MlClient, model_id: str, *, input_field: str = 'text_field')[source]#\nWrapper around Elasticsearch embedding models.\nThis class provides an interface to generate embeddings using a model deployed\nin an Elasticsearch cluster. It requires an Elasticsearch connection object\nand the model_id of the model deployed in the cluster.\nIn Elasticsearch you need to have an embedding model loaded and deployed.", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-5", "text": "In Elasticsearch you need to have an embedding model loaded and deployed.\n- https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html\n- https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]#\nGenerate embeddings for a list of documents.\nParameters\ntexts (List[str]) \u2013 A list of document text strings to generate embeddings\nfor.\nReturns\nA list of embeddings, one for each document in the inputlist.\nReturn type\nList[List[float]]\nembed_query(text: str) \u2192 List[float][source]#\nGenerate an embedding for a single query text.\nParameters\ntext (str) \u2013 The query text to generate an embedding for.\nReturns\nThe embedding for the input query text.\nReturn type\nList[float]\nclassmethod from_credentials(model_id: str, *, es_cloud_id: Optional[str] = None, es_user: Optional[str] = None, es_password: Optional[str] = None, input_field: str = 'text_field') \u2192 langchain.embeddings.elasticsearch.ElasticsearchEmbeddings[source]#\nInstantiate embeddings from Elasticsearch credentials.\nParameters\nmodel_id (str) \u2013 The model_id of the model deployed in the Elasticsearch\ncluster.\ninput_field (str) \u2013 The name of the key for the input text field in the\ndocument. Defaults to \u2018text_field\u2019.\nes_cloud_id \u2013 (str, optional): The Elasticsearch cloud ID to connect to.\nes_user \u2013 (str, optional): Elasticsearch username.\nes_password \u2013 (str, optional): Elasticsearch password.\nExample\nfrom langchain.embeddings import ElasticsearchEmbeddings\n# Define the model ID and input field name (if different from default)\nmodel_id = \"your_model_id\"\n# Optional, only if different from 'text_field'", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-6", "text": "# Optional, only if different from 'text_field'\ninput_field = \"your_input_field\"\n# Credentials can be passed in two ways. Either set the env vars\n# ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically\n# pulled in, or pass them in directly as kwargs.\nembeddings = ElasticsearchEmbeddings.from_credentials(\n model_id,\n input_field=input_field,\n # es_cloud_id=\"foo\",\n # es_user=\"bar\",\n # es_password=\"baz\",\n)\ndocuments = [\n \"This is an example document.\",\n \"Another example document to generate embeddings for.\",\n]\nembeddings_generator.embed_documents(documents)\nclassmethod from_es_connection(model_id: str, es_connection: Elasticsearch, input_field: str = 'text_field') \u2192 ElasticsearchEmbeddings[source]#\nInstantiate embeddings from an existing Elasticsearch connection.\nThis method provides a way to create an instance of the ElasticsearchEmbeddings\nclass using an existing Elasticsearch connection. The connection object is used\nto create an MlClient, which is then used to initialize the\nElasticsearchEmbeddings instance.\nArgs:\nmodel_id (str): The model_id of the model deployed in the Elasticsearch cluster.\nes_connection (elasticsearch.Elasticsearch): An existing Elasticsearch\nconnection object. input_field (str, optional): The name of the key for the\ninput text field in the document. Defaults to \u2018text_field\u2019.\nReturns:\nElasticsearchEmbeddings: An instance of the ElasticsearchEmbeddings class.\nExample\nfrom elasticsearch import Elasticsearch\nfrom langchain.embeddings import ElasticsearchEmbeddings\n# Define the model ID and input field name (if different from default)\nmodel_id = \"your_model_id\"\n# Optional, only if different from 'text_field'\ninput_field = \"your_input_field\"\n# Create Elasticsearch connection", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-7", "text": "input_field = \"your_input_field\"\n# Create Elasticsearch connection\nes_connection = Elasticsearch(\n hosts=[\"localhost:9200\"], http_auth=(\"user\", \"password\")\n)\n# Instantiate ElasticsearchEmbeddings using the existing connection\nembeddings = ElasticsearchEmbeddings.from_es_connection(\n model_id,\n es_connection,\n input_field=input_field,\n)\ndocuments = [\n \"This is an example document.\",\n \"Another example document to generate embeddings for.\",\n]\nembeddings_generator.embed_documents(documents)\npydantic model langchain.embeddings.FakeEmbeddings[source]#\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]#\nEmbed search docs.\nembed_query(text: str) \u2192 List[float][source]#\nEmbed query text.\npydantic model langchain.embeddings.HuggingFaceEmbeddings[source]#\nWrapper around sentence_transformers embedding models.\nTo use, you should have the sentence_transformers python package installed.\nExample\nfrom langchain.embeddings import HuggingFaceEmbeddings\nmodel_name = \"sentence-transformers/all-mpnet-base-v2\"\nmodel_kwargs = {'device': 'cpu'}\nencode_kwargs = {'normalize_embeddings': False}\nhf = HuggingFaceEmbeddings(\n model_name=model_name,\n model_kwargs=model_kwargs,\n encode_kwargs=encode_kwargs\n)\nfield cache_folder: Optional[str] = None#\nPath to store models.\nCan be also set by SENTENCE_TRANSFORMERS_HOME environment variable.\nfield encode_kwargs: Dict[str, Any] [Optional]#\nKey word arguments to pass when calling the encode method of the model.\nfield model_kwargs: Dict[str, Any] [Optional]#\nKey word arguments to pass to the model.\nfield model_name: str = 'sentence-transformers/all-mpnet-base-v2'#", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-8", "text": "field model_name: str = 'sentence-transformers/all-mpnet-base-v2'#\nModel name to use.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]#\nCompute doc embeddings using a HuggingFace transformer model.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]#\nCompute query embeddings using a HuggingFace transformer model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\npydantic model langchain.embeddings.HuggingFaceHubEmbeddings[source]#\nWrapper around HuggingFaceHub embedding models.\nTo use, you should have the huggingface_hub python package installed, and the\nenvironment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass\nit as a named parameter to the constructor.\nExample\nfrom langchain.embeddings import HuggingFaceHubEmbeddings\nrepo_id = \"sentence-transformers/all-mpnet-base-v2\"\nhf = HuggingFaceHubEmbeddings(\n repo_id=repo_id,\n task=\"feature-extraction\",\n huggingfacehub_api_token=\"my-api-key\",\n)\nfield model_kwargs: Optional[dict] = None#\nKey word arguments to pass to the model.\nfield repo_id: str = 'sentence-transformers/all-mpnet-base-v2'#\nModel name to use.\nfield task: Optional[str] = 'feature-extraction'#\nTask to call the model with.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]#\nCall out to HuggingFaceHub\u2019s embedding endpoint for embedding search docs.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-9", "text": "Returns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]#\nCall out to HuggingFaceHub\u2019s embedding endpoint for embedding query text.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\npydantic model langchain.embeddings.HuggingFaceInstructEmbeddings[source]#\nWrapper around sentence_transformers embedding models.\nTo use, you should have the sentence_transformers\nand InstructorEmbedding python packages installed.\nExample\nfrom langchain.embeddings import HuggingFaceInstructEmbeddings\nmodel_name = \"hkunlp/instructor-large\"\nmodel_kwargs = {'device': 'cpu'}\nencode_kwargs = {'normalize_embeddings': True}\nhf = HuggingFaceInstructEmbeddings(\n model_name=model_name,\n model_kwargs=model_kwargs,\n encode_kwargs=encode_kwargs\n)\nfield cache_folder: Optional[str] = None#\nPath to store models.\nCan be also set by SENTENCE_TRANSFORMERS_HOME environment variable.\nfield embed_instruction: str = 'Represent the document for retrieval: '#\nInstruction to use for embedding documents.\nfield encode_kwargs: Dict[str, Any] [Optional]#\nKey word arguments to pass when calling the encode method of the model.\nfield model_kwargs: Dict[str, Any] [Optional]#\nKey word arguments to pass to the model.\nfield model_name: str = 'hkunlp/instructor-large'#\nModel name to use.\nfield query_instruction: str = 'Represent the question for retrieving supporting documents: '#\nInstruction to use for embedding query.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]#\nCompute doc embeddings using a HuggingFace instruct model.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-10", "text": "Returns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]#\nCompute query embeddings using a HuggingFace instruct model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\npydantic model langchain.embeddings.LlamaCppEmbeddings[source]#\nWrapper around llama.cpp embedding models.\nTo use, you should have the llama-cpp-python library installed, and provide the\npath to the Llama model as a named parameter to the constructor.\nCheck out: abetlen/llama-cpp-python\nExample\nfrom langchain.embeddings import LlamaCppEmbeddings\nllama = LlamaCppEmbeddings(model_path=\"/path/to/model.bin\")\nfield f16_kv: bool = False#\nUse half-precision for key/value cache.\nfield logits_all: bool = False#\nReturn logits for all tokens, not just the last token.\nfield n_batch: Optional[int] = 8#\nNumber of tokens to process in parallel.\nShould be a number between 1 and n_ctx.\nfield n_ctx: int = 512#\nToken context window.\nfield n_gpu_layers: Optional[int] = None#\nNumber of layers to be loaded into gpu memory. Default None.\nfield n_parts: int = -1#\nNumber of parts to split the model into.\nIf -1, the number of parts is automatically determined.\nfield n_threads: Optional[int] = None#\nNumber of threads to use. If None, the number\nof threads is automatically determined.\nfield seed: int = -1#\nSeed. If -1, a random seed is used.\nfield use_mlock: bool = False#\nForce system to keep model in RAM.\nfield vocab_only: bool = False#\nOnly load the vocabulary, no weights.", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-11", "text": "field vocab_only: bool = False#\nOnly load the vocabulary, no weights.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]#\nEmbed a list of documents using the Llama model.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]#\nEmbed a query using the Llama model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\npydantic model langchain.embeddings.MiniMaxEmbeddings[source]#\nWrapper around MiniMax\u2019s embedding inference service.\nTo use, you should have the environment variable MINIMAX_GROUP_ID and\nMINIMAX_API_KEY set with your API token, or pass it as a named parameter to\nthe constructor.\nExample\nfrom langchain.embeddings import MiniMaxEmbeddings\nembeddings = MiniMaxEmbeddings()\nquery_text = \"This is a test query.\"\nquery_result = embeddings.embed_query(query_text)\ndocument_text = \"This is a test document.\"\ndocument_result = embeddings.embed_documents([document_text])\nfield embed_type_db: str = 'db'#\nFor embed_documents\nfield embed_type_query: str = 'query'#\nFor embed_query\nfield endpoint_url: str = 'https://api.minimax.chat/v1/embeddings'#\nEndpoint URL to use.\nfield minimax_api_key: Optional[str] = None#\nAPI Key for MiniMax API.\nfield minimax_group_id: Optional[str] = None#\nGroup ID for MiniMax API.\nfield model: str = 'embo-01'#\nEmbeddings model name to use.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]#\nEmbed documents using a MiniMax embedding endpoint.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-12", "text": "Embed documents using a MiniMax embedding endpoint.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]#\nEmbed a query using a MiniMax embedding endpoint.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\npydantic model langchain.embeddings.ModelScopeEmbeddings[source]#\nWrapper around modelscope_hub embedding models.\nTo use, you should have the modelscope python package installed.\nExample\nfrom langchain.embeddings import ModelScopeEmbeddings\nmodel_id = \"damo/nlp_corom_sentence-embedding_english-base\"\nembed = ModelScopeEmbeddings(model_id=model_id)\nfield model_id: str = 'damo/nlp_corom_sentence-embedding_english-base'#\nModel name to use.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]#\nCompute doc embeddings using a modelscope embedding model.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]#\nCompute query embeddings using a modelscope embedding model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\npydantic model langchain.embeddings.MosaicMLInstructorEmbeddings[source]#\nWrapper around MosaicML\u2019s embedding inference service.\nTo use, you should have the\nenvironment variable MOSAICML_API_TOKEN set with your API token, or pass\nit as a named parameter to the constructor.\nExample\nfrom langchain.llms import MosaicMLInstructorEmbeddings\nendpoint_url = (\n \"https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict\"\n)", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-13", "text": ")\nmosaic_llm = MosaicMLInstructorEmbeddings(\n endpoint_url=endpoint_url,\n mosaicml_api_token=\"my-api-key\"\n)\nfield embed_instruction: str = 'Represent the document for retrieval: '#\nInstruction used to embed documents.\nfield endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict'#\nEndpoint URL to use.\nfield query_instruction: str = 'Represent the question for retrieving supporting documents: '#\nInstruction used to embed the query.\nfield retry_sleep: float = 1.0#\nHow long to try sleeping for if a rate limit is encountered\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]#\nEmbed documents using a MosaicML deployed instructor embedding model.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]#\nEmbed a query using a MosaicML deployed instructor embedding model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\npydantic model langchain.embeddings.OpenAIEmbeddings[source]#\nWrapper around OpenAI embedding models.\nTo use, you should have the openai python package installed, and the\nenvironment variable OPENAI_API_KEY set with your API key or pass it\nas a named parameter to the constructor.\nExample\nfrom langchain.embeddings import OpenAIEmbeddings\nopenai = OpenAIEmbeddings(openai_api_key=\"my-api-key\")\nIn order to use the library with Microsoft Azure endpoints, you need to set\nthe OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION.\nThe OPENAI_API_TYPE must be set to \u2018azure\u2019 and the others correspond to\nthe properties of your endpoint.", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-14", "text": "the properties of your endpoint.\nIn addition, the deployment name must be passed as the model parameter.\nExample\nimport os\nos.environ[\"OPENAI_API_TYPE\"] = \"azure\"\nos.environ[\"OPENAI_API_BASE\"] = \"https://\nfield endpoint_name: str = ''#", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-16", "text": "field endpoint_name: str = ''#\nThe name of the endpoint from the deployed Sagemaker model.\nMust be unique within an AWS Region.\nfield model_kwargs: Optional[Dict] = None#\nKey word arguments to pass to the model.\nfield region_name: str = ''#\nThe aws region where the Sagemaker model is deployed, eg. us-west-2.\nembed_documents(texts: List[str], chunk_size: int = 64) \u2192 List[List[float]][source]#\nCompute doc embeddings using a SageMaker Inference Endpoint.\nParameters\ntexts \u2013 The list of texts to embed.\nchunk_size \u2013 The chunk size defines how many input texts will\nbe grouped together as request. If None, will use the\nchunk size specified by the class.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]#\nCompute query embeddings using a SageMaker inference endpoint.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\npydantic model langchain.embeddings.SelfHostedEmbeddings[source]#\nRuns custom embedding models on self-hosted remote hardware.\nSupported hardware includes auto-launched instances on AWS, GCP, Azure,\nand Lambda, as well as servers specified\nby IP address and SSH credentials (such as on-prem, or another\ncloud like Paperspace, Coreweave, etc.).\nTo use, you should have the runhouse python package installed.\nExample using a model load function:from langchain.embeddings import SelfHostedEmbeddings\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\nimport runhouse as rh\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\ndef get_pipeline():\n model_id = \"facebook/bart-large\"", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-17", "text": "def get_pipeline():\n model_id = \"facebook/bart-large\"\n tokenizer = AutoTokenizer.from_pretrained(model_id)\n model = AutoModelForCausalLM.from_pretrained(model_id)\n return pipeline(\"feature-extraction\", model=model, tokenizer=tokenizer)\nembeddings = SelfHostedEmbeddings(\n model_load_fn=get_pipeline,\n hardware=gpu\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n)\nExample passing in a pipeline path:from langchain.embeddings import SelfHostedHFEmbeddings\nimport runhouse as rh\nfrom transformers import pipeline\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\npipeline = pipeline(model=\"bert-base-uncased\", task=\"feature-extraction\")\nrh.blob(pickle.dumps(pipeline),\n path=\"models/pipeline.pkl\").save().to(gpu, path=\"models\")\nembeddings = SelfHostedHFEmbeddings.from_pipeline(\n pipeline=\"models/pipeline.pkl\",\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n)\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield inference_fn: Callable = #\nInference function to extract the embeddings on the remote hardware.\nfield inference_kwargs: Any = None#\nAny kwargs to pass to the model\u2019s inference function.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]#\nCompute doc embeddings using a HuggingFace transformer model.\nParameters\ntexts \u2013 The list of texts to embed.s\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]#\nCompute query embeddings using a HuggingFace transformer model.\nParameters\ntext \u2013 The text to embed.\nReturns", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-18", "text": "Parameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\npydantic model langchain.embeddings.SelfHostedHuggingFaceEmbeddings[source]#\nRuns sentence_transformers embedding models on self-hosted remote hardware.\nSupported hardware includes auto-launched instances on AWS, GCP, Azure,\nand Lambda, as well as servers specified\nby IP address and SSH credentials (such as on-prem, or another cloud\nlike Paperspace, Coreweave, etc.).\nTo use, you should have the runhouse python package installed.\nExample\nfrom langchain.embeddings import SelfHostedHuggingFaceEmbeddings\nimport runhouse as rh\nmodel_name = \"sentence-transformers/all-mpnet-base-v2\"\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\nhf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield hardware: Any = None#\nRemote hardware to send the inference function to.\nfield inference_fn: Callable = #\nInference function to extract the embeddings.\nfield load_fn_kwargs: Optional[dict] = None#\nKey word arguments to pass to the model load function.\nfield model_id: str = 'sentence-transformers/all-mpnet-base-v2'#\nModel name to use.\nfield model_load_fn: Callable = #\nFunction to load the model remotely on the server.\nfield model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']#\nRequirements to install on hardware to inference the model.\npydantic model langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings[source]#\nRuns InstructorEmbedding embedding models on self-hosted remote hardware.", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-19", "text": "Runs InstructorEmbedding embedding models on self-hosted remote hardware.\nSupported hardware includes auto-launched instances on AWS, GCP, Azure,\nand Lambda, as well as servers specified\nby IP address and SSH credentials (such as on-prem, or another\ncloud like Paperspace, Coreweave, etc.).\nTo use, you should have the runhouse python package installed.\nExample\nfrom langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings\nimport runhouse as rh\nmodel_name = \"hkunlp/instructor-large\"\ngpu = rh.cluster(name='rh-a10x', instance_type='A100:1')\nhf = SelfHostedHuggingFaceInstructEmbeddings(\n model_name=model_name, hardware=gpu)\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield embed_instruction: str = 'Represent the document for retrieval: '#\nInstruction to use for embedding documents.\nfield model_id: str = 'hkunlp/instructor-large'#\nModel name to use.\nfield model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']#\nRequirements to install on hardware to inference the model.\nfield query_instruction: str = 'Represent the question for retrieving supporting documents: '#\nInstruction to use for embedding query.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]#\nCompute doc embeddings using a HuggingFace instruct model.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]#\nCompute query embeddings using a HuggingFace instruct model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nlangchain.embeddings.SentenceTransformerEmbeddings#", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "7487ca26a34d-20", "text": "Returns\nEmbeddings for the text.\nlangchain.embeddings.SentenceTransformerEmbeddings#\nalias of langchain.embeddings.huggingface.HuggingFaceEmbeddings\npydantic model langchain.embeddings.TensorflowHubEmbeddings[source]#\nWrapper around tensorflow_hub embedding models.\nTo use, you should have the tensorflow_text python package installed.\nExample\nfrom langchain.embeddings import TensorflowHubEmbeddings\nurl = \"https://tfhub.dev/google/universal-sentence-encoder-multilingual/3\"\ntf = TensorflowHubEmbeddings(model_url=url)\nfield model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3'#\nModel name to use.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]#\nCompute doc embeddings using a TensorflowHub embedding model.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]#\nCompute query embeddings using a TensorflowHub embedding model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nprevious\nChat Models\nnext\nIndexes\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/embeddings.html"}
+{"id": "6a450e95639a-0", "text": ".rst\n.pdf\nDocument Loaders\nDocument Loaders#\nAll different types of document loaders.\nclass langchain.document_loaders.AZLyricsLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#\nLoader that loads AZLyrics webpages.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad webpage.\nclass langchain.document_loaders.AirbyteJSONLoader(file_path: str)[source]#\nLoader that loads local airbyte json files.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad file.\nclass langchain.document_loaders.AirtableLoader(api_token: str, table_id: str, base_id: str)[source]#\nLoader that loads local airbyte json files.\nlazy_load() \u2192 Iterator[langchain.schema.Document][source]#\nLoad Table.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad Table.\npydantic model langchain.document_loaders.ApifyDatasetLoader[source]#\nLogic for loading documents from Apify datasets.\nfield apify_client: Any = None#\nfield dataset_id: str [Required]#\nThe ID of the dataset on the Apify platform.\nfield dataset_mapping_function: Callable[[Dict], langchain.schema.Document] [Required]#\nA custom function that takes a single dictionary (an Apify dataset item)\nand converts it to an instance of the Document class.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.ArxivLoader(query: str, load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False)[source]#\nLoads a query result from arxiv.org into a list of Documents.\nEach document represents one Document.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-1", "text": "Each document represents one Document.\nThe loader converts the original PDF format into the text.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nclass langchain.document_loaders.AzureBlobStorageContainerLoader(conn_str: str, container: str, prefix: str = '')[source]#\nLoading logic for loading documents from Azure Blob Storage.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.AzureBlobStorageFileLoader(conn_str: str, container: str, blob_name: str)[source]#\nLoading logic for loading documents from Azure Blob Storage.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.BSHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]#\nLoader that uses beautiful soup to parse HTML files.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nclass langchain.document_loaders.BibtexLoader(file_path: str, *, parser: Optional[langchain.utilities.bibtex.BibtexparserWrapper] = None, max_docs: Optional[int] = None, max_content_chars: Optional[int] = 4000, load_extra_metadata: bool = False, file_pattern: str = '[^:]+\\\\.pdf')[source]#\nLoads a bibtex file into a list of Documents.\nEach document represents one entry from the bibtex file.\nIf a PDF file is present in the file bibtex field, the original PDF\nis loaded into the document text. If no such file entry is present,\nthe abstract field is used instead.\nlazy_load() \u2192 Iterator[langchain.schema.Document][source]#", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-2", "text": "lazy_load() \u2192 Iterator[langchain.schema.Document][source]#\nLoad bibtex file using bibtexparser and get the article texts plus the\narticle metadata.\nSee https://bibtexparser.readthedocs.io/en/master/\nReturns\na list of documents with the document.page_content in text format\nload() \u2192 List[langchain.schema.Document][source]#\nLoad bibtex file documents from the given bibtex file path.\nSee https://bibtexparser.readthedocs.io/en/master/\nParameters\nfile_path \u2013 the path to the bibtex file\nReturns\na list of documents with the document.page_content in text format\nclass langchain.document_loaders.BigQueryLoader(query: str, project: Optional[str] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None, credentials: Optional[Credentials] = None)[source]#\nLoads a query result from BigQuery into a list of documents.\nEach document represents one row of the result. The page_content_columns\nare written into the page_content of the document. The metadata_columns\nare written into the metadata of the document. By default, all columns\nare written into the page_content and none into the metadata.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nclass langchain.document_loaders.BiliBiliLoader(video_urls: List[str])[source]#\nLoader that loads bilibili transcripts.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad from bilibili url.\nclass langchain.document_loaders.BlackboardLoader(blackboard_course_url: str, bbrouter: str, load_all_recursively: bool = True, basic_auth: Optional[Tuple[str, str]] = None, cookies: Optional[dict] = None)[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-3", "text": "Loader that loads all documents from a Blackboard course.\nThis loader is not compatible with all Blackboard courses. It is only\ncompatible with courses that use the new Blackboard interface.\nTo use this loader, you must have the BbRouter cookie. You can get this\ncookie by logging into the course and then copying the value of the\nBbRouter cookie from the browser\u2019s developer tools.\nExample\nfrom langchain.document_loaders import BlackboardLoader\nloader = BlackboardLoader(\n blackboard_course_url=\"https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1\",\n bbrouter=\"expires:12345...\",\n)\ndocuments = loader.load()\nbase_url: str#\ncheck_bs4() \u2192 None[source]#\nCheck if BeautifulSoup4 is installed.\nRaises\nImportError \u2013 If BeautifulSoup4 is not installed.\ndownload(path: str) \u2192 None[source]#\nDownload a file from a url.\nParameters\npath \u2013 Path to the file.\nfolder_path: str#\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nReturns\nList of documents.\nload_all_recursively: bool#\nparse_filename(url: str) \u2192 str[source]#\nParse the filename from a url.\nParameters\nurl \u2013 Url to parse the filename from.\nReturns\nThe filename.\nclass langchain.document_loaders.BlockchainDocumentLoader(contract_address: str, blockchainType: langchain.document_loaders.blockchain.BlockchainType = BlockchainType.ETH_MAINNET, api_key: str = 'docs-demo', startToken: str = '', get_all_tokens: bool = False, max_execution_time: Optional[int] = None)[source]#\nLoads elements from a blockchain smart contract into Langchain documents.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-4", "text": "Loads elements from a blockchain smart contract into Langchain documents.\nThe supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,\nPolygon mainnet, and Polygon Mumbai testnet.\nIf no BlockchainType is specified, the default is Ethereum mainnet.\nThe Loader uses the Alchemy API to interact with the blockchain.\nALCHEMY_API_KEY environment variable must be set to use this loader.\nThe API returns 100 NFTs per request and can be paginated using the\nstartToken parameter.\nIf get_all_tokens is set to True, the loader will get all tokens\non the contract. Note that for contracts with a large number of tokens,\nthis may take a long time (e.g. 10k tokens is 100 requests).\nDefault value is false for this reason.\nThe max_execution_time (sec) can be set to limit the execution time\nof the loader.\nFuture versions of this loader can:\nSupport additional Alchemy APIs (e.g. getTransactions, etc.)\nSupport additional blockain APIs (e.g. Infura, Opensea, etc.)\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nclass langchain.document_loaders.CSVLoader(file_path: str, source_column: Optional[str] = None, csv_args: Optional[Dict] = None, encoding: Optional[str] = None)[source]#\nLoads a CSV file into a list of documents.\nEach document represents one row of the CSV file. Every row is converted into a\nkey/value pair and outputted to a new line in the document\u2019s page_content.\nThe source for each document loaded from csv is set to the value of the\nfile_path argument for all doucments by default.\nYou can override this by setting the source_column argument to the\nname of a column in the CSV file.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-5", "text": "name of a column in the CSV file.\nThe source of each document will then be set to the value of the column\nwith the name specified in source_column.\nOutput Example:column1: value1\ncolumn2: value2\ncolumn3: value3\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nclass langchain.document_loaders.ChatGPTLoader(log_file: str, num_logs: int = - 1)[source]#\nLoader that loads conversations from exported ChatGPT data.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nclass langchain.document_loaders.CoNLLULoader(file_path: str)[source]#\nLoad CoNLL-U files.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad from file path.\nclass langchain.document_loaders.CollegeConfidentialLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#\nLoader that loads College Confidential webpages.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad webpage.\nclass langchain.document_loaders.ConfluenceLoader(url: str, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None, cloud: Optional[bool] = True, number_of_retries: Optional[int] = 3, min_retry_seconds: Optional[int] = 2, max_retry_seconds: Optional[int] = 10, confluence_kwargs: Optional[dict] = None)[source]#\nLoad Confluence pages. Port of https://llamahub.ai/l/confluence\nThis currently supports username/api_key, Oauth2 login or personal access token\nauthentication.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-6", "text": "This currently supports username/api_key, Oauth2 login or personal access token\nauthentication.\nSpecify a list page_ids and/or space_key to load in the corresponding pages into\nDocument objects, if both are specified the union of both sets will be returned.\nYou can also specify a boolean include_attachments to include attachments, this\nis set to False by default, if set to True all attachments will be downloaded and\nConfluenceReader will extract the text from the attachments and add it to the\nDocument object. Currently supported attachment types are: PDF, PNG, JPEG/JPG,\nSVG, Word and Excel.\nHint: space_key and page_id can both be found in the URL of a page in Confluence\n- https://yoursite.atlassian.com/wiki/spaces//pages/\nExample\nfrom langchain.document_loaders import ConfluenceLoader\nloader = ConfluenceLoader(\n url=\"https://yoursite.atlassian.com/wiki\",\n username=\"me\",\n api_key=\"12345\"\n)\ndocuments = loader.load(space_key=\"SPACE\",limit=50)\nParameters\nurl (str) \u2013 _description_\napi_key (str, optional) \u2013 _description_, defaults to None\nusername (str, optional) \u2013 _description_, defaults to None\noauth2 (dict, optional) \u2013 _description_, defaults to {}\ntoken (str, optional) \u2013 _description_, defaults to None\ncloud (bool, optional) \u2013 _description_, defaults to True\nnumber_of_retries (Optional[int], optional) \u2013 How many times to retry, defaults to 3\nmin_retry_seconds (Optional[int], optional) \u2013 defaults to 2\nmax_retry_seconds (Optional[int], optional) \u2013 defaults to 10\nconfluence_kwargs (dict, optional) \u2013 additional kwargs to initialize confluence with\nRaises\nValueError \u2013 Errors while validating input", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-7", "text": "Raises\nValueError \u2013 Errors while validating input\nImportError \u2013 Required dependencies not installed.\nis_public_page(page: dict) \u2192 bool[source]#\nCheck if a page is publicly accessible.\nload(space_key: Optional[str] = None, page_ids: Optional[List[str]] = None, label: Optional[str] = None, cql: Optional[str] = None, include_restricted_content: bool = False, include_archived_content: bool = False, include_attachments: bool = False, include_comments: bool = False, limit: Optional[int] = 50, max_pages: Optional[int] = 1000, ocr_languages: Optional[str] = None) \u2192 List[langchain.schema.Document][source]#\nParameters\nspace_key (Optional[str], optional) \u2013 Space key retrieved from a confluence URL, defaults to None\npage_ids (Optional[List[str]], optional) \u2013 List of specific page IDs to load, defaults to None\nlabel (Optional[str], optional) \u2013 Get all pages with this label, defaults to None\ncql (Optional[str], optional) \u2013 CQL Expression, defaults to None\ninclude_restricted_content (bool, optional) \u2013 defaults to False\ninclude_archived_content (bool, optional) \u2013 Whether to include archived content,\ndefaults to False\ninclude_attachments (bool, optional) \u2013 defaults to False\ninclude_comments (bool, optional) \u2013 defaults to False\nlimit (int, optional) \u2013 Maximum number of pages to retrieve per request, defaults to 50\nmax_pages (int, optional) \u2013 Maximum number of pages to retrieve in total, defaults 1000\nocr_languages (str, optional) \u2013 The languages to use for the Tesseract agent. To use a\nlanguage, you\u2019ll first need to install the appropriate\nTesseract language pack.\nRaises\nValueError \u2013 _description_\nImportError \u2013 _description_\nReturns\n_description_\nReturn type", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-8", "text": "ImportError \u2013 _description_\nReturns\n_description_\nReturn type\nList[Document]\npaginate_request(retrieval_method: Callable, **kwargs: Any) \u2192 List[source]#\nPaginate the various methods to retrieve groups of pages.\nUnfortunately, due to page size, sometimes the Confluence API\ndoesn\u2019t match the limit value. If limit is >100 confluence\nseems to cap the response to 100. Also, due to the Atlassian Python\npackage, we don\u2019t get the \u201cnext\u201d values from the \u201c_links\u201d key because\nthey only return the value from the results key. So here, the pagination\nstarts from 0 and goes until the max_pages, getting the limit number\nof pages with each request. We have to manually check if there\nare more docs based on the length of the returned list of pages, rather than\njust checking for the presence of a next key in the response like this page\nwould have you do:\nhttps://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/\nParameters\nretrieval_method (callable) \u2013 Function used to retrieve docs\nReturns\nList of documents\nReturn type\nList\nprocess_attachment(page_id: str, ocr_languages: Optional[str] = None) \u2192 List[str][source]#\nprocess_doc(link: str) \u2192 str[source]#\nprocess_image(link: str, ocr_languages: Optional[str] = None) \u2192 str[source]#\nprocess_page(page: dict, include_attachments: bool, include_comments: bool, ocr_languages: Optional[str] = None) \u2192 langchain.schema.Document[source]#\nprocess_pages(pages: List[dict], include_restricted_content: bool, include_attachments: bool, include_comments: bool, ocr_languages: Optional[str] = None) \u2192 List[langchain.schema.Document][source]#", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-9", "text": "Process a list of pages into a list of documents.\nprocess_pdf(link: str, ocr_languages: Optional[str] = None) \u2192 str[source]#\nprocess_svg(link: str, ocr_languages: Optional[str] = None) \u2192 str[source]#\nprocess_xls(link: str) \u2192 str[source]#\nstatic validate_init_args(url: Optional[str] = None, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None) \u2192 Optional[List][source]#\nValidates proper combinations of init arguments\nclass langchain.document_loaders.DataFrameLoader(data_frame: Any, page_content_column: str = 'text')[source]#\nLoad Pandas DataFrames.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad from the dataframe.\nclass langchain.document_loaders.DiffbotLoader(api_token: str, urls: List[str], continue_on_failure: bool = True)[source]#\nLoader that loads Diffbot file json.\nload() \u2192 List[langchain.schema.Document][source]#\nExtract text from Diffbot on all the URLs and return Document instances\nclass langchain.document_loaders.DirectoryLoader(path: str, glob: str = '**/[!.]*', silent_errors: bool = False, load_hidden: bool = False, loader_cls: typing.Union[typing.Type[langchain.document_loaders.unstructured.UnstructuredFileLoader], typing.Type[langchain.document_loaders.text.TextLoader], typing.Type[langchain.document_loaders.html_bs.BSHTMLLoader]] = , loader_kwargs: typing.Optional[dict] = None, recursive: bool = False, show_progress: bool = False, use_multithreading: bool = False, max_concurrency: int = 4)[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-10", "text": "Loading logic for loading documents from a directory.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nload_file(item: pathlib.Path, path: pathlib.Path, docs: List[langchain.schema.Document], pbar: Optional[Any]) \u2192 None[source]#\nclass langchain.document_loaders.DiscordChatLoader(chat_log: pd.DataFrame, user_id_col: str = 'ID')[source]#\nLoad Discord chat logs.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad all chat messages.\npydantic model langchain.document_loaders.DocugamiLoader[source]#\nLoader that loads processed docs from Docugami.\nTo use, you should have the lxml python package installed.\nfield access_token: Optional[str] = None#\nfield api: str = 'https://api.docugami.com/v1preview1'#\nfield docset_id: Optional[str] = None#\nfield document_ids: Optional[Sequence[str]] = None#\nfield file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = None#\nfield min_chunk_size: int = 32#\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.Docx2txtLoader(file_path: str)[source]#\nLoads a DOCX with docx2txt and chunks at character level.\nDefaults to check for local file, but if the file is a web path, it will download it\nto a temporary file, and use that, then clean up the temporary file after completion\nload() \u2192 List[langchain.schema.Document][source]#\nLoad given path as single page.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-11", "text": "Load given path as single page.\nclass langchain.document_loaders.DuckDBLoader(query: str, database: str = ':memory:', read_only: bool = False, config: Optional[Dict[str, str]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]#\nLoads a query result from DuckDB into a list of documents.\nEach document represents one row of the result. The page_content_columns\nare written into the page_content of the document. The metadata_columns\nare written into the metadata of the document. By default, all columns\nare written into the page_content and none into the metadata.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nclass langchain.document_loaders.EverNoteLoader(file_path: str, load_single_document: bool = True)[source]#\nEverNote Loader.\nLoads an EverNote notebook export file e.g. my_notebook.enex into Documents.\nInstructions on producing this file can be found at\nhttps://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML\nCurrently only the plain text in the note is extracted and stored as the contents\nof the Document, any non content metadata (e.g. \u2018author\u2019, \u2018created\u2019, \u2018updated\u2019 etc.\nbut not \u2018content-raw\u2019 or \u2018resource\u2019) tags on the note will be extracted and stored\nas metadata on the Document.\nParameters\nfile_path (str) \u2013 The path to the notebook export with a .enex extension\nload_single_document (bool) \u2013 Whether or not to concatenate the content of all\nnotes into a single long Document.\nTrue (If this is set to) \u2013 the \u2018source\u2019 which contains the file name of the export.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-12", "text": "load() \u2192 List[langchain.schema.Document][source]#\nLoad documents from EverNote export file.\nclass langchain.document_loaders.FacebookChatLoader(path: str)[source]#\nLoader that loads Facebook messages json directory dump.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.FaunaLoader(query: str, page_content_field: str, secret: str, metadata_fields: Optional[Sequence[str]] = None)[source]#\nquery#\nThe FQL query string to execute.\nType\nstr\npage_content_field#\nThe field that contains the content of each page.\nType\nstr\nsecret#\nThe secret key for authenticating to FaunaDB.\nType\nstr\nmetadata_fields#\nOptional list of field names to include in metadata.\nType\nOptional[Sequence[str]]\nlazy_load() \u2192 Iterator[langchain.schema.Document][source]#\nA lazy loader for document content.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nclass langchain.document_loaders.FigmaFileLoader(access_token: str, ids: str, key: str)[source]#\nLoader that loads Figma file json.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad file\nclass langchain.document_loaders.GCSDirectoryLoader(project_name: str, bucket: str, prefix: str = '')[source]#\nLoading logic for loading documents from GCS.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.GCSFileLoader(project_name: str, bucket: str, blob: str)[source]#\nLoading logic for loading documents from GCS.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-13", "text": "load() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\npydantic model langchain.document_loaders.GitHubIssuesLoader[source]#\nValidators\nvalidate_environment \u00bb all fields\nvalidate_since \u00bb since\nfield assignee: Optional[str] = None#\nFilter on assigned user. Pass \u2018none\u2019 for no user and \u2018*\u2019 for any user.\nfield creator: Optional[str] = None#\nFilter on the user that created the issue.\nfield direction: Optional[Literal['asc', 'desc']] = None#\nThe direction to sort the results by. Can be one of: \u2018asc\u2019, \u2018desc\u2019.\nfield include_prs: bool = True#\nIf True include Pull Requests in results, otherwise ignore them.\nfield labels: Optional[List[str]] = None#\nLabel names to filter one. Example: bug,ui,@high.\nfield mentioned: Optional[str] = None#\nFilter on a user that\u2019s mentioned in the issue.\nfield milestone: Optional[Union[int, Literal['*', 'none']]] = None#\nIf integer is passed, it should be a milestone\u2019s number field.\nIf the string \u2018*\u2019 is passed, issues with any milestone are accepted.\nIf the string \u2018none\u2019 is passed, issues without milestones are returned.\nfield since: Optional[str] = None#\nOnly show notifications updated after the given time.\nThis is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.\nfield sort: Optional[Literal['created', 'updated', 'comments']] = None#\nWhat to sort results by. Can be one of: \u2018created\u2019, \u2018updated\u2019, \u2018comments\u2019.\nDefault is \u2018created\u2019.\nfield state: Optional[Literal['open', 'closed', 'all']] = None#", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-14", "text": "field state: Optional[Literal['open', 'closed', 'all']] = None#\nFilter on issue state. Can be one of: \u2018open\u2019, \u2018closed\u2019, \u2018all\u2019.\nlazy_load() \u2192 Iterator[langchain.schema.Document][source]#\nGet issues of a GitHub repository.\nReturns\npage_content\nmetadata\nurl\ntitle\ncreator\ncreated_at\nlast_update_time\nclosed_time\nnumber of comments\nstate\nlabels\nassignee\nassignees\nmilestone\nlocked\nnumber\nis_pull_request\nReturn type\nA list of Documents with attributes\nload() \u2192 List[langchain.schema.Document][source]#\nGet issues of a GitHub repository.\nReturns\npage_content\nmetadata\nurl\ntitle\ncreator\ncreated_at\nlast_update_time\nclosed_time\nnumber of comments\nstate\nlabels\nassignee\nassignees\nmilestone\nlocked\nnumber\nis_pull_request\nReturn type\nA list of Documents with attributes\nparse_issue(issue: dict) \u2192 langchain.schema.Document[source]#\nCreate Document objects from a list of GitHub issues.\nproperty query_params: str#\nproperty url: str#\nclass langchain.document_loaders.GitLoader(repo_path: str, clone_url: Optional[str] = None, branch: Optional[str] = 'main', file_filter: Optional[Callable[[str], bool]] = None)[source]#\nLoads files from a Git repository into a list of documents.\nRepository can be local on disk available at repo_path,\nor remote at clone_url that will be cloned to repo_path.\nCurrently supports only text files.\nEach document represents one file in the repository. The path points to\nthe local Git repository, and the branch specifies the branch to load\nfiles from. By default, it loads from the main branch.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-15", "text": "Load data into document objects.\nclass langchain.document_loaders.GitbookLoader(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main')[source]#\nLoad GitBook data.\nload from either a single page, or\nload all (relative) paths in the navbar.\nload() \u2192 List[langchain.schema.Document][source]#\nFetch text from one single GitBook page.\nclass langchain.document_loaders.GoogleApiClient(credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json'), service_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json'), token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json'))[source]#\nA Generic Google Api Client.\nTo use, you should have the google_auth_oauthlib,youtube_transcript_api,google\npython package installed.\nAs the google api expects credentials you need to set up a google account and\nregister your Service. \u201chttps://developers.google.com/docs/api/quickstart/python\u201d\nExample\nfrom langchain.document_loaders import GoogleApiClient\ngoogle_api_client = GoogleApiClient(\n service_account_path=Path(\"path_to_your_sec_file.json\")\n)\ncredentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')#\nservice_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')#\ntoken_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')#\nclassmethod validate_channel_or_videoIds_is_set(values: Dict[str, Any]) \u2192 Dict[str, Any][source]#\nValidate that either folder_id or document_ids is set, but not both.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-16", "text": "Validate that either folder_id or document_ids is set, but not both.\nclass langchain.document_loaders.GoogleApiYoutubeLoader(google_api_client: langchain.document_loaders.youtube.GoogleApiClient, channel_name: Optional[str] = None, video_ids: Optional[List[str]] = None, add_video_info: bool = True, captions_language: str = 'en', continue_on_failure: bool = False)[source]#\nLoader that loads all Videos from a Channel\nTo use, you should have the googleapiclient,youtube_transcript_api\npython package installed.\nAs the service needs a google_api_client, you first have to initialize\nthe GoogleApiClient.\nAdditionally you have to either provide a channel name or a list of videoids\n\u201chttps://developers.google.com/docs/api/quickstart/python\u201d\nExample\nfrom langchain.document_loaders import GoogleApiClient\nfrom langchain.document_loaders import GoogleApiYoutubeLoader\ngoogle_api_client = GoogleApiClient(\n service_account_path=Path(\"path_to_your_sec_file.json\")\n)\nloader = GoogleApiYoutubeLoader(\n google_api_client=google_api_client,\n channel_name = \"CodeAesthetic\"\n)\nload.load()\nadd_video_info: bool = True#\ncaptions_language: str = 'en'#\nchannel_name: Optional[str] = None#\ncontinue_on_failure: bool = False#\ngoogle_api_client: langchain.document_loaders.youtube.GoogleApiClient#\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclassmethod validate_channel_or_videoIds_is_set(values: Dict[str, Any]) \u2192 Dict[str, Any][source]#\nValidate that either folder_id or document_ids is set, but not both.\nvideo_ids: Optional[List[str]] = None#\npydantic model langchain.document_loaders.GoogleDriveLoader[source]#\nLoader that loads Google Docs from Google Drive.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-17", "text": "Loader that loads Google Docs from Google Drive.\nValidators\nvalidate_credentials_path \u00bb credentials_path\nvalidate_inputs \u00bb all fields\nfield credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')#\nfield document_ids: Optional[List[str]] = None#\nfield file_ids: Optional[List[str]] = None#\nfield file_types: Optional[Sequence[str]] = None#\nfield folder_id: Optional[str] = None#\nfield load_trashed_files: bool = False#\nfield recursive: bool = False#\nfield service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json')#\nfield token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')#\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.GutenbergLoader(file_path: str)[source]#\nLoader that uses urllib to load .txt web files.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad file.\nclass langchain.document_loaders.HNLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#\nLoad Hacker News data from either main page results or the comments page.\nload() \u2192 List[langchain.schema.Document][source]#\nGet important HN webpage information.\nComponents are:\ntitle\ncontent\nsource url,\ntime of post\nauthor of the post\nnumber of comments\nrank of the post\nload_comments(soup_info: Any) \u2192 List[langchain.schema.Document][source]#\nLoad comments from a HN post.\nload_results(soup: Any) \u2192 List[langchain.schema.Document][source]#\nLoad items from an HN page.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-18", "text": "Load items from an HN page.\nclass langchain.document_loaders.HuggingFaceDatasetLoader(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)[source]#\nLoading logic for loading documents from the Hugging Face Hub.\nlazy_load() \u2192 Iterator[langchain.schema.Document][source]#\nLoad documents lazily.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.IFixitLoader(web_path: str)[source]#\nLoad iFixit repair guides, device wikis and answers.\niFixit is the largest, open repair community on the web. The site contains nearly\n100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is\nlicensed under CC-BY.\nThis loader will allow you to download the text of a repair guide, text of Q&A\u2019s\nand wikis from devices on iFixit using their open APIs and web scraping.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nload_device(url_override: Optional[str] = None, include_guides: bool = True) \u2192 List[langchain.schema.Document][source]#\nload_guide(url_override: Optional[str] = None) \u2192 List[langchain.schema.Document][source]#", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-19", "text": "load_questions_and_answers(url_override: Optional[str] = None) \u2192 List[langchain.schema.Document][source]#\nstatic load_suggestions(query: str = '', doc_type: str = 'all') \u2192 List[langchain.schema.Document][source]#\nclass langchain.document_loaders.IMSDbLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#\nLoader that loads IMSDb webpages.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad webpage.\nclass langchain.document_loaders.ImageCaptionLoader(path_images: Union[str, List[str]], blip_processor: str = 'Salesforce/blip-image-captioning-base', blip_model: str = 'Salesforce/blip-image-captioning-base')[source]#\nLoader that loads the captions of an image\nload() \u2192 List[langchain.schema.Document][source]#\nLoad from a list of image files\nclass langchain.document_loaders.IuguLoader(resource: str, api_token: Optional[str] = None)[source]#\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nclass langchain.document_loaders.JSONLoader(file_path: Union[str, pathlib.Path], jq_schema: str, content_key: Optional[str] = None, metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None, text_content: bool = True)[source]#\nLoads a JSON file and references a jq schema provided to load the text into\ndocuments.\nExample\n[{\u201ctext\u201d: \u2026}, {\u201ctext\u201d: \u2026}, {\u201ctext\u201d: \u2026}] -> schema = .[].text\n{\u201ckey\u201d: [{\u201ctext\u201d: \u2026}, {\u201ctext\u201d: \u2026}, {\u201ctext\u201d: \u2026}]} -> schema = .key[].text\n[\u201c\u201d, \u201c\u201d, \u201c\u201d] -> schema = .[]", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-20", "text": "[\u201c\u201d, \u201c\u201d, \u201c\u201d] -> schema = .[]\nload() \u2192 List[langchain.schema.Document][source]#\nLoad and return documents from the JSON file.\nclass langchain.document_loaders.JoplinLoader(access_token: Optional[str] = None, port: int = 41184, host: str = 'localhost')[source]#\nLoader that fetches notes from Joplin.\nIn order to use this loader, you need to have Joplin running with the\nWeb Clipper enabled (look for \u201cWeb Clipper\u201d in the app settings).\nTo get the access token, you need to go to the Web Clipper options and\nunder \u201cAdvanced Options\u201d you will find the access token.\nYou can find more information about the Web Clipper service here:\nhttps://joplinapp.org/clipper/\nlazy_load() \u2192 Iterator[langchain.schema.Document][source]#\nA lazy loader for document content.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nclass langchain.document_loaders.MWDumpLoader(file_path: str, encoding: Optional[str] = 'utf8')[source]#\nLoad MediaWiki dump from XML file\n.. rubric:: Example\nfrom langchain.document_loaders import MWDumpLoader\nloader = MWDumpLoader(\n file_path=\"myWiki.xml\",\n encoding=\"utf8\"\n)\ndocs = loader.load()\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\ntext_splitter = RecursiveCharacterTextSplitter(\n chunk_size=1000, chunk_overlap=0\n)\ntexts = text_splitter.split_documents(docs)\nParameters\nfile_path (str) \u2013 XML local file path\nencoding (str, optional) \u2013 Charset encoding, defaults to \u201cutf8\u201d", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-21", "text": "encoding (str, optional) \u2013 Charset encoding, defaults to \u201cutf8\u201d\nload() \u2192 List[langchain.schema.Document][source]#\nLoad from file path.\nclass langchain.document_loaders.MastodonTootsLoader(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]#\nMastodon toots loader.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad toots into documents.\nclass langchain.document_loaders.MathpixPDFLoader(file_path: str, processed_file_format: str = 'mmd', max_wait_time_seconds: int = 500, should_clean_pdf: bool = False, **kwargs: Any)[source]#\nclean_pdf(contents: str) \u2192 str[source]#\nproperty data: dict#\nget_processed_pdf(pdf_id: str) \u2192 str[source]#\nproperty headers: dict#\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nsend_pdf() \u2192 str[source]#\nproperty url: str#\nwait_for_processing(pdf_id: str) \u2192 None[source]#\nclass langchain.document_loaders.MaxComputeLoader(query: str, api_wrapper: langchain.utilities.max_compute.MaxComputeAPIWrapper, *, page_content_columns: Optional[Sequence[str]] = None, metadata_columns: Optional[Sequence[str]] = None)[source]#\nLoads a query result from Alibaba Cloud MaxCompute table into documents.\nclassmethod from_params(query: str, endpoint: str, project: str, *, access_id: Optional[str] = None, secret_access_key: Optional[str] = None, **kwargs: Any) \u2192 langchain.document_loaders.max_compute.MaxComputeLoader[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-22", "text": "Convenience constructor that builds the MaxCompute API wrapper fromgiven parameters.\nParameters\nquery \u2013 SQL query to execute.\nendpoint \u2013 MaxCompute endpoint.\nproject \u2013 A project is a basic organizational unit of MaxCompute, which is\nsimilar to a database.\naccess_id \u2013 MaxCompute access ID. Should be passed in directly or set as the\nenvironment variable MAX_COMPUTE_ACCESS_ID.\nsecret_access_key \u2013 MaxCompute secret access key. Should be passed in\ndirectly or set as the environment variable\nMAX_COMPUTE_SECRET_ACCESS_KEY.\nlazy_load() \u2192 Iterator[langchain.schema.Document][source]#\nA lazy loader for document content.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nclass langchain.document_loaders.ModernTreasuryLoader(resource: str, organization_id: Optional[str] = None, api_key: Optional[str] = None)[source]#\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nclass langchain.document_loaders.NotebookLoader(path: str, include_outputs: bool = False, max_output_length: int = 10, remove_newline: bool = False, traceback: bool = False)[source]#\nLoader that loads .ipynb notebook files.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.NotionDBLoader(integration_token: str, database_id: str, request_timeout_sec: Optional[int] = 10)[source]#\nNotion DB Loader.\nReads content from pages within a Noton Database.\n:param integration_token: Notion integration token.\n:type integration_token: str\n:param database_id: Notion database id.\n:type database_id: str\n:param request_timeout_sec: Timeout for Notion requests in seconds.\n:type request_timeout_sec: int", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-23", "text": ":type request_timeout_sec: int\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents from the Notion database.\n:returns: List of documents.\n:rtype: List[Document]\nload_page(page_id: str) \u2192 langchain.schema.Document[source]#\nRead a page.\nclass langchain.document_loaders.NotionDirectoryLoader(path: str)[source]#\nLoader that loads Notion directory dump.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.ObsidianLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]#\nLoader that loads Obsidian files from disk.\nFRONT_MATTER_REGEX = re.compile('^---\\\\n(.*?)\\\\n---\\\\n', re.MULTILINE|re.DOTALL)#\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\npydantic model langchain.document_loaders.OneDriveFileLoader[source]#\nfield file: File [Required]#\nload() \u2192 List[langchain.schema.Document][source]#\nLoad Documents\npydantic model langchain.document_loaders.OneDriveLoader[source]#\nfield auth_with_token: bool = False#\nfield drive_id: str [Required]#\nfield folder_path: Optional[str] = None#\nfield object_ids: Optional[List[str]] = None#\nfield settings: langchain.document_loaders.onedrive._OneDriveSettings [Optional]#\nload() \u2192 List[langchain.schema.Document][source]#\nLoads all supported document files from the specified OneDrive drive a\nnd returns a list of Document objects.\nReturns\nA list of Document objects\nrepresenting the loaded documents.\nReturn type\nList[Document]\nRaises\nValueError \u2013 If the specified drive ID", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-24", "text": "Return type\nList[Document]\nRaises\nValueError \u2013 If the specified drive ID\ndoes not correspond to a drive in the OneDrive storage. \u2013 \nclass langchain.document_loaders.OnlinePDFLoader(file_path: str)[source]#\nLoader that loads online PDFs.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.OutlookMessageLoader(file_path: str)[source]#\nLoader that loads Outlook Message files using extract_msg.\nTeamMsgExtractor/msg-extractor\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nclass langchain.document_loaders.PDFMinerLoader(file_path: str)[source]#\nLoader that uses PDFMiner to load PDF files.\nlazy_load() \u2192 Iterator[langchain.schema.Document][source]#\nLazily lod documents.\nload() \u2192 List[langchain.schema.Document][source]#\nEagerly load the content.\nclass langchain.document_loaders.PDFMinerPDFasHTMLLoader(file_path: str)[source]#\nLoader that uses PDFMiner to load PDF files as HTML content.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad file.\nclass langchain.document_loaders.PDFPlumberLoader(file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None)[source]#\nLoader that uses pdfplumber to load PDF files.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad file.\nlangchain.document_loaders.PagedPDFSplitter#\nalias of langchain.document_loaders.pdf.PyPDFLoader", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-25", "text": "alias of langchain.document_loaders.pdf.PyPDFLoader\nclass langchain.document_loaders.PlaywrightURLLoader(urls: List[str], continue_on_failure: bool = True, headless: bool = True, remove_selectors: Optional[List[str]] = None)[source]#\nLoader that uses Playwright and to load a page and unstructured to load the html.\nThis is useful for loading pages that require javascript to render.\nurls#\nList of URLs to load.\nType\nList[str]\ncontinue_on_failure#\nIf True, continue loading other URLs on failure.\nType\nbool\nheadless#\nIf True, the browser will run in headless mode.\nType\nbool\nload() \u2192 List[langchain.schema.Document][source]#\nLoad the specified URLs using Playwright and create Document instances.\nReturns\nA list of Document instances with loaded content.\nReturn type\nList[Document]\nclass langchain.document_loaders.PsychicLoader(api_key: str, connector_id: str, connection_id: str)[source]#\nLoader that loads documents from Psychic.dev.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.PyMuPDFLoader(file_path: str)[source]#\nLoader that uses PyMuPDF to load PDF files.\nload(**kwargs: Optional[Any]) \u2192 List[langchain.schema.Document][source]#\nLoad file.\nclass langchain.document_loaders.PyPDFDirectoryLoader(path: str, glob: str = '**/[!.]*.pdf', silent_errors: bool = False, load_hidden: bool = False, recursive: bool = False)[source]#\nLoads a directory with PDF files with pypdf and chunks at character level.\nLoader also stores page numbers in metadatas.\nload() \u2192 List[langchain.schema.Document][source]#", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-26", "text": "load() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nclass langchain.document_loaders.PyPDFLoader(file_path: str)[source]#\nLoads a PDF with pypdf and chunks at character level.\nLoader also stores page numbers in metadatas.\nlazy_load() \u2192 Iterator[langchain.schema.Document][source]#\nLazy load given path as pages.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad given path as pages.\nclass langchain.document_loaders.PyPDFium2Loader(file_path: str)[source]#\nLoads a PDF with pypdfium2 and chunks at character level.\nlazy_load() \u2192 Iterator[langchain.schema.Document][source]#\nLazy load given path as pages.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad given path as pages.\nclass langchain.document_loaders.PySparkDataFrameLoader(spark_session: Optional[SparkSession] = None, df: Optional[Any] = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]#\nLoad PySpark DataFrames\nget_num_rows() \u2192 Tuple[int, int][source]#\nGets the amount of \u201cfeasible\u201d rows for the DataFrame\nlazy_load() \u2192 Iterator[langchain.schema.Document][source]#\nA lazy loader for document content.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad from the dataframe.\nclass langchain.document_loaders.PythonLoader(file_path: str)[source]#\nLoad Python files, respecting any non-default encoding if specified.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-27", "text": "Load Python files, respecting any non-default encoding if specified.\nclass langchain.document_loaders.ReadTheDocsLoader(path: Union[str, pathlib.Path], encoding: Optional[str] = None, errors: Optional[str] = None, custom_html_tag: Optional[Tuple[str, dict]] = None, **kwargs: Optional[Any])[source]#\nLoader that loads ReadTheDocs documentation directory dump.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.RedditPostsLoader(client_id: str, client_secret: str, user_agent: str, search_queries: Sequence[str], mode: str, categories: Sequence[str] = ['new'], number_posts: Optional[int] = 10)[source]#\nReddit posts loader.\nRead posts on a subreddit.\nFirst you need to go to\nhttps://www.reddit.com/prefs/apps/\nand create your application\nload() \u2192 List[langchain.schema.Document][source]#\nLoad reddits.\nclass langchain.document_loaders.RoamLoader(path: str)[source]#\nLoader that loads Roam files from disk.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.S3DirectoryLoader(bucket: str, prefix: str = '')[source]#\nLoading logic for loading documents from s3.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.S3FileLoader(bucket: str, key: str)[source]#\nLoading logic for loading documents from s3.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.SRTLoader(file_path: str)[source]#\nLoader for .srt (subtitle) files.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-28", "text": "Loader for .srt (subtitle) files.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad using pysrt file.\nclass langchain.document_loaders.SeleniumURLLoader(urls: List[str], continue_on_failure: bool = True, browser: Literal['chrome', 'firefox'] = 'chrome', binary_location: Optional[str] = None, executable_path: Optional[str] = None, headless: bool = True, arguments: List[str] = [])[source]#\nLoader that uses Selenium and to load a page and unstructured to load the html.\nThis is useful for loading pages that require javascript to render.\nurls#\nList of URLs to load.\nType\nList[str]\ncontinue_on_failure#\nIf True, continue loading other URLs on failure.\nType\nbool\nbrowser#\nThe browser to use, either \u2018chrome\u2019 or \u2018firefox\u2019.\nType\nstr\nbinary_location#\nThe location of the browser binary.\nType\nOptional[str]\nexecutable_path#\nThe path to the browser executable.\nType\nOptional[str]\nheadless#\nIf True, the browser will run in headless mode.\nType\nbool\narguments [List[str]]\nList of arguments to pass to the browser.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad the specified URLs using Selenium and create Document instances.\nReturns\nA list of Document instances with loaded content.\nReturn type\nList[Document]\nclass langchain.document_loaders.SitemapLoader(web_path: str, filter_urls: Optional[List[str]] = None, parsing_function: Optional[Callable] = None, blocksize: Optional[int] = None, blocknum: int = 0, meta_function: Optional[Callable] = None, is_local: bool = False)[source]#\nLoader that fetches a sitemap and loads those URLs.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-29", "text": "Loader that fetches a sitemap and loads those URLs.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad sitemap.\nparse_sitemap(soup: Any) \u2192 List[dict][source]#\nParse sitemap xml and load into a list of dicts.\nclass langchain.document_loaders.SlackDirectoryLoader(zip_path: str, workspace_url: Optional[str] = None)[source]#\nLoader for loading documents from a Slack directory dump.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad and return documents from the Slack directory dump.\nclass langchain.document_loaders.SnowflakeLoader(query: str, user: str, password: str, account: str, warehouse: str, role: str, database: str, schema: str, parameters: Optional[Dict[str, Any]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]#\nLoads a query result from Snowflake into a list of documents.\nEach document represents one row of the result. The page_content_columns\nare written into the page_content of the document. The metadata_columns\nare written into the metadata of the document. By default, all columns\nare written into the page_content and none into the metadata.\nlazy_load() \u2192 Iterator[langchain.schema.Document][source]#\nA lazy loader for document content.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nclass langchain.document_loaders.SpreedlyLoader(access_token: str, resource: str)[source]#\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nclass langchain.document_loaders.StripeLoader(resource: str, access_token: Optional[str] = None)[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-30", "text": "load() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.\nclass langchain.document_loaders.TelegramChatApiLoader(chat_entity: Optional[EntityLike] = None, api_id: Optional[int] = None, api_hash: Optional[str] = None, username: Optional[str] = None, file_path: str = 'telegram_data.json')[source]#\nLoader that loads Telegram chat json directory dump.\nasync fetch_data_from_telegram() \u2192 None[source]#\nFetch data from Telegram API and save it as a JSON file.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.TelegramChatFileLoader(path: str)[source]#\nLoader that loads Telegram chat json directory dump.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nlangchain.document_loaders.TelegramChatLoader#\nalias of langchain.document_loaders.telegram.TelegramChatFileLoader\nclass langchain.document_loaders.TextLoader(file_path: str, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]#\nLoad text files.\nParameters\nfile_path \u2013 Path to the file to load.\nencoding \u2013 File encoding to use. If None, the file will be loaded\nencoding. (with the default system) \u2013 \nautodetect_encoding \u2013 Whether to try to autodetect the file encoding\nif the specified encoding fails.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad from file path.\nclass langchain.document_loaders.ToMarkdownLoader(url: str, api_key: str)[source]#\nLoader that loads HTML to markdown using 2markdown.\nlazy_load() \u2192 Iterator[langchain.schema.Document][source]#\nLazily load the file.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-31", "text": "Lazily load the file.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad file.\nclass langchain.document_loaders.TomlLoader(source: Union[str, pathlib.Path])[source]#\nA TOML document loader that inherits from the BaseLoader class.\nThis class can be initialized with either a single source file or a source\ndirectory containing TOML files.\nlazy_load() \u2192 Iterator[langchain.schema.Document][source]#\nLazily load the TOML documents from the source file or directory.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad and return all documents.\nclass langchain.document_loaders.TrelloLoader(client: TrelloClient, board_name: str, *, include_card_name: bool = True, include_comments: bool = True, include_checklist: bool = True, card_filter: Literal['closed', 'open', 'all'] = 'all', extra_metadata: Tuple[str, ...] = ('due_date', 'labels', 'list', 'closed'))[source]#\nTrello loader. Reads all cards from a Trello board.\nclassmethod from_credentials(board_name: str, *, api_key: Optional[str] = None, token: Optional[str] = None, **kwargs: Any) \u2192 langchain.document_loaders.trello.TrelloLoader[source]#\nConvenience constructor that builds TrelloClient init param for you.\nParameters\nboard_name \u2013 The name of the Trello board.\napi_key \u2013 Trello API key. Can also be specified as environment variable\nTRELLO_API_KEY.\ntoken \u2013 Trello token. Can also be specified as environment variable\nTRELLO_TOKEN.\ninclude_card_name \u2013 Whether to include the name of the card in the document.\ninclude_comments \u2013 Whether to include the comments on the card in the\ndocument.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-32", "text": "include_comments \u2013 Whether to include the comments on the card in the\ndocument.\ninclude_checklist \u2013 Whether to include the checklist on the card in the\ndocument.\ncard_filter \u2013 Filter on card status. Valid values are \u201cclosed\u201d, \u201copen\u201d,\n\u201call\u201d.\nextra_metadata \u2013 List of additional metadata fields to include as document\nmetadata.Valid values are \u201cdue_date\u201d, \u201clabels\u201d, \u201clist\u201d, \u201cclosed\u201d.\nload() \u2192 List[langchain.schema.Document][source]#\nLoads all cards from the specified Trello board.\nYou can filter the cards, metadata and text included by using the optional\nparameters.\nReturns:A list of documents, one for each card in the board.\nclass langchain.document_loaders.TwitterTweetLoader(auth_handler: Union[OAuthHandler, OAuth2BearerHandler], twitter_users: Sequence[str], number_tweets: Optional[int] = 100)[source]#\nTwitter tweets loader.\nRead tweets of user twitter handle.\nFirst you need to go to\nhttps://developer.twitter.com/en/docs/twitter-api\n/getting-started/getting-access-to-the-twitter-api\nto get your token. And create a v2 version of the app.\nclassmethod from_bearer_token(oauth2_bearer_token: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) \u2192 langchain.document_loaders.twitter.TwitterTweetLoader[source]#\nCreate a TwitterTweetLoader from OAuth2 bearer token.\nclassmethod from_secrets(access_token: str, access_token_secret: str, consumer_key: str, consumer_secret: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) \u2192 langchain.document_loaders.twitter.TwitterTweetLoader[source]#\nCreate a TwitterTweetLoader from access tokens and secrets.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad tweets.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-33", "text": "load() \u2192 List[langchain.schema.Document][source]#\nLoad tweets.\nclass langchain.document_loaders.UnstructuredAPIFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]#\nLoader that uses the unstructured web API to load file IO objects.\nclass langchain.document_loaders.UnstructuredAPIFileLoader(file_path: Union[str, List[str]] = '', mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]#\nLoader that uses the unstructured web API to load files.\nclass langchain.document_loaders.UnstructuredCSVLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]#\nLoader that uses unstructured to load CSV files.\nclass langchain.document_loaders.UnstructuredEPubLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#\nLoader that uses unstructured to load epub files.\nclass langchain.document_loaders.UnstructuredEmailLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#\nLoader that uses unstructured to load email files.\nclass langchain.document_loaders.UnstructuredExcelLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]#\nLoader that uses unstructured to load Microsoft Excel files.\nclass langchain.document_loaders.UnstructuredFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', **unstructured_kwargs: Any)[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-34", "text": "Loader that uses unstructured to load file IO objects.\nclass langchain.document_loaders.UnstructuredFileLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#\nLoader that uses unstructured to load files.\nclass langchain.document_loaders.UnstructuredHTMLLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#\nLoader that uses unstructured to load HTML files.\nclass langchain.document_loaders.UnstructuredImageLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#\nLoader that uses unstructured to load image files, such as PNGs and JPGs.\nclass langchain.document_loaders.UnstructuredMarkdownLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#\nLoader that uses unstructured to load markdown files.\nclass langchain.document_loaders.UnstructuredODTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]#\nLoader that uses unstructured to load open office ODT files.\nclass langchain.document_loaders.UnstructuredPDFLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#\nLoader that uses unstructured to load PDF files.\nclass langchain.document_loaders.UnstructuredPowerPointLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#\nLoader that uses unstructured to load powerpoint files.\nclass langchain.document_loaders.UnstructuredRTFLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-35", "text": "Loader that uses unstructured to load rtf files.\nclass langchain.document_loaders.UnstructuredURLLoader(urls: List[str], continue_on_failure: bool = True, mode: str = 'single', **unstructured_kwargs: Any)[source]#\nLoader that uses unstructured to load HTML files.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad file.\nclass langchain.document_loaders.UnstructuredWordDocumentLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#\nLoader that uses unstructured to load word documents.\nclass langchain.document_loaders.UnstructuredXMLLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]#\nLoader that uses unstructured to load XML files.\nclass langchain.document_loaders.WeatherDataLoader(client: langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper, places: Sequence[str])[source]#\nWeather Reader.\nReads the forecast & current weather of any location using OpenWeatherMap\u2019s free\nAPI. Checkout \u2018https://openweathermap.org/appid\u2019 for more on how to generate a free\nOpenWeatherMap API.\nclassmethod from_params(places: Sequence[str], *, openweathermap_api_key: Optional[str] = None) \u2192 langchain.document_loaders.weather.WeatherDataLoader[source]#\nlazy_load() \u2192 Iterator[langchain.schema.Document][source]#\nLazily load weather data for the given locations.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad weather data for the given locations.\nclass langchain.document_loaders.WebBaseLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#\nLoader that uses urllib and beautiful soup to load webpages.\naload() \u2192 List[langchain.schema.Document][source]#", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-36", "text": "aload() \u2192 List[langchain.schema.Document][source]#\nLoad text from the urls in web_path async into Documents.\ndefault_parser: str = 'html.parser'#\nDefault parser to use for BeautifulSoup.\nasync fetch_all(urls: List[str]) \u2192 Any[source]#\nFetch all urls concurrently with rate limiting.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad text from the url(s) in web_path.\nrequests_kwargs: Dict[str, Any] = {}#\nkwargs for requests\nrequests_per_second: int = 2#\nMax number of concurrent requests to make.\nscrape(parser: Optional[str] = None) \u2192 Any[source]#\nScrape data from webpage and return it in BeautifulSoup format.\nscrape_all(urls: List[str], parser: Optional[str] = None) \u2192 List[Any][source]#\nFetch all urls, then return soups for all results.\nproperty web_path: str#\nweb_paths: List[str]#\nclass langchain.document_loaders.WhatsAppChatLoader(path: str)[source]#\nLoader that loads WhatsApp messages text file.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nclass langchain.document_loaders.WikipediaLoader(query: str, lang: str = 'en', load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False)[source]#\nLoads a query result from www.wikipedia.org into a list of Documents.\nThe hard limit on the number of downloaded Documents is 300 for now.\nEach wiki page represents one Document.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad data into document objects.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6a450e95639a-37", "text": "Load data into document objects.\nclass langchain.document_loaders.YoutubeLoader(video_id: str, add_video_info: bool = False, language: Union[str, Sequence[str]] = 'en', translation: str = 'en', continue_on_failure: bool = False)[source]#\nLoader that loads Youtube transcripts.\nstatic extract_video_id(youtube_url: str) \u2192 str[source]#\nExtract video id from common YT urls.\nclassmethod from_youtube_url(youtube_url: str, **kwargs: Any) \u2192 langchain.document_loaders.youtube.YoutubeLoader[source]#\nGiven youtube URL, load video.\nload() \u2192 List[langchain.schema.Document][source]#\nLoad documents.\nprevious\nText Splitter\nnext\nVector Stores\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/document_loaders.html"}
+{"id": "6e0b17fecde7-0", "text": ".rst\n.pdf\nOutput Parsers\nOutput Parsers#\npydantic model langchain.output_parsers.CommaSeparatedListOutputParser[source]#\nParse out comma separated lists.\nget_format_instructions() \u2192 str[source]#\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 List[str][source]#\nParse the output of an LLM call.\npydantic model langchain.output_parsers.DatetimeOutputParser[source]#\nfield format: str = '%Y-%m-%dT%H:%M:%S.%fZ'#\nget_format_instructions() \u2192 str[source]#\nInstructions on how the LLM output should be formatted.\nparse(response: str) \u2192 datetime.datetime[source]#\nParse the output of an LLM call.\nA method which takes in a string (assumed output of a language model )\nand parses it into some structure.\nParameters\ntext \u2013 output of language model\nReturns\nstructured output\npydantic model langchain.output_parsers.GuardrailsOutputParser[source]#\nfield guard: Any = None#\nclassmethod from_rail(rail_file: str, num_reasks: int = 1) \u2192 langchain.output_parsers.rail_parser.GuardrailsOutputParser[source]#\nclassmethod from_rail_string(rail_str: str, num_reasks: int = 1) \u2192 langchain.output_parsers.rail_parser.GuardrailsOutputParser[source]#\nget_format_instructions() \u2192 str[source]#\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 Dict[source]#\nParse the output of an LLM call.\nA method which takes in a string (assumed output of a language model )\nand parses it into some structure.\nParameters\ntext \u2013 output of language model\nReturns\nstructured output", "source": "https://python.langchain.com/en/latest/reference/modules/output_parsers.html"}
+{"id": "6e0b17fecde7-1", "text": "Parameters\ntext \u2013 output of language model\nReturns\nstructured output\npydantic model langchain.output_parsers.ListOutputParser[source]#\nClass to parse the output of an LLM call to a list.\nabstract parse(text: str) \u2192 List[str][source]#\nParse the output of an LLM call.\npydantic model langchain.output_parsers.OutputFixingParser[source]#\nWraps a parser and tries to fix parsing errors.\nfield parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T] [Required]#\nfield retry_chain: langchain.chains.llm.LLMChain [Required]#\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'instructions'], output_parser=None, partial_variables={}, template='Instructions:\\n--------------\\n{instructions}\\n--------------\\nCompletion:\\n--------------\\n{completion}\\n--------------\\n\\nAbove, the Completion did not satisfy the constraints given in the Instructions.\\nError:\\n--------------\\n{error}\\n--------------\\n\\nPlease try again. Please only respond with an answer that satisfies the constraints laid out in the Instructions:', template_format='f-string', validate_template=True)) \u2192 langchain.output_parsers.fix.OutputFixingParser[langchain.output_parsers.fix.T][source]#\nget_format_instructions() \u2192 str[source]#\nInstructions on how the LLM output should be formatted.\nparse(completion: str) \u2192 langchain.output_parsers.fix.T[source]#\nParse the output of an LLM call.\nA method which takes in a string (assumed output of a language model )\nand parses it into some structure.\nParameters\ntext \u2013 output of language model\nReturns", "source": "https://python.langchain.com/en/latest/reference/modules/output_parsers.html"}
+{"id": "6e0b17fecde7-2", "text": "and parses it into some structure.\nParameters\ntext \u2013 output of language model\nReturns\nstructured output\npydantic model langchain.output_parsers.PydanticOutputParser[source]#\nfield pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required]#\nget_format_instructions() \u2192 str[source]#\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 langchain.output_parsers.pydantic.T[source]#\nParse the output of an LLM call.\nA method which takes in a string (assumed output of a language model )\nand parses it into some structure.\nParameters\ntext \u2013 output of language model\nReturns\nstructured output\npydantic model langchain.output_parsers.RegexDictParser[source]#\nClass to parse the output into a dictionary.\nfield no_update_value: Optional[str] = None#\nfield output_key_to_format: Dict[str, str] [Required]#\nfield regex_pattern: str = \"{}:\\\\s?([^.'\\\\n']*)\\\\.?\"#\nparse(text: str) \u2192 Dict[str, str][source]#\nParse the output of an LLM call.\npydantic model langchain.output_parsers.RegexParser[source]#\nClass to parse the output into a dictionary.\nfield default_output_key: Optional[str] = None#\nfield output_keys: List[str] [Required]#\nfield regex: str [Required]#\nparse(text: str) \u2192 Dict[str, str][source]#\nParse the output of an LLM call.\npydantic model langchain.output_parsers.ResponseSchema[source]#\nfield description: str [Required]#\nfield name: str [Required]#\nfield type: str = 'string'#\npydantic model langchain.output_parsers.RetryOutputParser[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/output_parsers.html"}
+{"id": "6e0b17fecde7-3", "text": "pydantic model langchain.output_parsers.RetryOutputParser[source]#\nWraps a parser and tries to fix parsing errors.\nDoes this by passing the original prompt and the completion to another\nLLM, and telling it the completion did not satisfy criteria in the prompt.\nfield parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]#\nfield retry_chain: langchain.chains.llm.LLMChain [Required]#\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\\n{prompt}\\nCompletion:\\n{completion}\\n\\nAbove, the Completion did not satisfy the constraints given in the Prompt.\\nPlease try again:', template_format='f-string', validate_template=True)) \u2192 langchain.output_parsers.retry.RetryOutputParser[langchain.output_parsers.retry.T][source]#\nget_format_instructions() \u2192 str[source]#\nInstructions on how the LLM output should be formatted.\nparse(completion: str) \u2192 langchain.output_parsers.retry.T[source]#\nParse the output of an LLM call.\nA method which takes in a string (assumed output of a language model )\nand parses it into some structure.\nParameters\ntext \u2013 output of language model\nReturns\nstructured output\nparse_with_prompt(completion: str, prompt_value: langchain.schema.PromptValue) \u2192 langchain.output_parsers.retry.T[source]#\nOptional method to parse the output of an LLM call with a prompt.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/output_parsers.html"}
+{"id": "6e0b17fecde7-4", "text": "the prompt to do so.\nParameters\ncompletion \u2013 output of language model\nprompt \u2013 prompt value\nReturns\nstructured output\npydantic model langchain.output_parsers.RetryWithErrorOutputParser[source]#\nWraps a parser and tries to fix parsing errors.\nDoes this by passing the original prompt, the completion, AND the error\nthat was raised to another language model and telling it that the completion\ndid not work, and raised the given error. Differs from RetryOutputParser\nin that this implementation provides the error that was raised back to the\nLLM, which in theory should give it more information on how to fix it.\nfield parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]#\nfield retry_chain: langchain.chains.llm.LLMChain [Required]#\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\\n{prompt}\\nCompletion:\\n{completion}\\n\\nAbove, the Completion did not satisfy the constraints given in the Prompt.\\nDetails: {error}\\nPlease try again:', template_format='f-string', validate_template=True)) \u2192 langchain.output_parsers.retry.RetryWithErrorOutputParser[langchain.output_parsers.retry.T][source]#\nget_format_instructions() \u2192 str[source]#\nInstructions on how the LLM output should be formatted.\nparse(completion: str) \u2192 langchain.output_parsers.retry.T[source]#\nParse the output of an LLM call.\nA method which takes in a string (assumed output of a language model )\nand parses it into some structure.\nParameters\ntext \u2013 output of language model\nReturns", "source": "https://python.langchain.com/en/latest/reference/modules/output_parsers.html"}
+{"id": "6e0b17fecde7-5", "text": "and parses it into some structure.\nParameters\ntext \u2013 output of language model\nReturns\nstructured output\nparse_with_prompt(completion: str, prompt_value: langchain.schema.PromptValue) \u2192 langchain.output_parsers.retry.T[source]#\nOptional method to parse the output of an LLM call with a prompt.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 output of language model\nprompt \u2013 prompt value\nReturns\nstructured output\npydantic model langchain.output_parsers.StructuredOutputParser[source]#\nfield response_schemas: List[langchain.output_parsers.structured.ResponseSchema] [Required]#\nclassmethod from_response_schemas(response_schemas: List[langchain.output_parsers.structured.ResponseSchema]) \u2192 langchain.output_parsers.structured.StructuredOutputParser[source]#\nget_format_instructions() \u2192 str[source]#\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 Any[source]#\nParse the output of an LLM call.\nA method which takes in a string (assumed output of a language model )\nand parses it into some structure.\nParameters\ntext \u2013 output of language model\nReturns\nstructured output\nprevious\nExample Selector\nnext\nChat Prompt Templates\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/output_parsers.html"}
+{"id": "fa678718cb2d-0", "text": ".rst\n.pdf\nVector Stores\nVector Stores#\nWrappers on top of vector stores.\nclass langchain.vectorstores.AnalyticDB(connection_string: str, embedding_function: langchain.embeddings.base.Embeddings, collection_name: str = 'langchain', collection_metadata: Optional[dict] = None, pre_delete_collection: bool = False, logger: Optional[logging.Logger] = None)[source]#\nVectorStore implementation using AnalyticDB.\nAnalyticDB is a distributed full PostgresSQL syntax cloud-native database.\n- connection_string is a postgres connection string.\n- embedding_function any embedding function implementing\nlangchain.embeddings.base.Embeddings interface.\ncollection_name is the name of the collection to use. (default: langchain)\nNOTE: This is not the name of the table, but the name of the collection.The tables will be created when initializing the store (if not exists)\nSo, make sure the user has the right permissions to create tables.\npre_delete_collection if True, will delete the collection if it exists.(default: False)\n- Useful for testing.\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nkwargs \u2013 vectorstore specific parameters\nReturns\nList of ids from adding the texts into the vectorstore.\nconnect() \u2192 sqlalchemy.engine.base.Connection[source]#\nclassmethod connection_string_from_db_params(driver: str, host: str, port: int, database: str, user: str, password: str) \u2192 str[source]#\nReturn connection string from database parameters.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-1", "text": "Return connection string from database parameters.\ncreate_collection() \u2192 None[source]#\ncreate_tables_if_not_exists() \u2192 None[source]#\ndelete_collection() \u2192 None[source]#\ndrop_tables() \u2192 None[source]#\nclassmethod from_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, collection_name: str = 'langchain', ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) \u2192 langchain.vectorstores.analyticdb.AnalyticDB[source]#\nReturn VectorStore initialized from documents and embeddings.\nPostgres connection string is required\nEither pass it as a parameter\nor set the PGVECTOR_CONNECTION_STRING environment variable.\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'langchain', ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) \u2192 langchain.vectorstores.analyticdb.AnalyticDB[source]#\nReturn VectorStore initialized from texts and embeddings.\nPostgres connection string is required\nEither pass it as a parameter\nor set the PGVECTOR_CONNECTION_STRING environment variable.\nget_collection(session: sqlalchemy.orm.session.Session) \u2192 Optional[langchain.vectorstores.analyticdb.CollectionStore][source]#\nclassmethod get_connection_string(kwargs: Dict[str, Any]) \u2192 str[source]#\nsimilarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nRun similarity search with AnalyticDB with distance.\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. Defaults to 4.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-2", "text": "k (int) \u2013 Number of results to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of Documents most similar to the query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of Documents most similar to the query and score for each\nsimilarity_search_with_score_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nclass langchain.vectorstores.Annoy(embedding_function: Callable, index: Any, metric: str, docstore: langchain.docstore.base.Docstore, index_to_docstore_id: Dict[int, str])[source]#\nWrapper around Annoy vector database.\nTo use, you should have the annoy python package installed.\nExample\nfrom langchain import Annoy", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-3", "text": "Example\nfrom langchain import Annoy\ndb = Annoy(embedding_function, index, docstore, index_to_docstore_id)\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nkwargs \u2013 vectorstore specific parameters\nReturns\nList of ids from adding the texts into the vectorstore.\nclassmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, metric: str = 'angular', trees: int = 100, n_jobs: int = - 1, **kwargs: Any) \u2192 langchain.vectorstores.annoy.Annoy[source]#\nConstruct Annoy wrapper from embeddings.\nParameters\ntext_embeddings \u2013 List of tuples of (text, embedding)\nembedding \u2013 Embedding function to use.\nmetadatas \u2013 List of metadata dictionaries to associate with documents.\nmetric \u2013 Metric to use for indexing. Defaults to \u201cangular\u201d.\ntrees \u2013 Number of trees to use for indexing. Defaults to 100.\nn_jobs \u2013 Number of jobs to use for indexing. Defaults to -1\nThis is a user friendly interface that:\nCreates an in memory docstore with provided embeddings\nInitializes the Annoy database\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import Annoy\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\ntext_embeddings = embeddings.embed_documents(texts)\ntext_embedding_pairs = list(zip(texts, text_embeddings))", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-4", "text": "text_embedding_pairs = list(zip(texts, text_embeddings))\ndb = Annoy.from_embeddings(text_embedding_pairs, embeddings)\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, metric: str = 'angular', trees: int = 100, n_jobs: int = - 1, **kwargs: Any) \u2192 langchain.vectorstores.annoy.Annoy[source]#\nConstruct Annoy wrapper from raw documents.\nParameters\ntexts \u2013 List of documents to index.\nembedding \u2013 Embedding function to use.\nmetadatas \u2013 List of metadata dictionaries to associate with documents.\nmetric \u2013 Metric to use for indexing. Defaults to \u201cangular\u201d.\ntrees \u2013 Number of trees to use for indexing. Defaults to 100.\nn_jobs \u2013 Number of jobs to use for indexing. Defaults to -1.\nThis is a user friendly interface that:\nEmbeds documents.\nCreates an in memory docstore\nInitializes the Annoy database\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import Annoy\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nindex = Annoy.from_texts(texts, embeddings)\nclassmethod load_local(folder_path: str, embeddings: langchain.embeddings.base.Embeddings) \u2192 langchain.vectorstores.annoy.Annoy[source]#\nLoad Annoy index, docstore, and index_to_docstore_id to disk.\nParameters\nfolder_path \u2013 folder path to load index, docstore,\nand index_to_docstore_id from.\nembeddings \u2013 Embeddings to use when generating queries.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-5", "text": "embeddings \u2013 Embeddings to use when generating queries.\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nk \u2013 Number of Documents to return. Defaults to 4.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-6", "text": "Defaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nprocess_index_results(idxs: List[int], dists: List[float]) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nTurns annoy results into a list of documents and scores.\nParameters\nidxs \u2013 List of indices of the documents in the index.\ndists \u2013 List of distances of the documents in the index.\nReturns\nList of Documents and scores.\nsave_local(folder_path: str, prefault: bool = False) \u2192 None[source]#\nSave Annoy index, docstore, and index_to_docstore_id to disk.\nParameters\nfolder_path \u2013 folder path to save index, docstore,\nand index_to_docstore_id to.\nprefault \u2013 Whether to pre-load the index into memory.\nsimilarity_search(query: str, k: int = 4, search_k: int = - 1, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nsearch_k \u2013 inspect up to search_k nodes which defaults\nto n_trees * n if not provided\nReturns\nList of Documents most similar to the query.\nsimilarity_search_by_index(docstore_index: int, k: int = 4, search_k: int = - 1, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to docstore_index.\nParameters\ndocstore_index \u2013 Index of document in docstore\nk \u2013 Number of Documents to return. Defaults to 4.\nsearch_k \u2013 inspect up to search_k nodes which defaults\nto n_trees * n if not provided\nReturns", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-7", "text": "to n_trees * n if not provided\nReturns\nList of Documents most similar to the embedding.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, search_k: int = - 1, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nsearch_k \u2013 inspect up to search_k nodes which defaults\nto n_trees * n if not provided\nReturns\nList of Documents most similar to the embedding.\nsimilarity_search_with_score(query: str, k: int = 4, search_k: int = - 1) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nsearch_k \u2013 inspect up to search_k nodes which defaults\nto n_trees * n if not provided\nReturns\nList of Documents most similar to the query and score for each\nsimilarity_search_with_score_by_index(docstore_index: int, k: int = 4, search_k: int = - 1) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nsearch_k \u2013 inspect up to search_k nodes which defaults\nto n_trees * n if not provided\nReturns\nList of Documents most similar to the query and score for each", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-8", "text": "Returns\nList of Documents most similar to the query and score for each\nsimilarity_search_with_score_by_vector(embedding: List[float], k: int = 4, search_k: int = - 1) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nsearch_k \u2013 inspect up to search_k nodes which defaults\nto n_trees * n if not provided\nReturns\nList of Documents most similar to the query and score for each\nclass langchain.vectorstores.AtlasDB(name: str, embedding_function: Optional[langchain.embeddings.base.Embeddings] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False)[source]#\nWrapper around Atlas: Nomic\u2019s neural database and rhizomatic instrument.\nTo use, you should have the nomic python package installed.\nExample\nfrom langchain.vectorstores import AtlasDB\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nvectorstore = AtlasDB(\"my_project\", embeddings.embed_query)\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, refresh: bool = True, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Texts to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nids (Optional[List[str]]) \u2013 An optional list of ids.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-9", "text": "ids (Optional[List[str]]) \u2013 An optional list of ids.\nrefresh (bool) \u2013 Whether or not to refresh indices with the updated data.\nDefault True.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\ncreate_index(**kwargs: Any) \u2192 Any[source]#\nCreates an index in your project.\nSee\nhttps://docs.nomic.ai/atlas_api.html#nomic.project.AtlasProject.create_index\nfor full detail.\nclassmethod from_documents(documents: List[langchain.schema.Document], embedding: Optional[langchain.embeddings.base.Embeddings] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, persist_directory: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) \u2192 langchain.vectorstores.atlas.AtlasDB[source]#\nCreate an AtlasDB vectorstore from a list of documents.\nParameters\nname (str) \u2013 Name of the collection to create.\napi_key (str) \u2013 Your nomic API key,\ndocuments (List[Document]) \u2013 List of documents to add to the vectorstore.\nembedding (Optional[Embeddings]) \u2013 Embedding function. Defaults to None.\nids (Optional[List[str]]) \u2013 Optional list of document IDs. If None,\nids will be auto created\ndescription (str) \u2013 A description for your project.\nis_public (bool) \u2013 Whether your project is publicly accessible.\nTrue by default.\nreset_project_if_exists (bool) \u2013 Whether to reset this project if\nit already exists. Default False.\nGenerally userful during development and testing.\nindex_kwargs (Optional[dict]) \u2013 Dict of kwargs for index creation.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-10", "text": "index_kwargs (Optional[dict]) \u2013 Dict of kwargs for index creation.\nSee https://docs.nomic.ai/atlas_api.html\nReturns\nNomic\u2019s neural database and finest rhizomatic instrument\nReturn type\nAtlasDB\nclassmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) \u2192 langchain.vectorstores.atlas.AtlasDB[source]#\nCreate an AtlasDB vectorstore from a raw documents.\nParameters\ntexts (List[str]) \u2013 The list of texts to ingest.\nname (str) \u2013 Name of the project to create.\napi_key (str) \u2013 Your nomic API key,\nembedding (Optional[Embeddings]) \u2013 Embedding function. Defaults to None.\nmetadatas (Optional[List[dict]]) \u2013 List of metadatas. Defaults to None.\nids (Optional[List[str]]) \u2013 Optional list of document IDs. If None,\nids will be auto created\ndescription (str) \u2013 A description for your project.\nis_public (bool) \u2013 Whether your project is publicly accessible.\nTrue by default.\nreset_project_if_exists (bool) \u2013 Whether to reset this project if it\nalready exists. Default False.\nGenerally userful during development and testing.\nindex_kwargs (Optional[dict]) \u2013 Dict of kwargs for index creation.\nSee https://docs.nomic.ai/atlas_api.html\nReturns\nNomic\u2019s neural database and finest rhizomatic instrument\nReturn type\nAtlasDB", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-11", "text": "Returns\nNomic\u2019s neural database and finest rhizomatic instrument\nReturn type\nAtlasDB\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nRun similarity search with AtlasDB\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. Defaults to 4.\nReturns\nList of documents most similar to the query text.\nReturn type\nList[Document]\nclass langchain.vectorstores.AwaDB(table_name: str = 'langchain_awadb', embedding_model: Optional[Embeddings] = None, log_and_data_dir: Optional[str] = None, client: Optional[awadb.Client] = None)[source]#\nInterface implemented by AwaDB vector stores.\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\n:param texts: Iterable of strings to add to the vectorstore.\n:param metadatas: Optional list of metadatas associated with the texts.\n:param kwargs: vectorstore specific parameters\nReturns\nList of ids from adding the texts into the vectorstore.\nclassmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, table_name: str = 'langchain_awadb', logging_and_data_dir: Optional[str] = None, client: Optional[awadb.Client] = None, **kwargs: Any) \u2192 AwaDB[source]#\nCreate an AwaDB vectorstore from a raw documents.\nParameters\ntexts (List[str]) \u2013 List of texts to add to the table.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-12", "text": "Parameters\ntexts (List[str]) \u2013 List of texts to add to the table.\nembedding (Optional[Embeddings]) \u2013 Embedding function. Defaults to None.\nmetadatas (Optional[List[dict]]) \u2013 List of metadatas. Defaults to None.\ntable_name (str) \u2013 Name of the table to create.\nlogging_and_data_dir (Optional[str]) \u2013 Directory of logging and persistence.\nclient (Optional[awadb.Client]) \u2013 AwaDB client\nReturns\nAwaDB vectorstore.\nReturn type\nAwaDB\nload_local(table_name: str = 'langchain_awadb', **kwargs: Any) \u2192 bool[source]#\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, scores: Optional[list] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn docs and relevance scores, normalized on a scale from 0 to 1.\n0 is dissimilar, 1 is most similar.\nsimilarity_search_with_score(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn docs and relevance scores, normalized on a scale from 0 to 1.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-13", "text": "Return docs and relevance scores, normalized on a scale from 0 to 1.\n0 is dissimilar, 1 is most similar.\nclass langchain.vectorstores.Chroma(collection_name: str = 'langchain', embedding_function: Optional[Embeddings] = None, persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, collection_metadata: Optional[Dict] = None, client: Optional[chromadb.Client] = None)[source]#\nWrapper around ChromaDB embeddings platform.\nTo use, you should have the chromadb python package installed.\nExample\nfrom langchain.vectorstores import Chroma\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nvectorstore = Chroma(\"langchain_store\", embeddings)\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Texts to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nids (Optional[List[str]], optional) \u2013 Optional list of IDs.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\ndelete_collection() \u2192 None[source]#\nDelete the collection.\nclassmethod from_documents(documents: List[Document], embedding: Optional[Embeddings] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, client: Optional[chromadb.Client] = None, **kwargs: Any) \u2192 Chroma[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-14", "text": "Create a Chroma vectorstore from a list of documents.\nIf a persist_directory is specified, the collection will be persisted there.\nOtherwise, the data will be ephemeral in-memory.\nParameters\ncollection_name (str) \u2013 Name of the collection to create.\npersist_directory (Optional[str]) \u2013 Directory to persist the collection.\nids (Optional[List[str]]) \u2013 List of document IDs. Defaults to None.\ndocuments (List[Document]) \u2013 List of documents to add to the vectorstore.\nembedding (Optional[Embeddings]) \u2013 Embedding function. Defaults to None.\nclient_settings (Optional[chromadb.config.Settings]) \u2013 Chroma client settings\nReturns\nChroma vectorstore.\nReturn type\nChroma\nclassmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, client: Optional[chromadb.Client] = None, **kwargs: Any) \u2192 Chroma[source]#\nCreate a Chroma vectorstore from a raw documents.\nIf a persist_directory is specified, the collection will be persisted there.\nOtherwise, the data will be ephemeral in-memory.\nParameters\ntexts (List[str]) \u2013 List of texts to add to the collection.\ncollection_name (str) \u2013 Name of the collection to create.\npersist_directory (Optional[str]) \u2013 Directory to persist the collection.\nembedding (Optional[Embeddings]) \u2013 Embedding function. Defaults to None.\nmetadatas (Optional[List[dict]]) \u2013 List of metadatas. Defaults to None.\nids (Optional[List[str]]) \u2013 List of document IDs. Defaults to None.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-15", "text": "ids (Optional[List[str]]) \u2013 List of document IDs. Defaults to None.\nclient_settings (Optional[chromadb.config.Settings]) \u2013 Chroma client settings\nReturns\nChroma vectorstore.\nReturn type\nChroma\nget(include: Optional[List[str]] = None) \u2192 Dict[str, Any][source]#\nGets the collection.\nParameters\ninclude (Optional[List[str]]) \u2013 List of fields to include from db.\nDefaults to None.\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[Dict[str, str]] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[Dict[str, str]] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-16", "text": "Maximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of Documents selected by maximal marginal relevance.\npersist() \u2192 None[source]#\nPersist the collection.\nThis can be used to explicitly persist the data to disk.\nIt will also be called automatically when the object is destroyed.\nsimilarity_search(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nRun similarity search with Chroma.\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of documents most similar to the query text.\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to embedding vector.\n:param embedding: Embedding to look up documents similar to.\n:type embedding: str\n:param k: Number of Documents to return. Defaults to 4.\n:type k: int", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-17", "text": ":param k: Number of Documents to return. Defaults to 4.\n:type k: int\n:param filter: Filter by metadata. Defaults to None.\n:type filter: Optional[Dict[str, str]]\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_score(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nRun similarity search with Chroma with distance.\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of documents most similar to\nthe query text and cosine distance in float for each.\nLower score represents more similarity.\nReturn type\nList[Tuple[Document, float]]\nupdate_document(document_id: str, document: langchain.schema.Document) \u2192 None[source]#\nUpdate a document in the collection.\nParameters\ndocument_id (str) \u2013 ID of the document to update.\ndocument (Document) \u2013 Document to update.\nclass langchain.vectorstores.Clickhouse(embedding: langchain.embeddings.base.Embeddings, config: Optional[langchain.vectorstores.clickhouse.ClickhouseSettings] = None, **kwargs: Any)[source]#\nWrapper around ClickHouse vector database\nYou need a clickhouse-connect python package, and a valid account\nto connect to ClickHouse.\nClickHouse can not only search with simple vector indexes,\nit also supports complex query with multiple conditions,\nconstraints and even sub-queries.\nFor more information, please visit[ClickHouse official site](https://clickhouse.com/clickhouse)", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-18", "text": "add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, batch_size: int = 32, ids: Optional[Iterable[str]] = None, **kwargs: Any) \u2192 List[str][source]#\nInsert more texts through the embeddings and add to the VectorStore.\nParameters\ntexts \u2013 Iterable of strings to add to the VectorStore.\nids \u2013 Optional list of ids to associate with the texts.\nbatch_size \u2013 Batch size of insertion\nmetadata \u2013 Optional column data to be inserted\nReturns\nList of ids from adding the texts into the VectorStore.\ndrop() \u2192 None[source]#\nHelper function: Drop data\nescape_str(value: str) \u2192 str[source]#\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, config: Optional[langchain.vectorstores.clickhouse.ClickhouseSettings] = None, text_ids: Optional[Iterable[str]] = None, batch_size: int = 32, **kwargs: Any) \u2192 langchain.vectorstores.clickhouse.Clickhouse[source]#\nCreate ClickHouse wrapper with existing texts\nParameters\nembedding_function (Embeddings) \u2013 Function to extract text embedding\ntexts (Iterable[str]) \u2013 List or tuple of strings to be added\nconfig (ClickHouseSettings, Optional) \u2013 ClickHouse configuration\ntext_ids (Optional[Iterable], optional) \u2013 IDs for the texts.\nDefaults to None.\nbatch_size (int, optional) \u2013 Batchsize when transmitting data to ClickHouse.\nDefaults to 32.\nmetadata (List[dict], optional) \u2013 metadata to texts. Defaults to None.\ninto (Other keyword arguments will pass) \u2013 [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)\nReturns\nClickHouse Index\nproperty metadata_column: str#", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-19", "text": "Returns\nClickHouse Index\nproperty metadata_column: str#\nsimilarity_search(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nPerform a similarity search with ClickHouse\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let end-user to fill this and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nReturns\nList of Documents\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, where_str: Optional[str] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nPerform a similarity search with ClickHouse by vectors\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let end-user to fill this and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nReturns\nList of (Document, similarity)\nReturn type\nList[Document]\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) \u2192 List[Tuple[langchain.schema.Document, float]][source]#", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-20", "text": "Perform a similarity search with ClickHouse\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let end-user to fill this and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nReturns\nList of documents\nReturn type\nList[Document]\npydantic settings langchain.vectorstores.ClickhouseSettings[source]#\nClickHouse Client Configuration\nAttribute:\nclickhouse_host (str)An URL to connect to MyScale backend.Defaults to \u2018localhost\u2019.\nclickhouse_port (int) : URL port to connect with HTTP. Defaults to 8443.\nusername (str) : Username to login. Defaults to None.\npassword (str) : Password to login. Defaults to None.\nindex_type (str): index type string.\nindex_param (list): index build parameter.\nindex_query_params(dict): index query parameters.\ndatabase (str) : Database name to find the table. Defaults to \u2018default\u2019.\ntable (str) : Table name to operate on.\nDefaults to \u2018vector_table\u2019.\nmetric (str)Metric to compute distance,supported are (\u2018angular\u2019, \u2018euclidean\u2019, \u2018manhattan\u2019, \u2018hamming\u2019,\n\u2018dot\u2019). Defaults to \u2018angular\u2019.\nspotify/annoy\ncolumn_map (Dict)Column type map to project column name onto langchainsemantics. Must have keys: text, id, vector,\nmust be same size to number of columns. For example:\n.. code-block:: python\n{\u2018id\u2019: \u2018text_id\u2019,\n\u2018uuid\u2019: \u2018global_unique_id\u2019\n\u2018embedding\u2019: \u2018text_embedding\u2019,", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-21", "text": "\u2018uuid\u2019: \u2018global_unique_id\u2019\n\u2018embedding\u2019: \u2018text_embedding\u2019,\n\u2018document\u2019: \u2018text_plain\u2019,\n\u2018metadata\u2019: \u2018metadata_dictionary_in_json\u2019,\n}\nDefaults to identity map.\nShow JSON schema{\n \"title\": \"ClickhouseSettings\",", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-22", "text": "Show JSON schema{\n \"title\": \"ClickhouseSettings\",\n \"description\": \"ClickHouse Client Configuration\\n\\nAttribute:\\n clickhouse_host (str) : An URL to connect to MyScale backend.\\n Defaults to 'localhost'.\\n clickhouse_port (int) : URL port to connect with HTTP. Defaults to 8443.\\n username (str) : Username to login. Defaults to None.\\n password (str) : Password to login. Defaults to None.\\n index_type (str): index type string.\\n index_param (list): index build parameter.\\n index_query_params(dict): index query parameters.\\n database (str) : Database name to find the table. Defaults to 'default'.\\n table (str) : Table name to operate on.\\n Defaults to 'vector_table'.\\n metric (str) : Metric to compute distance,\\n supported are ('angular', 'euclidean', 'manhattan', 'hamming',\\n 'dot'). Defaults to 'angular'.\\n https://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169\\n\\n column_map (Dict) : Column type map to project column name onto langchain\\n semantics. Must have keys: `text`, `id`, `vector`,\\n must be same size to number of columns. For example:\\n .. code-block:: python\\n\\n {\\n 'id': 'text_id',\\n 'uuid': 'global_unique_id'\\n 'embedding': 'text_embedding',\\n 'document': 'text_plain',\\n 'metadata': 'metadata_dictionary_in_json',\\n }\\n\\n Defaults to identity map.\",\n \"type\": \"object\",\n \"properties\": {\n \"host\": {", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-23", "text": "\"type\": \"object\",\n \"properties\": {\n \"host\": {\n \"title\": \"Host\",\n \"default\": \"localhost\",\n \"env_names\": \"{'clickhouse_host'}\",\n \"type\": \"string\"\n },\n \"port\": {\n \"title\": \"Port\",\n \"default\": 8123,\n \"env_names\": \"{'clickhouse_port'}\",\n \"type\": \"integer\"\n },\n \"username\": {\n \"title\": \"Username\",\n \"env_names\": \"{'clickhouse_username'}\",\n \"type\": \"string\"\n },\n \"password\": {\n \"title\": \"Password\",\n \"env_names\": \"{'clickhouse_password'}\",\n \"type\": \"string\"\n },\n \"index_type\": {\n \"title\": \"Index Type\",\n \"default\": \"annoy\",\n \"env_names\": \"{'clickhouse_index_type'}\",\n \"type\": \"string\"\n },\n \"index_param\": {\n \"title\": \"Index Param\",\n \"default\": [\n 100,\n \"'L2Distance'\"\n ],\n \"env_names\": \"{'clickhouse_index_param'}\",\n \"anyOf\": [\n {\n \"type\": \"array\",\n \"items\": {}\n },\n {\n \"type\": \"object\"\n }\n ]\n },\n \"index_query_params\": {\n \"title\": \"Index Query Params\",\n \"default\": {},\n \"env_names\": \"{'clickhouse_index_query_params'}\",\n \"type\": \"object\",\n \"additionalProperties\": {\n \"type\": \"string\"\n }\n },", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-24", "text": "\"type\": \"string\"\n }\n },\n \"column_map\": {\n \"title\": \"Column Map\",\n \"default\": {\n \"id\": \"id\",\n \"uuid\": \"uuid\",\n \"document\": \"document\",\n \"embedding\": \"embedding\",\n \"metadata\": \"metadata\"\n },\n \"env_names\": \"{'clickhouse_column_map'}\",\n \"type\": \"object\",\n \"additionalProperties\": {\n \"type\": \"string\"\n }\n },\n \"database\": {\n \"title\": \"Database\",\n \"default\": \"default\",\n \"env_names\": \"{'clickhouse_database'}\",\n \"type\": \"string\"\n },\n \"table\": {\n \"title\": \"Table\",\n \"default\": \"langchain\",\n \"env_names\": \"{'clickhouse_table'}\",\n \"type\": \"string\"\n },\n \"metric\": {\n \"title\": \"Metric\",\n \"default\": \"angular\",\n \"env_names\": \"{'clickhouse_metric'}\",\n \"type\": \"string\"\n }\n },\n \"additionalProperties\": false\n}\nConfig\nenv_file: str = .env\nenv_file_encoding: str = utf-8\nenv_prefix: str = clickhouse_\nFields\ncolumn_map (Dict[str, str])\ndatabase (str)\nhost (str)\nindex_param (Optional[Union[List, Dict]])\nindex_query_params (Dict[str, str])\nindex_type (str)\nmetric (str)\npassword (Optional[str])\nport (int)\ntable (str)\nusername (Optional[str])", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-25", "text": "port (int)\ntable (str)\nusername (Optional[str])\nfield column_map: Dict[str, str] = {'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata': 'metadata', 'uuid': 'uuid'}#\nfield database: str = 'default'#\nfield host: str = 'localhost'#\nfield index_param: Optional[Union[List, Dict]] = [100, \"'L2Distance'\"]#\nfield index_query_params: Dict[str, str] = {}#\nfield index_type: str = 'annoy'#\nfield metric: str = 'angular'#\nfield password: Optional[str] = None#\nfield port: int = 8123#\nfield table: str = 'langchain'#\nfield username: Optional[str] = None#\nclass langchain.vectorstores.DeepLake(dataset_path: str = './deeplake/', token: Optional[str] = None, embedding_function: Optional[langchain.embeddings.base.Embeddings] = None, read_only: Optional[bool] = False, ingestion_batch_size: int = 1024, num_workers: int = 0, verbose: bool = True, **kwargs: Any)[source]#\nWrapper around Deep Lake, a data lake for deep learning applications.\nWe implement naive similarity search and filtering for fast prototyping,\nbut it can be extended with Tensor Query Language (TQL) for production use cases\nover billion rows.\nWhy Deep Lake?\nNot only stores embeddings, but also the original data with version control.\nServerless, doesn\u2019t require another service and can be used with majorcloud providers (S3, GCS, etc.)\nMore than just a multi-modal vector store. You can use the datasetto fine-tune your own LLM models.\nTo use, you should have the deeplake python package installed.\nExample", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-26", "text": "To use, you should have the deeplake python package installed.\nExample\nfrom langchain.vectorstores import DeepLake\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nvectorstore = DeepLake(\"langchain_store\", embeddings.embed_query)\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Texts to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nids (Optional[List[str]], optional) \u2013 Optional list of IDs.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\ndelete(ids: Any[List[str], None] = None, filter: Any[Dict[str, str], None] = None, delete_all: Any[bool, None] = None) \u2192 bool[source]#\nDelete the entities in the dataset\nParameters\nids (Optional[List[str]], optional) \u2013 The document_ids to delete.\nDefaults to None.\nfilter (Optional[Dict[str, str]], optional) \u2013 The filter to delete by.\nDefaults to None.\ndelete_all (Optional[bool], optional) \u2013 Whether to drop the dataset.\nDefaults to None.\ndelete_dataset() \u2192 None[source]#\nDelete the collection.\nclassmethod force_delete_by_path(path: str) \u2192 None[source]#\nForce delete dataset by path", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-27", "text": "Force delete dataset by path\nclassmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, dataset_path: str = './deeplake/', **kwargs: Any) \u2192 langchain.vectorstores.deeplake.DeepLake[source]#\nCreate a Deep Lake dataset from a raw documents.\nIf a dataset_path is specified, the dataset will be persisted in that location,\notherwise by default at ./deeplake\nParameters\npath (str, pathlib.Path) \u2013 \nThe full path to the dataset. Can be:\nDeep Lake cloud path of the form hub://username/dataset_name.To write to Deep Lake cloud datasets,\nensure that you are logged in to Deep Lake\n(use \u2018activeloop login\u2019 from command line)\nAWS S3 path of the form s3://bucketname/path/to/dataset.Credentials are required in either the environment\nGoogle Cloud Storage path of the formgcs://bucketname/path/to/dataset Credentials are required\nin either the environment\nLocal file system path of the form ./path/to/dataset or~/path/to/dataset or path/to/dataset.\nIn-memory path of the form mem://path/to/dataset which doesn\u2019tsave the dataset, but keeps it in memory instead.\nShould be used only for testing as it does not persist.\ndocuments (List[Document]) \u2013 List of documents to add.\nembedding (Optional[Embeddings]) \u2013 Embedding function. Defaults to None.\nmetadatas (Optional[List[dict]]) \u2013 List of metadatas. Defaults to None.\nids (Optional[List[str]]) \u2013 List of document IDs. Defaults to None.\nReturns\nDeep Lake dataset.\nReturn type\nDeepLake", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-28", "text": "Returns\nDeep Lake dataset.\nReturn type\nDeepLake\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\n:param query: Text to look up documents similar to.\n:param k: Number of Documents to return. Defaults to 4.\n:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n:param lambda_mult: Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\npersist() \u2192 None[source]#\nPersist the collection.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-29", "text": "persist() \u2192 None[source]#\nPersist the collection.\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to query.\nParameters\nquery \u2013 text to embed and run the query on.\nk \u2013 Number of Documents to return.\nDefaults to 4.\nquery \u2013 Text to look up documents similar to.\nembedding \u2013 Embedding function to use.\nDefaults to None.\nk \u2013 Number of Documents to return.\nDefaults to 4.\ndistance_metric \u2013 L2 for Euclidean, L1 for Nuclear, max\nL-infinity distance, cos for cosine similarity, \u2018dot\u2019 for dot product\nDefaults to L2.\nfilter \u2013 Attribute filter by metadata example {\u2018key\u2019: \u2018value\u2019}.\nDefaults to None.\nmaximal_marginal_relevance \u2013 Whether to use maximal marginal relevance.\nDefaults to False.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nDefaults to 20.\nreturn_score \u2013 Whether to return the score. Defaults to False.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_score(query: str, distance_metric: str = 'L2', k: int = 4, filter: Optional[Dict[str, str]] = None) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nRun similarity search with Deep Lake with distance returned.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-30", "text": "Run similarity search with Deep Lake with distance returned.\nParameters\nquery (str) \u2013 Query text to search for.\ndistance_metric \u2013 L2 for Euclidean, L1 for Nuclear, max L-infinity\ndistance, cos for cosine similarity, \u2018dot\u2019 for dot product.\nDefaults to L2.\nk (int) \u2013 Number of results to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of documents most similar to the querytext with distance in float.\nReturn type\nList[Tuple[Document, float]]\nclass langchain.vectorstores.DocArrayHnswSearch(doc_index: BaseDocIndex, embedding: langchain.embeddings.base.Embeddings)[source]#\nWrapper around HnswLib storage.\nTo use it, you should have the docarray package with version >=0.32.0 installed.\nYou can install it with pip install \u201clangchain[docarray]\u201d.\nclassmethod from_params(embedding: langchain.embeddings.base.Embeddings, work_dir: str, n_dim: int, dist_metric: Literal['cosine', 'ip', 'l2'] = 'cosine', max_elements: int = 1024, index: bool = True, ef_construction: int = 200, ef: int = 10, M: int = 16, allow_replace_deleted: bool = True, num_threads: int = 1, **kwargs: Any) \u2192 langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch[source]#\nInitialize DocArrayHnswSearch store.\nParameters\nembedding (Embeddings) \u2013 Embedding function.\nwork_dir (str) \u2013 path to the location where all the data will be stored.\nn_dim (int) \u2013 dimension of an embedding.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-31", "text": "n_dim (int) \u2013 dimension of an embedding.\ndist_metric (str) \u2013 Distance metric for DocArrayHnswSearch can be one of:\n\u201ccosine\u201d, \u201cip\u201d, and \u201cl2\u201d. Defaults to \u201ccosine\u201d.\nmax_elements (int) \u2013 Maximum number of vectors that can be stored.\nDefaults to 1024.\nindex (bool) \u2013 Whether an index should be built for this field.\nDefaults to True.\nef_construction (int) \u2013 defines a construction time/accuracy trade-off.\nDefaults to 200.\nef (int) \u2013 parameter controlling query time/accuracy trade-off.\nDefaults to 10.\nM (int) \u2013 parameter that defines the maximum number of outgoing\nconnections in the graph. Defaults to 16.\nallow_replace_deleted (bool) \u2013 Enables replacing of deleted elements\nwith new added ones. Defaults to True.\nnum_threads (int) \u2013 Sets the number of cpu threads to use. Defaults to 1.\n**kwargs \u2013 Other keyword arguments to be passed to the get_doc_cls method.\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, work_dir: Optional[str] = None, n_dim: Optional[int] = None, **kwargs: Any) \u2192 langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch[source]#\nCreate an DocArrayHnswSearch store and insert data.\nParameters\ntexts (List[str]) \u2013 Text data.\nembedding (Embeddings) \u2013 Embedding function.\nmetadatas (Optional[List[dict]]) \u2013 Metadata for each text if it exists.\nDefaults to None.\nwork_dir (str) \u2013 path to the location where all the data will be stored.\nn_dim (int) \u2013 dimension of an embedding.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-32", "text": "n_dim (int) \u2013 dimension of an embedding.\n**kwargs \u2013 Other keyword arguments to be passed to the __init__ method.\nReturns\nDocArrayHnswSearch Vector Store\nclass langchain.vectorstores.DocArrayInMemorySearch(doc_index: BaseDocIndex, embedding: langchain.embeddings.base.Embeddings)[source]#\nWrapper around in-memory storage for exact search.\nTo use it, you should have the docarray package with version >=0.32.0 installed.\nYou can install it with pip install \u201clangchain[docarray]\u201d.\nclassmethod from_params(embedding: langchain.embeddings.base.Embeddings, metric: Literal['cosine_sim', 'euclidian_dist', 'sgeuclidean_dist'] = 'cosine_sim', **kwargs: Any) \u2192 langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch[source]#\nInitialize DocArrayInMemorySearch store.\nParameters\nembedding (Embeddings) \u2013 Embedding function.\nmetric (str) \u2013 metric for exact nearest-neighbor search.\nCan be one of: \u201ccosine_sim\u201d, \u201ceuclidean_dist\u201d and \u201csqeuclidean_dist\u201d.\nDefaults to \u201ccosine_sim\u201d.\n**kwargs \u2013 Other keyword arguments to be passed to the get_doc_cls method.\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, **kwargs: Any) \u2192 langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch[source]#\nCreate an DocArrayInMemorySearch store and insert data.\nParameters\ntexts (List[str]) \u2013 Text data.\nembedding (Embeddings) \u2013 Embedding function.\nmetadatas (Optional[List[Dict[Any, Any]]]) \u2013 Metadata for each text\nif it exists. Defaults to None.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-33", "text": "if it exists. Defaults to None.\nmetric (str) \u2013 metric for exact nearest-neighbor search.\nCan be one of: \u201ccosine_sim\u201d, \u201ceuclidean_dist\u201d and \u201csqeuclidean_dist\u201d.\nDefaults to \u201ccosine_sim\u201d.\nReturns\nDocArrayInMemorySearch Vector Store\nclass langchain.vectorstores.ElasticVectorSearch(elasticsearch_url: str, index_name: str, embedding: langchain.embeddings.base.Embeddings, *, ssl_verify: Optional[Dict[str, Any]] = None)[source]#\nWrapper around Elasticsearch as a vector database.\nTo connect to an Elasticsearch instance that does not require\nlogin credentials, pass the Elasticsearch URL and index name along with the\nembedding object to the constructor.\nExample\nfrom langchain import ElasticVectorSearch\nfrom langchain.embeddings import OpenAIEmbeddings\nembedding = OpenAIEmbeddings()\nelastic_vector_search = ElasticVectorSearch(\n elasticsearch_url=\"http://localhost:9200\",\n index_name=\"test_index\",\n embedding=embedding\n)\nTo connect to an Elasticsearch instance that requires login credentials,\nincluding Elastic Cloud, use the Elasticsearch URL format\nhttps://username:password@es_host:9243. For example, to connect to Elastic\nCloud, create the Elasticsearch URL with the required authentication details and\npass it to the ElasticVectorSearch constructor as the named parameter\nelasticsearch_url.\nYou can obtain your Elastic Cloud URL and login credentials by logging in to the\nElastic Cloud console at https://cloud.elastic.co, selecting your deployment, and\nnavigating to the \u201cDeployments\u201d page.\nTo obtain your Elastic Cloud password for the default \u201celastic\u201d user:\nLog in to the Elastic Cloud console at https://cloud.elastic.co\nGo to \u201cSecurity\u201d > \u201cUsers\u201d\nLocate the \u201celastic\u201d user and click \u201cEdit\u201d\nClick \u201cReset password\u201d", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-34", "text": "Locate the \u201celastic\u201d user and click \u201cEdit\u201d\nClick \u201cReset password\u201d\nFollow the prompts to reset the password\nThe format for Elastic Cloud URLs is\nhttps://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.\nExample\nfrom langchain import ElasticVectorSearch\nfrom langchain.embeddings import OpenAIEmbeddings\nembedding = OpenAIEmbeddings()\nelastic_host = \"cluster_id.region_id.gcp.cloud.es.io\"\nelasticsearch_url = f\"https://username:password@{elastic_host}:9243\"\nelastic_vector_search = ElasticVectorSearch(\n elasticsearch_url=elasticsearch_url,\n index_name=\"test_index\",\n embedding=embedding\n)\nParameters\nelasticsearch_url (str) \u2013 The URL for the Elasticsearch instance.\nindex_name (str) \u2013 The name of the Elasticsearch index for the embeddings.\nembedding (Embeddings) \u2013 An object that provides the ability to embed text.\nIt should be an instance of a class that subclasses the Embeddings\nabstract base class, such as OpenAIEmbeddings()\nRaises\nValueError \u2013 If the elasticsearch python package is not installed.\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, refresh_indices: bool = True, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nrefresh_indices \u2013 bool to refresh ElasticSearch indices\nReturns\nList of ids from adding the texts into the vectorstore.\nclient_search(client: Any, index_name: str, script_query: Dict, size: int) \u2192 Any[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-35", "text": "create_index(client: Any, index_name: str, mapping: Dict) \u2192 None[source]#\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, elasticsearch_url: Optional[str] = None, index_name: Optional[str] = None, refresh_indices: bool = True, **kwargs: Any) \u2192 langchain.vectorstores.elastic_vector_search.ElasticVectorSearch[source]#\nConstruct ElasticVectorSearch wrapper from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nCreates a new index for the embeddings in the Elasticsearch instance.\nAdds the documents to the newly created Elasticsearch index.\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import ElasticVectorSearch\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nelastic_vector_search = ElasticVectorSearch.from_texts(\n texts,\n embeddings,\n elasticsearch_url=\"http://localhost:9200\"\n)\nsimilarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query.\nsimilarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn docs most similar to query.\n:param query: Text to look up documents similar to.\n:param k: Number of Documents to return. Defaults to 4.\nReturns", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-36", "text": ":param k: Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query.\nclass langchain.vectorstores.FAISS(embedding_function: typing.Callable, index: typing.Any, docstore: langchain.docstore.base.Docstore, index_to_docstore_id: typing.Dict[int, str], relevance_score_fn: typing.Optional[typing.Callable[[float], float]] = , normalize_L2: bool = False)[source]#\nWrapper around FAISS vector database.\nTo use, you should have the faiss python package installed.\nExample\nfrom langchain import FAISS\nfaiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)\nadd_embeddings(text_embeddings: Iterable[Tuple[str, List[float]]], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntext_embeddings \u2013 Iterable pairs of string and embedding to\nadd to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nids \u2013 Optional list of unique IDs.\nReturns\nList of ids from adding the texts into the vectorstore.\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nids \u2013 Optional list of unique IDs.\nReturns\nList of ids from adding the texts into the vectorstore.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-37", "text": "Returns\nList of ids from adding the texts into the vectorstore.\nclassmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 langchain.vectorstores.faiss.FAISS[source]#\nConstruct FAISS wrapper from raw documents.\nThis is a user friendly interface that:\nEmbeds documents.\nCreates an in memory docstore\nInitializes the FAISS database\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import FAISS\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\ntext_embeddings = embeddings.embed_documents(texts)\ntext_embedding_pairs = list(zip(texts, text_embeddings))\nfaiss = FAISS.from_embeddings(text_embedding_pairs, embeddings)\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 langchain.vectorstores.faiss.FAISS[source]#\nConstruct FAISS wrapper from raw documents.\nThis is a user friendly interface that:\nEmbeds documents.\nCreates an in memory docstore\nInitializes the FAISS database\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import FAISS\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nfaiss = FAISS.from_texts(texts, embeddings)\nclassmethod load_local(folder_path: str, embeddings: langchain.embeddings.base.Embeddings, index_name: str = 'index') \u2192 langchain.vectorstores.faiss.FAISS[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-38", "text": "Load FAISS index, docstore, and index_to_docstore_id from disk.\nParameters\nfolder_path \u2013 folder path to load index, docstore,\nand index_to_docstore_id from.\nembeddings \u2013 Embeddings to use when generating queries\nindex_name \u2013 for saving with a specific index file name\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-39", "text": "of diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmerge_from(target: langchain.vectorstores.faiss.FAISS) \u2192 None[source]#\nMerge another FAISS object with the current one.\nAdd the target FAISS to the current one.\nParameters\ntarget \u2013 FAISS object you wish to merge into the current one\nReturns\nNone.\nsave_local(folder_path: str, index_name: str = 'index') \u2192 None[source]#\nSave FAISS index, docstore, and index_to_docstore_id to disk.\nParameters\nfolder_path \u2013 folder path to save index, docstore,\nand index_to_docstore_id to.\nindex_name \u2013 for saving with a specific index file name\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the embedding.\nsimilarity_search_with_score(query: str, k: int = 4) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-40", "text": "Parameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of documents most similar to the query text with\nL2 distance in float. Lower score represents more similarity.\nsimilarity_search_with_score_by_vector(embedding: List[float], k: int = 4) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn docs most similar to query.\nParameters\nembedding \u2013 Embedding vector to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of documents most similar to the query text and L2 distance\nin float for each. Lower score represents more similarity.\nclass langchain.vectorstores.LanceDB(connection: Any, embedding: langchain.embeddings.base.Embeddings, vector_key: Optional[str] = 'vector', id_key: Optional[str] = 'id', text_key: Optional[str] = 'text')[source]#\nWrapper around LanceDB vector database.\nTo use, you should have lancedb python package installed.\nExample\ndb = lancedb.connect('./lancedb')\ntable = db.open_table('my_table')\nvectorstore = LanceDB(table, embedding_function)\nvectorstore.add_texts(['text1', 'text2'])\nresult = vectorstore.similarity_search('text1')\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str][source]#\nTurn texts into embedding and add it to the database\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nids \u2013 Optional list of ids to associate with the texts.\nReturns\nList of ids of the added texts.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-41", "text": "Returns\nList of ids of the added texts.\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, connection: Any = None, vector_key: Optional[str] = 'vector', id_key: Optional[str] = 'id', text_key: Optional[str] = 'text', **kwargs: Any) \u2192 langchain.vectorstores.lancedb.LanceDB[source]#\nReturn VectorStore initialized from texts and embeddings.\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn documents most similar to the query\nParameters\nquery \u2013 String to query the vectorstore with.\nk \u2013 Number of documents to return.\nReturns\nList of documents most similar to the query.\nclass langchain.vectorstores.MatchingEngine(project_id: str, index: MatchingEngineIndex, endpoint: MatchingEngineIndexEndpoint, embedding: Embeddings, gcs_client: storage.Client, gcs_bucket_name: str, credentials: Optional[Credentials] = None)[source]#\nVertex Matching Engine implementation of the vector store.\nWhile the embeddings are stored in the Matching Engine, the embedded\ndocuments will be stored in GCS.\nAn existing Index and corresponding Endpoint are preconditions for\nusing this module.\nSee usage in docs/modules/indexes/vectorstores/examples/matchingengine.ipynb\nNote that this implementation is mostly meant for reading if you are\nplanning to do a real time implementation. While reading is a real time\noperation, updating the index takes close to one hour.\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-42", "text": "Run more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nkwargs \u2013 vectorstore specific parameters.\nReturns\nList of ids from adding the texts into the vectorstore.\nclassmethod from_components(project_id: str, region: str, gcs_bucket_name: str, index_id: str, endpoint_id: str, credentials_path: Optional[str] = None, embedding: Optional[langchain.embeddings.base.Embeddings] = None) \u2192 langchain.vectorstores.matching_engine.MatchingEngine[source]#\nTakes the object creation out of the constructor.\nParameters\nproject_id \u2013 The GCP project id.\nregion \u2013 The default location making the API calls. It must have\nregional. (the same location as the GCS bucket and must be) \u2013 \ngcs_bucket_name \u2013 The location where the vectors will be stored in\ncreated. (order for the index to be) \u2013 \nindex_id \u2013 The id of the created index.\nendpoint_id \u2013 The id of the created endpoint.\ncredentials_path \u2013 (Optional) The path of the Google credentials on\nsystem. (the local file) \u2013 \nembedding \u2013 The Embeddings that will be used for\ntexts. (embedding the) \u2013 \nReturns\nA configured MatchingEngine with the texts added to the index.\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 langchain.vectorstores.matching_engine.MatchingEngine[source]#\nUse from components instead.\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to query.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-43", "text": "Return docs most similar to query.\nParameters\nquery \u2013 The string that will be used to search for similar documents.\nk \u2013 The amount of neighbors that will be retrieved.\nReturns\nA list of k matching documents.\nclass langchain.vectorstores.Milvus(embedding_function: langchain.embeddings.base.Embeddings, collection_name: str = 'LangChainCollection', connection_args: Optional[dict[str, Any]] = None, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: Optional[bool] = False)[source]#\nWrapper around the Milvus vector database.\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, timeout: Optional[int] = None, batch_size: int = 1000, **kwargs: Any) \u2192 List[str][source]#\nInsert text data into Milvus.\nInserting data when the collection has not be made yet will result\nin creating a new Collection. The data of the first entity decides\nthe schema of the new collection, the dim is extracted from the first\nembedding and the columns are decided by the first metadata dict.\nMetada keys will need to be present for all inserted values. At\nthe moment there is no None equivalent in Milvus.\nParameters\ntexts (Iterable[str]) \u2013 The texts to embed, it is assumed\nthat they all fit in memory.\nmetadatas (Optional[List[dict]]) \u2013 Metadata dicts attached to each of\nthe texts. Defaults to None.\ntimeout (Optional[int]) \u2013 Timeout for each batch insert. Defaults\nto None.\nbatch_size (int, optional) \u2013 Batch size to use for insertion.\nDefaults to 1000.\nRaises\nMilvusException \u2013 Failure to add texts\nReturns\nThe resulting keys for each inserted element.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-44", "text": "Returns\nThe resulting keys for each inserted element.\nReturn type\nList[str]\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'LangChainCollection', connection_args: dict[str, Any] = {'host': 'localhost', 'password': '', 'port': '19530', 'secure': False, 'user': ''}, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: bool = False, **kwargs: Any) \u2192 langchain.vectorstores.milvus.Milvus[source]#\nCreate a Milvus collection, indexes it with HNSW, and insert data.\nParameters\ntexts (List[str]) \u2013 Text data.\nembedding (Embeddings) \u2013 Embedding function.\nmetadatas (Optional[List[dict]]) \u2013 Metadata for each text if it exists.\nDefaults to None.\ncollection_name (str, optional) \u2013 Collection name to use. Defaults to\n\u201cLangChainCollection\u201d.\nconnection_args (dict[str, Any], optional) \u2013 Connection args to use. Defaults\nto DEFAULT_MILVUS_CONNECTION.\nconsistency_level (str, optional) \u2013 Which consistency level to use. Defaults\nto \u201cSession\u201d.\nindex_params (Optional[dict], optional) \u2013 Which index_params to use. Defaults\nto None.\nsearch_params (Optional[dict], optional) \u2013 Which search params to use.\nDefaults to None.\ndrop_old (Optional[bool], optional) \u2013 Whether to drop the collection with\nthat name if it exists. Defaults to False.\nReturns\nMilvus Vector Store\nReturn type\nMilvus", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-45", "text": "Returns\nMilvus Vector Store\nReturn type\nMilvus\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nPerform a search and return results that are reordered by MMR.\nParameters\nquery (str) \u2013 The text being searched.\nk (int, optional) \u2013 How many results to give. Defaults to 4.\nfetch_k (int, optional) \u2013 Total results to select k from.\nDefaults to 20.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5\nparam (dict, optional) \u2013 The search params for the specified index.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs \u2013 Collection.search() keyword arguments.\nReturns\nDocument results for search.\nReturn type\nList[Document]\nmax_marginal_relevance_search_by_vector(embedding: list[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nPerform a search and return results that are reordered by MMR.\nParameters\nembedding (str) \u2013 The embedding vector being searched.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-46", "text": "Parameters\nembedding (str) \u2013 The embedding vector being searched.\nk (int, optional) \u2013 How many results to give. Defaults to 4.\nfetch_k (int, optional) \u2013 Total results to select k from.\nDefaults to 20.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5\nparam (dict, optional) \u2013 The search params for the specified index.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs \u2013 Collection.search() keyword arguments.\nReturns\nDocument results for search.\nReturn type\nList[Document]\nsimilarity_search(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nPerform a similarity search against the query string.\nParameters\nquery (str) \u2013 The text to search.\nk (int, optional) \u2013 How many results to return. Defaults to 4.\nparam (dict, optional) \u2013 The search params for the index type.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs \u2013 Collection.search() keyword arguments.\nReturns\nDocument results for search.\nReturn type\nList[Document]", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-47", "text": "Returns\nDocument results for search.\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nPerform a similarity search against the query string.\nParameters\nembedding (List[float]) \u2013 The embedding vector to search.\nk (int, optional) \u2013 How many results to return. Defaults to 4.\nparam (dict, optional) \u2013 The search params for the index type.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs \u2013 Collection.search() keyword arguments.\nReturns\nDocument results for search.\nReturn type\nList[Document]\nsimilarity_search_with_score(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nPerform a search on a query string and return results with score.\nFor more information about the search parameters, take a look at the pymilvus\ndocumentation found here:\nhttps://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md\nParameters\nquery (str) \u2013 The text being searched.\nk (int, optional) \u2013 The amount of results ot return. Defaults to 4.\nparam (dict) \u2013 The search params for the specified index.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. Defaults to None.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-48", "text": "Defaults to None.\nexpr (str, optional) \u2013 Filtering expression. Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs \u2013 Collection.search() keyword arguments.\nReturn type\nList[float], List[Tuple[Document, any, any]]\nsimilarity_search_with_score_by_vector(embedding: List[float], k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nPerform a search on a query string and return results with score.\nFor more information about the search parameters, take a look at the pymilvus\ndocumentation found here:\nhttps://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md\nParameters\nembedding (List[float]) \u2013 The embedding vector being searched.\nk (int, optional) \u2013 The amount of results ot return. Defaults to 4.\nparam (dict) \u2013 The search params for the specified index.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs \u2013 Collection.search() keyword arguments.\nReturns\nResult doc and score.\nReturn type\nList[Tuple[Document, float]]\nclass langchain.vectorstores.MongoDBAtlasVectorSearch(collection: Collection[MongoDBDocumentType], embedding: Embeddings, *, index_name: str = 'default', text_key: str = 'text', embedding_key: str = 'embedding')[source]#\nWrapper around MongoDB Atlas Vector Search.\nTo use, you should have both:\n- the pymongo python package installed", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-49", "text": "To use, you should have both:\n- the pymongo python package installed\n- a connection string associated with a MongoDB Atlas Cluster having deployed an\nAtlas Search index\nExample\nfrom langchain.vectorstores import MongoDBAtlasVectorSearch\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom pymongo import MongoClient\nmongo_client = MongoClient(\"\")\ncollection = mongo_client[\"\"][\"\"]\nembeddings = OpenAIEmbeddings()\nvectorstore = MongoDBAtlasVectorSearch(collection, embeddings)\nadd_texts(texts: Iterable[str], metadatas: Optional[List[Dict[str, Any]]] = None, **kwargs: Any) \u2192 List[source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nReturns\nList of ids from adding the texts into the vectorstore.\nclassmethod from_connection_string(connection_string: str, namespace: str, embedding: langchain.embeddings.base.Embeddings, **kwargs: Any) \u2192 langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch[source]#\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, collection: Optional[Collection[MongoDBDocumentType]] = None, **kwargs: Any) \u2192 MongoDBAtlasVectorSearch[source]#\nConstruct MongoDBAtlasVectorSearch wrapper from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nAdds the documents to a provided MongoDB Atlas Vector Search index(Lucene)\nThis is intended to be a quick way to get started.\nExample", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-50", "text": "This is intended to be a quick way to get started.\nExample\nsimilarity_search(query: str, k: int = 4, pre_filter: Optional[dict] = None, post_filter_pipeline: Optional[List[Dict]] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn MongoDB documents most similar to query.\nUse the knnBeta Operator available in MongoDB Atlas Search\nThis feature is in early access and available only for evaluation purposes, to\nvalidate functionality, and to gather feedback from a small closed group of\nearly access users. It is not recommended for production deployments as we may\nintroduce breaking changes.\nFor more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Optional Number of Documents to return. Defaults to 4.\npre_filter \u2013 Optional Dictionary of argument(s) to prefilter on document\nfields.\npost_filter_pipeline \u2013 Optional Pipeline of MongoDB aggregation stages\nfollowing the knnBeta search.\nReturns\nList of Documents most similar to the query and score for each\nsimilarity_search_with_score(query: str, *, k: int = 4, pre_filter: Optional[dict] = None, post_filter_pipeline: Optional[List[Dict]] = None) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn MongoDB documents most similar to query, along with scores.\nUse the knnBeta Operator available in MongoDB Atlas Search\nThis feature is in early access and available only for evaluation purposes, to\nvalidate functionality, and to gather feedback from a small closed group of\nearly access users. It is not recommended for production deployments as we\nmay introduce breaking changes.\nFor more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta\nParameters\nquery \u2013 Text to look up documents similar to.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-51", "text": "Parameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Optional Number of Documents to return. Defaults to 4.\npre_filter \u2013 Optional Dictionary of argument(s) to prefilter on document\nfields.\npost_filter_pipeline \u2013 Optional Pipeline of MongoDB aggregation stages\nfollowing the knnBeta search.\nReturns\nList of Documents most similar to the query and score for each\nclass langchain.vectorstores.MyScale(embedding: langchain.embeddings.base.Embeddings, config: Optional[langchain.vectorstores.myscale.MyScaleSettings] = None, **kwargs: Any)[source]#\nWrapper around MyScale vector database\nYou need a clickhouse-connect python package, and a valid account\nto connect to MyScale.\nMyScale can not only search with simple vector indexes,\nit also supports complex query with multiple conditions,\nconstraints and even sub-queries.\nFor more information, please visit[myscale official site](https://docs.myscale.com/en/overview/)\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, batch_size: int = 32, ids: Optional[Iterable[str]] = None, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nids \u2013 Optional list of ids to associate with the texts.\nbatch_size \u2013 Batch size of insertion\nmetadata \u2013 Optional column data to be inserted\nReturns\nList of ids from adding the texts into the vectorstore.\ndrop() \u2192 None[source]#\nHelper function: Drop data\nescape_str(value: str) \u2192 str[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-52", "text": "Helper function: Drop data\nescape_str(value: str) \u2192 str[source]#\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, config: Optional[langchain.vectorstores.myscale.MyScaleSettings] = None, text_ids: Optional[Iterable[str]] = None, batch_size: int = 32, **kwargs: Any) \u2192 langchain.vectorstores.myscale.MyScale[source]#\nCreate Myscale wrapper with existing texts\nParameters\nembedding_function (Embeddings) \u2013 Function to extract text embedding\ntexts (Iterable[str]) \u2013 List or tuple of strings to be added\nconfig (MyScaleSettings, Optional) \u2013 Myscale configuration\ntext_ids (Optional[Iterable], optional) \u2013 IDs for the texts.\nDefaults to None.\nbatch_size (int, optional) \u2013 Batchsize when transmitting data to MyScale.\nDefaults to 32.\nmetadata (List[dict], optional) \u2013 metadata to texts. Defaults to None.\ninto (Other keyword arguments will pass) \u2013 [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)\nReturns\nMyScale Index\nproperty metadata_column: str#\nsimilarity_search(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nPerform a similarity search with MyScale\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let end-user to fill this and always be aware\nof SQL injection. When dealing with metadatas, remember to", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-53", "text": "of SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nReturns\nList of Documents\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, where_str: Optional[str] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nPerform a similarity search with MyScale by vectors\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let end-user to fill this and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nReturns\nList of (Document, similarity)\nReturn type\nList[Document]\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nPerform a similarity search with MyScale\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let end-user to fill this and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nReturns\nList of documents most similar to the query text\nand cosine distance in float for each.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-54", "text": "List of documents most similar to the query text\nand cosine distance in float for each.\nLower score represents more similarity.\nReturn type\nList[Document]\npydantic settings langchain.vectorstores.MyScaleSettings[source]#\nMyScale Client Configuration\nAttribute:\nmyscale_host (str)An URL to connect to MyScale backend.Defaults to \u2018localhost\u2019.\nmyscale_port (int) : URL port to connect with HTTP. Defaults to 8443.\nusername (str) : Username to login. Defaults to None.\npassword (str) : Password to login. Defaults to None.\nindex_type (str): index type string.\nindex_param (dict): index build parameter.\ndatabase (str) : Database name to find the table. Defaults to \u2018default\u2019.\ntable (str) : Table name to operate on.\nDefaults to \u2018vector_table\u2019.\nmetric (str)Metric to compute distance,supported are (\u2018l2\u2019, \u2018cosine\u2019, \u2018ip\u2019). Defaults to \u2018cosine\u2019.\ncolumn_map (Dict)Column type map to project column name onto langchainsemantics. Must have keys: text, id, vector,\nmust be same size to number of columns. For example:\n.. code-block:: python\n{\u2018id\u2019: \u2018text_id\u2019,\n\u2018vector\u2019: \u2018text_embedding\u2019,\n\u2018text\u2019: \u2018text_plain\u2019,\n\u2018metadata\u2019: \u2018metadata_dictionary_in_json\u2019,\n}\nDefaults to identity map.\nShow JSON schema{\n \"title\": \"MyScaleSettings\",", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-55", "text": "Show JSON schema{\n \"title\": \"MyScaleSettings\",\n \"description\": \"MyScale Client Configuration\\n\\nAttribute:\\n myscale_host (str) : An URL to connect to MyScale backend.\\n Defaults to 'localhost'.\\n myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.\\n username (str) : Username to login. Defaults to None.\\n password (str) : Password to login. Defaults to None.\\n index_type (str): index type string.\\n index_param (dict): index build parameter.\\n database (str) : Database name to find the table. Defaults to 'default'.\\n table (str) : Table name to operate on.\\n Defaults to 'vector_table'.\\n metric (str) : Metric to compute distance,\\n supported are ('l2', 'cosine', 'ip'). Defaults to 'cosine'.\\n column_map (Dict) : Column type map to project column name onto langchain\\n semantics. Must have keys: `text`, `id`, `vector`,\\n must be same size to number of columns. For example:\\n .. code-block:: python\\n\\n {\\n 'id': 'text_id',\\n 'vector': 'text_embedding',\\n 'text': 'text_plain',\\n 'metadata': 'metadata_dictionary_in_json',\\n }\\n\\n Defaults to identity map.\",\n \"type\": \"object\",\n \"properties\": {\n \"host\": {\n \"title\": \"Host\",\n \"default\": \"localhost\",\n \"env_names\": \"{'myscale_host'}\",\n \"type\": \"string\"\n },\n \"port\": {\n \"title\": \"Port\",", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-56", "text": "},\n \"port\": {\n \"title\": \"Port\",\n \"default\": 8443,\n \"env_names\": \"{'myscale_port'}\",\n \"type\": \"integer\"\n },\n \"username\": {\n \"title\": \"Username\",\n \"env_names\": \"{'myscale_username'}\",\n \"type\": \"string\"\n },\n \"password\": {\n \"title\": \"Password\",\n \"env_names\": \"{'myscale_password'}\",\n \"type\": \"string\"\n },\n \"index_type\": {\n \"title\": \"Index Type\",\n \"default\": \"IVFFLAT\",\n \"env_names\": \"{'myscale_index_type'}\",\n \"type\": \"string\"\n },\n \"index_param\": {\n \"title\": \"Index Param\",\n \"env_names\": \"{'myscale_index_param'}\",\n \"type\": \"object\",\n \"additionalProperties\": {\n \"type\": \"string\"\n }\n },\n \"column_map\": {\n \"title\": \"Column Map\",\n \"default\": {\n \"id\": \"id\",\n \"text\": \"text\",\n \"vector\": \"vector\",\n \"metadata\": \"metadata\"\n },\n \"env_names\": \"{'myscale_column_map'}\",\n \"type\": \"object\",\n \"additionalProperties\": {\n \"type\": \"string\"\n }\n },\n \"database\": {\n \"title\": \"Database\",\n \"default\": \"default\",\n \"env_names\": \"{'myscale_database'}\",\n \"type\": \"string\"\n },\n \"table\": {\n \"title\": \"Table\",", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-57", "text": "},\n \"table\": {\n \"title\": \"Table\",\n \"default\": \"langchain\",\n \"env_names\": \"{'myscale_table'}\",\n \"type\": \"string\"\n },\n \"metric\": {\n \"title\": \"Metric\",\n \"default\": \"cosine\",\n \"env_names\": \"{'myscale_metric'}\",\n \"type\": \"string\"\n }\n },\n \"additionalProperties\": false\n}\nConfig\nenv_file: str = .env\nenv_file_encoding: str = utf-8\nenv_prefix: str = myscale_\nFields\ncolumn_map (Dict[str, str])\ndatabase (str)\nhost (str)\nindex_param (Optional[Dict[str, str]])\nindex_type (str)\nmetric (str)\npassword (Optional[str])\nport (int)\ntable (str)\nusername (Optional[str])\nfield column_map: Dict[str, str] = {'id': 'id', 'metadata': 'metadata', 'text': 'text', 'vector': 'vector'}#\nfield database: str = 'default'#\nfield host: str = 'localhost'#\nfield index_param: Optional[Dict[str, str]] = None#\nfield index_type: str = 'IVFFLAT'#\nfield metric: str = 'cosine'#\nfield password: Optional[str] = None#\nfield port: int = 8443#\nfield table: str = 'langchain'#\nfield username: Optional[str] = None#\nclass langchain.vectorstores.OpenSearchVectorSearch(opensearch_url: str, index_name: str, embedding_function: langchain.embeddings.base.Embeddings, **kwargs: Any)[source]#\nWrapper around OpenSearch as a vector database.\nExample\nfrom langchain import OpenSearchVectorSearch", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-58", "text": "Example\nfrom langchain import OpenSearchVectorSearch\nopensearch_vector_search = OpenSearchVectorSearch(\n \"http://localhost:9200\",\n \"embeddings\",\n embedding_function\n)\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, bulk_size: int = 500, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nbulk_size \u2013 Bulk API request count; Default: 500\nReturns\nList of ids from adding the texts into the vectorstore.\nOptional Args:vector_field: Document field embeddings are stored in. Defaults to\n\u201cvector_field\u201d.\ntext_field: Document field the text of the document is stored in. Defaults\nto \u201ctext\u201d.\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, bulk_size: int = 500, **kwargs: Any) \u2192 langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch[source]#\nConstruct OpenSearchVectorSearch wrapper from raw documents.\nExample\nfrom langchain import OpenSearchVectorSearch\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nopensearch_vector_search = OpenSearchVectorSearch.from_texts(\n texts,\n embeddings,\n opensearch_url=\"http://localhost:9200\"\n)\nOpenSearch by default supports Approximate Search powered by nmslib, faiss\nand lucene engines recommended for large datasets. Also supports brute force\nsearch through Script Scoring and Painless Scripting.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-59", "text": "search through Script Scoring and Painless Scripting.\nOptional Args:vector_field: Document field embeddings are stored in. Defaults to\n\u201cvector_field\u201d.\ntext_field: Document field the text of the document is stored in. Defaults\nto \u201ctext\u201d.\nOptional Keyword Args for Approximate Search:engine: \u201cnmslib\u201d, \u201cfaiss\u201d, \u201clucene\u201d; default: \u201cnmslib\u201d\nspace_type: \u201cl2\u201d, \u201cl1\u201d, \u201ccosinesimil\u201d, \u201clinf\u201d, \u201cinnerproduct\u201d; default: \u201cl2\u201d\nef_search: Size of the dynamic list used during k-NN searches. Higher values\nlead to more accurate but slower searches; default: 512\nef_construction: Size of the dynamic list used during k-NN graph creation.\nHigher values lead to more accurate graph but slower indexing speed;\ndefault: 512\nm: Number of bidirectional links created for each new element. Large impact\non memory consumption. Between 2 and 100; default: 16\nKeyword Args for Script Scoring or Painless Scripting:is_appx_search: False\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to query.\nBy default supports Approximate Search.\nAlso supports Script Scoring and Painless Scripting.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query.\nOptional Args:vector_field: Document field embeddings are stored in. Defaults to\n\u201cvector_field\u201d.\ntext_field: Document field the text of the document is stored in. Defaults\nto \u201ctext\u201d.\nmetadata_field: Document field that metadata is stored in. Defaults to\n\u201cmetadata\u201d.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-60", "text": "metadata_field: Document field that metadata is stored in. Defaults to\n\u201cmetadata\u201d.\nCan be set to a special value \u201c*\u201d to include the entire document.\nOptional Args for Approximate Search:search_type: \u201capproximate_search\u201d; default: \u201capproximate_search\u201d\nboolean_filter: A Boolean filter consists of a Boolean query that\ncontains a k-NN query and a filter.\nsubquery_clause: Query clause on the knn vector field; default: \u201cmust\u201d\nlucene_filter: the Lucene algorithm decides whether to perform an exact\nk-NN search with pre-filtering or an approximate search with modified\npost-filtering.\nOptional Args for Script Scoring Search:search_type: \u201cscript_scoring\u201d; default: \u201capproximate_search\u201d\nspace_type: \u201cl2\u201d, \u201cl1\u201d, \u201clinf\u201d, \u201ccosinesimil\u201d, \u201cinnerproduct\u201d,\n\u201chammingbit\u201d; default: \u201cl2\u201d\npre_filter: script_score query to pre-filter documents before identifying\nnearest neighbors; default: {\u201cmatch_all\u201d: {}}\nOptional Args for Painless Scripting Search:search_type: \u201cpainless_scripting\u201d; default: \u201capproximate_search\u201d\nspace_type: \u201cl2Squared\u201d, \u201cl1Norm\u201d, \u201ccosineSimilarity\u201d; default: \u201cl2Squared\u201d\npre_filter: script_score query to pre-filter documents before identifying\nnearest neighbors; default: {\u201cmatch_all\u201d: {}}\nsimilarity_search_with_score(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn docs and it\u2019s scores most similar to query.\nBy default supports Approximate Search.\nAlso supports Script Scoring and Painless Scripting.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-61", "text": "k \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents along with its scores most similar to the query.\nOptional Args:same as similarity_search\nclass langchain.vectorstores.Pinecone(index: Any, embedding_function: Callable, text_key: str, namespace: Optional[str] = None)[source]#\nWrapper around Pinecone vector database.\nTo use, you should have the pinecone-client python package installed.\nExample\nfrom langchain.vectorstores import Pinecone\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nimport pinecone\n# The environment should be the one specified next to the API key\n# in your Pinecone console\npinecone.init(api_key=\"***\", environment=\"...\")\nindex = pinecone.Index(\"langchain-demo\")\nembeddings = OpenAIEmbeddings()\nvectorstore = Pinecone(index, embeddings.embed_query, \"text\")\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, namespace: Optional[str] = None, batch_size: int = 32, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nids \u2013 Optional list of ids to associate with the texts.\nnamespace \u2013 Optional pinecone namespace to add the texts to.\nReturns\nList of ids from adding the texts into the vectorstore.\nclassmethod from_existing_index(index_name: str, embedding: langchain.embeddings.base.Embeddings, text_key: str = 'text', namespace: Optional[str] = None) \u2192 langchain.vectorstores.pinecone.Pinecone[source]#\nLoad pinecone vectorstore from index name.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-62", "text": "Load pinecone vectorstore from index name.\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, batch_size: int = 32, text_key: str = 'text', index_name: Optional[str] = None, namespace: Optional[str] = None, **kwargs: Any) \u2192 langchain.vectorstores.pinecone.Pinecone[source]#\nConstruct Pinecone wrapper from raw documents.\nThis is a user friendly interface that:\nEmbeds documents.\nAdds the documents to a provided Pinecone index\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import Pinecone\nfrom langchain.embeddings import OpenAIEmbeddings\nimport pinecone\n# The environment should be the one specified next to the API key\n# in your Pinecone console\npinecone.init(api_key=\"***\", environment=\"...\")\nembeddings = OpenAIEmbeddings()\npinecone = Pinecone.from_texts(\n texts,\n embeddings,\n index_name=\"langchain-demo\"\n)\nsimilarity_search(query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn pinecone documents most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter \u2013 Dictionary of argument(s) to filter on metadata\nnamespace \u2013 Namespace to search in. Default will search in \u2018\u2019 namespace.\nReturns\nList of Documents most similar to the query and score for each", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-63", "text": "Returns\nList of Documents most similar to the query and score for each\nsimilarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn pinecone documents most similar to query, along with scores.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter \u2013 Dictionary of argument(s) to filter on metadata\nnamespace \u2013 Namespace to search in. Default will search in \u2018\u2019 namespace.\nReturns\nList of Documents most similar to the query and score for each\nclass langchain.vectorstores.Qdrant(client: Any, collection_name: str, embeddings: Optional[langchain.embeddings.base.Embeddings] = None, content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', embedding_function: Optional[Callable] = None)[source]#\nWrapper around Qdrant vector database.\nTo use you should have the qdrant-client package installed.\nExample\nfrom qdrant_client import QdrantClient\nfrom langchain import Qdrant\nclient = QdrantClient()\ncollection_name = \"MyCollection\"\nqdrant = Qdrant(client, collection_name, embedding_function)\nCONTENT_KEY = 'page_content'#\nMETADATA_KEY = 'metadata'#\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[Sequence[str]] = None, batch_size: int = 64, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-64", "text": "Parameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nids \u2013 Optional list of ids to associate with the texts. Ids have to be\nuuid-like strings.\nbatch_size \u2013 How many vectors upload per-request.\nDefault: 64\nReturns\nList of ids from adding the texts into the vectorstore.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[Sequence[str]] = None, location: Optional[str] = None, url: Optional[str] = None, port: Optional[int] = 6333, grpc_port: int = 6334, prefer_grpc: bool = False, https: Optional[bool] = None, api_key: Optional[str] = None, prefix: Optional[str] = None, timeout: Optional[float] = None, host: Optional[str] = None, path: Optional[str] = None, collection_name: Optional[str] = None, distance_func: str = 'Cosine', content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', batch_size: int = 64, shard_number: Optional[int] = None, replication_factor: Optional[int] = None, write_consistency_factor: Optional[int] = None, on_disk_payload: Optional[bool] = None, hnsw_config: Optional[common_types.HnswConfigDiff] = None, optimizers_config: Optional[common_types.OptimizersConfigDiff] = None, wal_config: Optional[common_types.WalConfigDiff] = None, quantization_config: Optional[common_types.QuantizationConfig] = None, init_from: Optional[common_types.InitFrom] = None, **kwargs: Any) \u2192 Qdrant[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-65", "text": "Construct Qdrant wrapper from a list of texts.\nParameters\ntexts \u2013 A list of texts to be indexed in Qdrant.\nembedding \u2013 A subclass of Embeddings, responsible for text vectorization.\nmetadatas \u2013 An optional list of metadata. If provided it has to be of the same\nlength as a list of texts.\nids \u2013 Optional list of ids to associate with the texts. Ids have to be\nuuid-like strings.\nlocation \u2013 If :memory: - use in-memory Qdrant instance.\nIf str - use it as a url parameter.\nIf None - fallback to relying on host and port parameters.\nurl \u2013 either host or str of \u201cOptional[scheme], host, Optional[port],\nOptional[prefix]\u201d. Default: None\nport \u2013 Port of the REST API interface. Default: 6333\ngrpc_port \u2013 Port of the gRPC interface. Default: 6334\nprefer_grpc \u2013 If true - use gPRC interface whenever possible in custom methods.\nDefault: False\nhttps \u2013 If true - use HTTPS(SSL) protocol. Default: None\napi_key \u2013 API key for authentication in Qdrant Cloud. Default: None\nprefix \u2013 If not None - add prefix to the REST URL path.\nExample: service/v1 will result in\nhttp://localhost:6333/service/v1/{qdrant-endpoint} for REST API.\nDefault: None\ntimeout \u2013 Timeout for REST and gRPC API requests.\nDefault: 5.0 seconds for REST and unlimited for gRPC\nhost \u2013 Host name of Qdrant service. If url and host are None, set to\n\u2018localhost\u2019. Default: None\npath \u2013 Path in which the vectors will be stored while using local mode.\nDefault: None\ncollection_name \u2013 Name of the Qdrant collection to be used. If not provided,", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-66", "text": "collection_name \u2013 Name of the Qdrant collection to be used. If not provided,\nit will be created randomly. Default: None\ndistance_func \u2013 Distance function. One of: \u201cCosine\u201d / \u201cEuclid\u201d / \u201cDot\u201d.\nDefault: \u201cCosine\u201d\ncontent_payload_key \u2013 A payload key used to store the content of the document.\nDefault: \u201cpage_content\u201d\nmetadata_payload_key \u2013 A payload key used to store the metadata of the document.\nDefault: \u201cmetadata\u201d\nbatch_size \u2013 How many vectors upload per-request.\nDefault: 64\nshard_number \u2013 Number of shards in collection. Default is 1, minimum is 1.\nreplication_factor \u2013 Replication factor for collection. Default is 1, minimum is 1.\nDefines how many copies of each shard will be created.\nHave effect only in distributed mode.\nwrite_consistency_factor \u2013 Write consistency factor for collection. Default is 1, minimum is 1.\nDefines how many replicas should apply the operation for us to consider\nit successful. Increasing this number will make the collection more\nresilient to inconsistencies, but will also make it fail if not enough\nreplicas are available.\nDoes not have any performance impact.\nHave effect only in distributed mode.\non_disk_payload \u2013 If true - point`s payload will not be stored in memory.\nIt will be read from the disk every time it is requested.\nThis setting saves RAM by (slightly) increasing the response time.\nNote: those payload values that are involved in filtering and are\nindexed - remain in RAM.\nhnsw_config \u2013 Params for HNSW index\noptimizers_config \u2013 Params for optimizer\nwal_config \u2013 Params for Write-Ahead-Log\nquantization_config \u2013 Params for quantization, if None - quantization will be disabled\ninit_from \u2013 Use data stored in another collection to initialize this collection", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-67", "text": "init_from \u2013 Use data stored in another collection to initialize this collection\n**kwargs \u2013 Additional arguments passed directly into REST client initialization\nThis is a user-friendly interface that:\n1. Creates embeddings, one for each text\n2. Initializes the Qdrant database as an in-memory docstore by default\n(and overridable to a remote docstore)\nAdds the text embeddings to the Qdrant database\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import Qdrant\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nqdrant = Qdrant.from_texts(texts, embeddings, \"localhost\")\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nDefaults to 20.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-68", "text": "Defaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsimilarity_search(query: str, k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) \u2192 List[Document][source]#\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter \u2013 Filter by metadata. Defaults to None.\nsearch_params \u2013 Additional search params\noffset \u2013 Offset of the first result to return.\nMay be used to paginate results.\nNote: large offset values may cause performance issues.\nscore_threshold \u2013 Define a minimal score threshold for the result.\nIf defined, less similar results will not be returned.\nScore of the returned result might be higher or smaller than the\nthreshold depending on the Distance function used.\nE.g. for cosine similarity only higher scores will be returned.\nconsistency \u2013 Read consistency of the search. Defines how many replicas should be\nqueried before returning the result.\nValues:\n- int - number of replicas to query, values should present in all\nqueried replicas\n\u2019majority\u2019 - query all replicas, but return values present in themajority of replicas\n\u2019quorum\u2019 - query the majority of replicas, return values present inall of them\n\u2019all\u2019 - query all replicas, and return values present in all replicas\nReturns\nList of Documents most similar to the query.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-69", "text": "Returns\nList of Documents most similar to the query.\nsimilarity_search_with_score(query: str, k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]#\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter \u2013 Filter by metadata. Defaults to None.\nsearch_params \u2013 Additional search params\noffset \u2013 Offset of the first result to return.\nMay be used to paginate results.\nNote: large offset values may cause performance issues.\nscore_threshold \u2013 Define a minimal score threshold for the result.\nIf defined, less similar results will not be returned.\nScore of the returned result might be higher or smaller than the\nthreshold depending on the Distance function used.\nE.g. for cosine similarity only higher scores will be returned.\nconsistency \u2013 Read consistency of the search. Defines how many replicas should be\nqueried before returning the result.\nValues:\n- int - number of replicas to query, values should present in all\nqueried replicas\n\u2019majority\u2019 - query all replicas, but return values present in themajority of replicas\n\u2019quorum\u2019 - query the majority of replicas, return values present inall of them\n\u2019all\u2019 - query all replicas, and return values present in all replicas\nReturns\nList of documents most similar to the query text and cosine\ndistance in float for each.\nLower score represents more similarity.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-70", "text": "distance in float for each.\nLower score represents more similarity.\nclass langchain.vectorstores.Redis(redis_url: str, index_name: str, embedding_function: typing.Callable, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', relevance_score_fn: typing.Optional[typing.Callable[[float], float]] = , **kwargs: typing.Any)[source]#\nWrapper around Redis vector database.\nTo use, you should have the redis python package installed.\nExample\nfrom langchain.vectorstores import Redis\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nvectorstore = Redis(\n redis_url=\"redis://username:password@localhost:6379\"\n index_name=\"my-index\",\n embedding_function=embeddings.embed_query,\n)\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, embeddings: Optional[List[List[float]]] = None, keys: Optional[List[str]] = None, batch_size: int = 1000, **kwargs: Any) \u2192 List[str][source]#\nAdd more texts to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings/text to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nDefaults to None.\nembeddings (Optional[List[List[float]]], optional) \u2013 Optional pre-generated\nembeddings. Defaults to None.\nkeys (Optional[List[str]], optional) \u2013 Optional key values to use as ids.\nDefaults to None.\nbatch_size (int, optional) \u2013 Batch size to use for writes. Defaults to 1000.\nReturns\nList of ids added to the vectorstore\nReturn type\nList[str]", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-71", "text": "Returns\nList of ids added to the vectorstore\nReturn type\nList[str]\nas_retriever(**kwargs: Any) \u2192 langchain.vectorstores.redis.RedisVectorStoreRetriever[source]#\nstatic drop_index(index_name: str, delete_documents: bool, **kwargs: Any) \u2192 bool[source]#\nDrop a Redis search index.\nParameters\nindex_name (str) \u2013 Name of the index to drop.\ndelete_documents (bool) \u2013 Whether to drop the associated documents.\nReturns\nWhether or not the drop was successful.\nReturn type\nbool\nclassmethod from_existing_index(embedding: langchain.embeddings.base.Embeddings, index_name: str, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', **kwargs: Any) \u2192 langchain.vectorstores.redis.Redis[source]#\nConnect to an existing Redis index.\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name: Optional[str] = None, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', **kwargs: Any) \u2192 langchain.vectorstores.redis.Redis[source]#\nCreate a Redis vectorstore from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nCreates a new index for the embeddings in Redis.\nAdds the documents to the newly created Redis index.\nThis is intended to be a quick way to get started.\n.. rubric:: Example", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-72", "text": "This is intended to be a quick way to get started.\n.. rubric:: Example\nclassmethod from_texts_return_keys(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name: Optional[str] = None, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', distance_metric: Literal['COSINE', 'IP', 'L2'] = 'COSINE', **kwargs: Any) \u2192 Tuple[langchain.vectorstores.redis.Redis, List[str]][source]#\nCreate a Redis vectorstore from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nCreates a new index for the embeddings in Redis.\nAdds the documents to the newly created Redis index.\nThis is intended to be a quick way to get started.\n.. rubric:: Example\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturns the most similar indexed documents to the query text.\nParameters\nquery (str) \u2013 The query text for which to find similar documents.\nk (int) \u2013 The number of documents to return. Default is 4.\nReturns\nA list of documents that are most similar to the query text.\nReturn type\nList[Document]\nsimilarity_search_limit_score(query: str, k: int = 4, score_threshold: float = 0.2, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturns the most similar indexed documents to the query text within the\nscore_threshold range.\nParameters\nquery (str) \u2013 The query text for which to find similar documents.\nk (int) \u2013 The number of documents to return. Default is 4.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-73", "text": "k (int) \u2013 The number of documents to return. Default is 4.\nscore_threshold (float) \u2013 The minimum matching score required for a document\n0.2. (to be considered a match. Defaults to) \u2013 \nsimilarity (Because the similarity calculation algorithm is based on cosine) \u2013 \n:param :\n:param the smaller the angle:\n:param the higher the similarity.:\nReturns\nA list of documents that are most similar to the query text,\nincluding the match score for each document.\nReturn type\nList[Document]\nNote\nIf there are no documents that satisfy the score_threshold value,\nan empty list is returned.\nsimilarity_search_with_score(query: str, k: int = 4) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query and score for each\nclass langchain.vectorstores.SKLearnVectorStore(embedding: langchain.embeddings.base.Embeddings, *, persist_path: Optional[str] = None, serializer: Literal['json', 'bson', 'parquet'] = 'json', metric: str = 'cosine', **kwargs: Any)[source]#\nA simple in-memory vector store based on the scikit-learn library\nNearestNeighbors implementation.\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nkwargs \u2013 vectorstore specific parameters", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-74", "text": "kwargs \u2013 vectorstore specific parameters\nReturns\nList of ids from adding the texts into the vectorstore.\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, persist_path: Optional[str] = None, **kwargs: Any) \u2192 langchain.vectorstores.sklearn.SKLearnVectorStore[source]#\nReturn VectorStore initialized from texts and embeddings.\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\n:param query: Text to look up documents similar to.\n:param k: Number of Documents to return. Defaults to 4.\n:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n:param lambda_mult: Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\n:param embedding: Embedding to look up documents similar to.\n:param k: Number of Documents to return. Defaults to 4.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-75", "text": ":param k: Number of Documents to return. Defaults to 4.\n:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n:param lambda_mult: Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\npersist() \u2192 None[source]#\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to query.\nsimilarity_search_with_score(query: str, *, k: int = 4, **kwargs: Any) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nclass langchain.vectorstores.SingleStoreDB(embedding: langchain.embeddings.base.Embeddings, *, table_name: str = 'embeddings', content_field: str = 'content', metadata_field: str = 'metadata', vector_field: str = 'vector', pool_size: int = 5, max_overflow: int = 10, timeout: float = 30, **kwargs: Any)[source]#\nThis class serves as a Pythonic interface to the SingleStore DB database.\nThe prerequisite for using this class is the installation of the singlestoredb\nPython package.\nThe SingleStoreDB vectorstore can be created by providing an embedding function and\nthe relevant parameters for the database connection, connection pool, and\noptionally, the names of the table and the fields to use.\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, embeddings: Optional[List[List[float]]] = None, **kwargs: Any) \u2192 List[str][source]#\nAdd more texts to the vectorstore.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-76", "text": "Add more texts to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings/text to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nDefaults to None.\nembeddings (Optional[List[List[float]]], optional) \u2013 Optional pre-generated\nembeddings. Defaults to None.\nReturns\nempty list\nReturn type\nList[str]\nas_retriever(**kwargs: Any) \u2192 langchain.vectorstores.singlestoredb.SingleStoreDBRetriever[source]#\nconnection_kwargs#\nCreate connection pool.\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, table_name: str = 'embeddings', content_field: str = 'content', metadata_field: str = 'metadata', vector_field: str = 'vector', pool_size: int = 5, max_overflow: int = 10, timeout: float = 30, **kwargs: Any) \u2192 langchain.vectorstores.singlestoredb.SingleStoreDB[source]#\nCreate a SingleStoreDB vectorstore from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nCreates a new table for the embeddings in SingleStoreDB.\nAdds the documents to the newly created table.\nThis is intended to be a quick way to get started.\n.. rubric:: Example\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturns the most similar indexed documents to the query text.\nUses cosine similarity.\nParameters\nquery (str) \u2013 The query text for which to find similar documents.\nk (int) \u2013 The number of documents to return. Default is 4.\nReturns", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-77", "text": "k (int) \u2013 The number of documents to return. Default is 4.\nReturns\nA list of documents that are most similar to the query text.\nReturn type\nList[Document]\nsimilarity_search_with_score(query: str, k: int = 4) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn docs most similar to query. Uses cosine similarity.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query and score for each\nvector_field#\nPass the rest of the kwargs to the connection.\nclass langchain.vectorstores.SupabaseVectorStore(client: supabase.client.Client, embedding: Embeddings, table_name: str, query_name: Union[str, None] = None)[source]#\nVectorStore for a Supabase postgres database. Assumes you have the pgvector\nextension installed and a match_documents (or similar) function. For more details:\nhttps://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase\nYou can implement your own match_documents function in order to limit the search\nspace to a subset of documents based on your own authorization or business logic.\nNote that the Supabase Python client does not yet support async operations.\nIf you\u2019d like to use max_marginal_relevance_search, please review the instructions\nbelow on modifying the match_documents function to return matched embeddings.\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict[Any, Any]]] = None, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-78", "text": "Parameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nkwargs \u2013 vectorstore specific parameters\nReturns\nList of ids from adding the texts into the vectorstore.\nadd_vectors(vectors: List[List[float]], documents: List[langchain.schema.Document]) \u2192 List[str][source]#\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, client: Optional[supabase.client.Client] = None, table_name: Optional[str] = 'documents', query_name: Union[str, None] = 'match_documents', **kwargs: Any) \u2192 SupabaseVectorStore[source]#\nReturn VectorStore initialized from texts and embeddings.\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search requires that query_name returns matched\nembeddings alongside the match documents. The following function\ndemonstrates how to do this:\n```sql\nCREATE FUNCTION match_documents_embeddings(query_embedding vector(1536),", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-79", "text": "```sql\nCREATE FUNCTION match_documents_embeddings(query_embedding vector(1536),\nmatch_count int)\nRETURNS TABLE(id bigint,\ncontent text,\nmetadata jsonb,\nembedding vector(1536),\nsimilarity float)\nLANGUAGE plpgsql\nAS $$\n# variable_conflict use_column\nBEGINRETURN query\nSELECT\nid,\ncontent,\nmetadata,\nembedding,\n1 -(docstore.embedding <=> query_embedding) AS similarity\nFROMdocstore\nORDER BYdocstore.embedding <=> query_embedding\nLIMIT match_count;\nEND;\n$$;\n```\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nquery_name: str#\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to embedding vector.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-80", "text": "Return docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_by_vector_returning_embeddings(query: List[float], k: int) \u2192 List[Tuple[langchain.schema.Document, float, numpy.ndarray[numpy.float32, Any]]][source]#\nsimilarity_search_by_vector_with_relevance_scores(query: List[float], k: int) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\ntable_name: str#\nclass langchain.vectorstores.Tair(embedding_function: langchain.embeddings.base.Embeddings, url: str, index_name: str, content_key: str = 'content', metadata_key: str = 'metadata', search_params: Optional[dict] = None, **kwargs: Any)[source]#\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str][source]#\nAdd texts data to an existing index.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-81", "text": "Add texts data to an existing index.\ncreate_index_if_not_exist(dim: int, distance_type: str, index_type: str, data_type: str, **kwargs: Any) \u2192 bool[source]#\nstatic drop_index(index_name: str = 'langchain', **kwargs: Any) \u2192 bool[source]#\nDrop an existing index.\nParameters\nindex_name (str) \u2013 Name of the index to drop.\nReturns\nTrue if the index is dropped successfully.\nReturn type\nbool\nclassmethod from_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name: str = 'langchain', content_key: str = 'content', metadata_key: str = 'metadata', **kwargs: Any) \u2192 langchain.vectorstores.tair.Tair[source]#\nReturn VectorStore initialized from documents and embeddings.\nclassmethod from_existing_index(embedding: langchain.embeddings.base.Embeddings, index_name: str = 'langchain', content_key: str = 'content', metadata_key: str = 'metadata', **kwargs: Any) \u2192 langchain.vectorstores.tair.Tair[source]#\nConnect to an existing Tair index.\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name: str = 'langchain', content_key: str = 'content', metadata_key: str = 'metadata', **kwargs: Any) \u2192 langchain.vectorstores.tair.Tair[source]#\nReturn VectorStore initialized from texts and embeddings.\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturns the most similar indexed documents to the query text.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-82", "text": "Returns the most similar indexed documents to the query text.\nParameters\nquery (str) \u2013 The query text for which to find similar documents.\nk (int) \u2013 The number of documents to return. Default is 4.\nReturns\nA list of documents that are most similar to the query text.\nReturn type\nList[Document]\nclass langchain.vectorstores.Tigris(client: TigrisClient, embeddings: Embeddings, index_name: str)[source]#\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nids \u2013 Optional list of ids for documents.\nIds will be autogenerated if not provided.\nkwargs \u2013 vectorstore specific parameters\nReturns\nList of ids from adding the texts into the vectorstore.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, client: Optional[TigrisClient] = None, index_name: Optional[str] = None, **kwargs: Any) \u2192 Tigris[source]#\nReturn VectorStore initialized from texts and embeddings.\nproperty search_index: TigrisVectorStore#\nsimilarity_search(query: str, k: int = 4, filter: Optional[TigrisFilter] = None, **kwargs: Any) \u2192 List[Document][source]#\nReturn docs most similar to query.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-83", "text": "Return docs most similar to query.\nsimilarity_search_with_score(query: str, k: int = 4, filter: Optional[TigrisFilter] = None) \u2192 List[Tuple[Document, float]][source]#\nRun similarity search with Chroma with distance.\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. Defaults to 4.\nfilter (Optional[TigrisFilter]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of documents most similar to the querytext with distance in float.\nReturn type\nList[Tuple[Document, float]]\nclass langchain.vectorstores.Typesense(typesense_client: Client, embedding: Embeddings, *, typesense_collection_name: Optional[str] = None, text_key: str = 'text')[source]#\nWrapper around Typesense vector search.\nTo use, you should have the typesense python package installed.\nExample\nfrom langchain.embedding.openai import OpenAIEmbeddings\nfrom langchain.vectorstores import Typesense\nimport typesense\nnode = {\n \"host\": \"localhost\", # For Typesense Cloud use xxx.a1.typesense.net\n \"port\": \"8108\", # For Typesense Cloud use 443\n \"protocol\": \"http\" # For Typesense Cloud use https\n}\ntypesense_client = typesense.Client(\n {\n \"nodes\": [node],\n \"api_key\": \"\",\n \"connection_timeout_seconds\": 2\n }\n)\ntypesense_collection_name = \"langchain-memory\"\nembedding = OpenAIEmbeddings()\nvectorstore = Typesense(\n typesense_client,\n typesense_collection_name,\n embedding.embed_query,\n \"text\",\n)", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-84", "text": "typesense_collection_name,\n embedding.embed_query,\n \"text\",\n)\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embedding and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nids \u2013 Optional list of ids to associate with the texts.\nReturns\nList of ids from adding the texts into the vectorstore.\nclassmethod from_client_params(embedding: langchain.embeddings.base.Embeddings, *, host: str = 'localhost', port: Union[str, int] = '8108', protocol: str = 'http', typesense_api_key: Optional[str] = None, connection_timeout_seconds: int = 2, **kwargs: Any) \u2192 langchain.vectorstores.typesense.Typesense[source]#\nInitialize Typesense directly from client parameters.\nExample\nfrom langchain.embedding.openai import OpenAIEmbeddings\nfrom langchain.vectorstores import Typesense\n# Pass in typesense_api_key as kwarg or set env var \"TYPESENSE_API_KEY\".\nvectorstore = Typesense(\n OpenAIEmbeddings(),\n host=\"localhost\",\n port=\"8108\",\n protocol=\"http\",\n typesense_collection_name=\"langchain-memory\",\n)", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-85", "text": "protocol=\"http\",\n typesense_collection_name=\"langchain-memory\",\n)\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, typesense_client: Optional[Client] = None, typesense_client_params: Optional[dict] = None, typesense_collection_name: Optional[str] = None, text_key: str = 'text', **kwargs: Any) \u2192 Typesense[source]#\nConstruct Typesense wrapper from raw text.\nsimilarity_search(query: str, k: int = 4, filter: Optional[str] = '', **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn typesense documents most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter \u2013 typesense filter_by expression to filter documents on\nReturns\nList of Documents most similar to the query and score for each\nsimilarity_search_with_score(query: str, k: int = 4, filter: Optional[str] = '') \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn typesense documents most similar to query, along with scores.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter \u2013 typesense filter_by expression to filter documents on\nReturns\nList of Documents most similar to the query and score for each\nclass langchain.vectorstores.Vectara(vectara_customer_id: Optional[str] = None, vectara_corpus_id: Optional[str] = None, vectara_api_key: Optional[str] = None)[source]#\nImplementation of Vector Store using Vectara (https://vectara.com).\n.. rubric:: Example", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-86", "text": ".. rubric:: Example\nfrom langchain.vectorstores import Vectara\nvectorstore = Vectara(\n vectara_customer_id=vectara_customer_id,\n vectara_corpus_id=vectara_corpus_id,\n vectara_api_key=vectara_api_key\n)\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nReturns\nList of ids from adding the texts into the vectorstore.\nas_retriever(**kwargs: Any) \u2192 langchain.vectorstores.vectara.VectaraRetriever[source]#\nclassmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 langchain.vectorstores.vectara.Vectara[source]#\nConstruct Vectara wrapper from raw documents.\nThis is intended to be a quick way to get started.\n.. rubric:: Example\nfrom langchain import Vectara\nvectara = Vectara.from_texts(\n texts,\n vectara_customer_id=customer_id,\n vectara_corpus_id=corpus_id,\n vectara_api_key=api_key,\n)\nsimilarity_search(query: str, k: int = 5, lambda_val: float = 0.025, filter: Optional[str] = None, n_sentence_context: int = 0, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn Vectara documents most similar to query, along with scores.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-87", "text": "Return Vectara documents most similar to query, along with scores.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 5.\nfilter \u2013 Dictionary of argument(s) to filter on metadata. For example a\nfilter can be \u201cdoc.rating > 3.0 and part.lang = \u2018deu\u2019\u201d} see\nhttps://docs.vectara.com/docs/search-apis/sql/filter-overview for more\ndetails.\nn_sentence_context \u2013 number of sentences before/after the matching segment\nto add\nReturns\nList of Documents most similar to the query\nsimilarity_search_with_score(query: str, k: int = 5, lambda_val: float = 0.025, filter: Optional[str] = None, n_sentence_context: int = 0, **kwargs: Any) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn Vectara documents most similar to query, along with scores.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 5.\nlambda_val \u2013 lexical match parameter for hybrid search.\nfilter \u2013 Dictionary of argument(s) to filter on metadata. For example a\nfilter can be \u201cdoc.rating > 3.0 and part.lang = \u2018deu\u2019\u201d} see\nhttps://docs.vectara.com/docs/search-apis/sql/filter-overview\nfor more details.\nn_sentence_context \u2013 number of sentences before/after the matching segment\nto add\nReturns\nList of Documents most similar to the query and score for each.\nclass langchain.vectorstores.VectorStore[source]#\nInterface for vector stores.\nasync aadd_documents(documents: List[langchain.schema.Document], **kwargs: Any) \u2192 List[str][source]#\nRun more documents through the embeddings and add to the vectorstore.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-88", "text": "Run more documents through the embeddings and add to the vectorstore.\nParameters\n(List[Document] (documents) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[langchain.schema.Document], **kwargs: Any) \u2192 List[str][source]#\nRun more documents through the embeddings and add to the vectorstore.\nParameters\n(List[Document] (documents) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nabstract add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nkwargs \u2013 vectorstore specific parameters\nReturns\nList of ids from adding the texts into the vectorstore.\nasync classmethod afrom_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, **kwargs: Any) \u2192 langchain.vectorstores.base.VST[source]#\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 langchain.vectorstores.base.VST[source]#\nReturn VectorStore initialized from texts and embeddings.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-89", "text": "Return VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 langchain.vectorstores.base.VectorStoreRetriever[source]#\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn docs most similar to query.\nclassmethod from_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, **kwargs: Any) \u2192 langchain.vectorstores.base.VST[source]#\nReturn VectorStore initialized from documents and embeddings.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-90", "text": "Return VectorStore initialized from documents and embeddings.\nabstract classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 langchain.vectorstores.base.VST[source]#\nReturn VectorStore initialized from texts and embeddings.\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-91", "text": "lambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to query using specified search type.\nabstract similarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-92", "text": "Returns\nList of Tuples of (doc, similarity_score)\nclass langchain.vectorstores.Weaviate(client: typing.Any, index_name: str, text_key: str, embedding: typing.Optional[langchain.embeddings.base.Embeddings] = None, attributes: typing.Optional[typing.List[str]] = None, relevance_score_fn: typing.Optional[typing.Callable[[float], float]] = , by_text: bool = True)[source]#\nWrapper around Weaviate vector database.\nTo use, you should have the weaviate-client python package installed.\nExample\nimport weaviate\nfrom langchain.vectorstores import Weaviate\nclient = weaviate.Client(url=os.environ[\"WEAVIATE_URL\"], ...)\nweaviate = Weaviate(client, index_name, text_key)\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str][source]#\nUpload texts with metadata (properties) to Weaviate.\nclassmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 langchain.vectorstores.weaviate.Weaviate[source]#\nConstruct Weaviate wrapper from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nCreates a new index for the embeddings in the Weaviate instance.\nAdds the documents to the newly created Weaviate index.\nThis is intended to be a quick way to get started.\nExample\nfrom langchain.vectorstores.weaviate import Weaviate\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nweaviate = Weaviate.from_texts(\n texts,\n embeddings,", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-93", "text": "weaviate = Weaviate.from_texts(\n texts,\n embeddings,\n weaviate_url=\"http://localhost:8080\"\n)\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-94", "text": "Defaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query.\nsimilarity_search_by_text(query: str, k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[langchain.schema.Document][source]#\nLook up similar documents by embedding vector in Weaviate.\nsimilarity_search_with_score(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[langchain.schema.Document, float]][source]#\nReturn list of documents most similar to the query\ntext and cosine distance in float for each.\nLower score represents more similarity.\nclass langchain.vectorstores.Zilliz(embedding_function: langchain.embeddings.base.Embeddings, collection_name: str = 'LangChainCollection', connection_args: Optional[dict[str, Any]] = None, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: Optional[bool] = False)[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "fa678718cb2d-95", "text": "classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'LangChainCollection', connection_args: dict[str, Any] = {}, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: bool = False, **kwargs: Any) \u2192 langchain.vectorstores.zilliz.Zilliz[source]#\nCreate a Zilliz collection, indexes it with HNSW, and insert data.\nParameters\ntexts (List[str]) \u2013 Text data.\nembedding (Embeddings) \u2013 Embedding function.\nmetadatas (Optional[List[dict]]) \u2013 Metadata for each text if it exists.\nDefaults to None.\ncollection_name (str, optional) \u2013 Collection name to use. Defaults to\n\u201cLangChainCollection\u201d.\nconnection_args (dict[str, Any], optional) \u2013 Connection args to use. Defaults\nto DEFAULT_MILVUS_CONNECTION.\nconsistency_level (str, optional) \u2013 Which consistency level to use. Defaults\nto \u201cSession\u201d.\nindex_params (Optional[dict], optional) \u2013 Which index_params to use.\nDefaults to None.\nsearch_params (Optional[dict], optional) \u2013 Which search params to use.\nDefaults to None.\ndrop_old (Optional[bool], optional) \u2013 Whether to drop the collection with\nthat name if it exists. Defaults to False.\nReturns\nZilliz Vector Store\nReturn type\nZilliz\nprevious\nDocument Loaders\nnext\nRetrievers\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/vectorstores.html"}
+{"id": "37669183a796-0", "text": ".rst\n.pdf\nAgent Toolkits\nAgent Toolkits#\nAgent toolkits.\npydantic model langchain.agents.agent_toolkits.AzureCognitiveServicesToolkit[source]#\nToolkit for Azure Cognitive Services.\nget_tools() \u2192 List[langchain.tools.base.BaseTool][source]#\nGet the tools in the toolkit.\npydantic model langchain.agents.agent_toolkits.FileManagementToolkit[source]#\nToolkit for interacting with a Local Files.\nfield root_dir: Optional[str] = None#\nIf specified, all file operations are made relative to root_dir.\nfield selected_tools: Optional[List[str]] = None#\nIf provided, only provide the selected tools. Defaults to all.\nget_tools() \u2192 List[langchain.tools.base.BaseTool][source]#\nGet the tools in the toolkit.\npydantic model langchain.agents.agent_toolkits.GmailToolkit[source]#\nToolkit for interacting with Gmail.\nfield api_resource: Resource [Optional]#\nget_tools() \u2192 List[langchain.tools.base.BaseTool][source]#\nGet the tools in the toolkit.\npydantic model langchain.agents.agent_toolkits.JiraToolkit[source]#\nJira Toolkit.\nfield tools: List[langchain.tools.base.BaseTool] = []#\nclassmethod from_jira_api_wrapper(jira_api_wrapper: langchain.utilities.jira.JiraAPIWrapper) \u2192 langchain.agents.agent_toolkits.jira.toolkit.JiraToolkit[source]#\nget_tools() \u2192 List[langchain.tools.base.BaseTool][source]#\nGet the tools in the toolkit.\npydantic model langchain.agents.agent_toolkits.JsonToolkit[source]#\nToolkit for interacting with a JSON spec.\nfield spec: langchain.tools.json.tool.JsonSpec [Required]#\nget_tools() \u2192 List[langchain.tools.base.BaseTool][source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-1", "text": "get_tools() \u2192 List[langchain.tools.base.BaseTool][source]#\nGet the tools in the toolkit.\npydantic model langchain.agents.agent_toolkits.NLAToolkit[source]#\nNatural Language API Toolkit Definition.\nfield nla_tools: Sequence[langchain.agents.agent_toolkits.nla.tool.NLATool] [Required]#\nList of API Endpoint Tools.\nclassmethod from_llm_and_ai_plugin(llm: langchain.base_language.BaseLanguageModel, ai_plugin: langchain.tools.plugin.AIPlugin, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) \u2192 langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit[source]#\nInstantiate the toolkit from an OpenAPI Spec URL\nclassmethod from_llm_and_ai_plugin_url(llm: langchain.base_language.BaseLanguageModel, ai_plugin_url: str, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) \u2192 langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit[source]#\nInstantiate the toolkit from an OpenAPI Spec URL\nclassmethod from_llm_and_spec(llm: langchain.base_language.BaseLanguageModel, spec: langchain.tools.openapi.utils.openapi_utils.OpenAPISpec, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) \u2192 langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit[source]#\nInstantiate the toolkit by creating tools for each operation.\nclassmethod from_llm_and_url(llm: langchain.base_language.BaseLanguageModel, open_api_url: str, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) \u2192 langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-2", "text": "Instantiate the toolkit from an OpenAPI Spec URL\nget_tools() \u2192 List[langchain.tools.base.BaseTool][source]#\nGet the tools for all the API operations.\npydantic model langchain.agents.agent_toolkits.OpenAPIToolkit[source]#\nToolkit for interacting with a OpenAPI api.\nfield json_agent: langchain.agents.agent.AgentExecutor [Required]#\nfield requests_wrapper: langchain.requests.TextRequestsWrapper [Required]#\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, json_spec: langchain.tools.json.tool.JsonSpec, requests_wrapper: langchain.requests.TextRequestsWrapper, **kwargs: Any) \u2192 langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit[source]#\nCreate json agent from llm, then initialize.\nget_tools() \u2192 List[langchain.tools.base.BaseTool][source]#\nGet the tools in the toolkit.\npydantic model langchain.agents.agent_toolkits.PlayWrightBrowserToolkit[source]#\nToolkit for web browser tools.\nfield async_browser: Optional['AsyncBrowser'] = None#\nfield sync_browser: Optional['SyncBrowser'] = None#\nclassmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) \u2192 PlayWrightBrowserToolkit[source]#\nInstantiate the toolkit.\nget_tools() \u2192 List[langchain.tools.base.BaseTool][source]#\nGet the tools in the toolkit.\npydantic model langchain.agents.agent_toolkits.PowerBIToolkit[source]#\nToolkit for interacting with PowerBI dataset.\nfield callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None#\nfield examples: Optional[str] = None#\nfield llm: langchain.base_language.BaseLanguageModel [Required]#\nfield max_iterations: int = 5#", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-3", "text": "field max_iterations: int = 5#\nfield powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]#\nget_tools() \u2192 List[langchain.tools.base.BaseTool][source]#\nGet the tools in the toolkit.\npydantic model langchain.agents.agent_toolkits.SQLDatabaseToolkit[source]#\nToolkit for interacting with SQL databases.\nfield db: langchain.sql_database.SQLDatabase [Required]#\nfield llm: langchain.base_language.BaseLanguageModel [Required]#\nget_tools() \u2192 List[langchain.tools.base.BaseTool][source]#\nGet the tools in the toolkit.\nproperty dialect: str#\nReturn string representation of dialect to use.\npydantic model langchain.agents.agent_toolkits.SparkSQLToolkit[source]#\nToolkit for interacting with Spark SQL.\nfield db: langchain.utilities.spark_sql.SparkSQL [Required]#\nfield llm: langchain.base_language.BaseLanguageModel [Required]#\nget_tools() \u2192 List[langchain.tools.base.BaseTool][source]#\nGet the tools in the toolkit.\npydantic model langchain.agents.agent_toolkits.VectorStoreInfo[source]#\nInformation about a vectorstore.\nfield description: str [Required]#\nfield name: str [Required]#\nfield vectorstore: langchain.vectorstores.base.VectorStore [Required]#\npydantic model langchain.agents.agent_toolkits.VectorStoreRouterToolkit[source]#\nToolkit for routing between vectorstores.\nfield llm: langchain.base_language.BaseLanguageModel [Optional]#\nfield vectorstores: List[langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo] [Required]#\nget_tools() \u2192 List[langchain.tools.base.BaseTool][source]#\nGet the tools in the toolkit.", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-4", "text": "Get the tools in the toolkit.\npydantic model langchain.agents.agent_toolkits.VectorStoreToolkit[source]#\nToolkit for interacting with a vector store.\nfield llm: langchain.base_language.BaseLanguageModel [Optional]#\nfield vectorstore_info: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo [Required]#\nget_tools() \u2192 List[langchain.tools.base.BaseTool][source]#\nGet the tools in the toolkit.\npydantic model langchain.agents.agent_toolkits.ZapierToolkit[source]#\nZapier Toolkit.\nfield tools: List[langchain.tools.base.BaseTool] = []#\nclassmethod from_zapier_nla_wrapper(zapier_nla_wrapper: langchain.utilities.zapier.ZapierNLAWrapper) \u2192 langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit[source]#\nCreate a toolkit from a ZapierNLAWrapper.\nget_tools() \u2192 List[langchain.tools.base.BaseTool][source]#\nGet the tools in the toolkit.\nlangchain.agents.agent_toolkits.create_csv_agent(llm: langchain.base_language.BaseLanguageModel, path: Union[str, List[str]], pandas_kwargs: Optional[dict] = None, **kwargs: Any) \u2192 langchain.agents.agent.AgentExecutor[source]#\nCreate csv agent by loading to a dataframe and using pandas agent.", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-5", "text": "langchain.agents.agent_toolkits.create_json_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.json.toolkit.JsonToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with JSON.\\nYour goal is to return a final answer by interacting with the JSON.\\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\\nDo not make up any information that is not contained in the JSON.\\nYour input to the tools should be in the form of `data[\"key\"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \\nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \\nIf you have not seen a key in one of those responses, you cannot use it.\\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\\nIf you encounter a \"KeyError\", go back to the previous key, look at the available keys, and try again.\\n\\nIf the question does not seem to be related to the JSON, just return \"I don\\'t know\" as the answer.\\nAlways begin your interaction with the `json_spec_list_keys` tool with input \"data\" to see what keys exist in the JSON.\\n\\nNote that sometimes the value at a given path is large. In this case, you will get an error \"Value is a large dictionary, should explore its keys directly\".\\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-6", "text": "you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\\n', suffix: str = 'Begin!\"\\n\\nQuestion: {input}\\nThought: I should look at the keys that exist in data to see what I have access to\\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-7", "text": "Construct a json agent from an LLM and tools.", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-8", "text": "langchain.agents.agent_toolkits.create_openapi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = \"You are an agent designed to answer questions by making web requests to an API given the openapi spec.\\n\\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\\nOnly use information provided by the tools to construct your response.\\n\\nFirst, find the base URL needed to make the request.\\n\\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\\n\\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\\n\\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\\n\\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\\n\", suffix: str = 'Begin!\\n\\nQuestion: {input}\\nThought: I should explore the spec to find the base url for the API.\\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-9", "text": "you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, return_intermediate_steps: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-10", "text": "Construct a json agent from an LLM and tools.\nlangchain.agents.agent_toolkits.create_pandas_dataframe_agent(llm: langchain.base_language.BaseLanguageModel, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: Optional[str] = None, suffix: Optional[str] = None, input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, include_df_in_prompt: Optional[bool] = True, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#\nConstruct a pandas agent from an LLM and dataframe.", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-11", "text": "langchain.agents.agent_toolkits.create_pbi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to help users interact with a PowerBI Dataset.\\n\\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return \"This does not appear to be part of this dataset.\" as the answer.\\n\\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readible format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\n', suffix: str = 'Begin!\\n\\nQuestion: {input}\\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-12", "text": "you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', examples: Optional[str] = None, input_variables: Optional[List[str]] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-13", "text": "Construct a pbi agent from an LLM and tools.", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-14", "text": "langchain.agents.agent_toolkits.create_pbi_chat_agent(llm: langchain.chat_models.base.BaseChatModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Assistant is a large language model built to help users interact with a PowerBI Dataset.\\n\\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return \"This does not appear to be part of this dataset.\" as the answer.\\n\\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readible format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\n', suffix: str = \"TOOLS\\n------\\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\\n\\n{{tools}}\\n\\n{format_instructions}\\n\\nUSER'S INPUT\\n--------------------\\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-15", "text": "(remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\\n\\n{{{{input}}}}\\n\", examples: Optional[str] = None, input_variables: Optional[List[str]] = None, memory: Optional[langchain.memory.chat_memory.BaseChatMemory] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-16", "text": "Construct a pbi agent from an Chat LLM and tools.\nIf you supply only a toolkit and no powerbi dataset, the same LLM is used for both.\nlangchain.agents.agent_toolkits.create_python_agent(llm: langchain.base_language.BaseLanguageModel, tool: langchain.tools.python.tool.PythonREPLTool, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, verbose: bool = False, prefix: str = 'You are an agent designed to write and execute python code to answer questions.\\nYou have access to a python REPL, which you can use to execute python code.\\nIf you get an error, debug your code and try again.\\nOnly use the output of your code to answer the question. \\nYou might know the answer without running any code, but you should still run the code to get the answer.\\nIf it does not seem like you can write code to answer the question, just return \"I don\\'t know\" as the answer.\\n', agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#\nConstruct a python agent from an LLM and tool.", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-17", "text": "Construct a python agent from an LLM and tool.\nlangchain.agents.agent_toolkits.create_spark_dataframe_agent(llm: langchain.llms.base.BaseLLM, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = '\\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\\nYou should use the tools below to answer the question posed of you:', suffix: str = '\\nThis is the result of `print(df.first())`:\\n{df}\\n\\nBegin!\\nQuestion: {input}\\n{agent_scratchpad}', input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#\nConstruct a spark agent from an LLM and dataframe.", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-18", "text": "langchain.agents.agent_toolkits.create_spark_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with Spark SQL.\\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\nYou can order the results by a relevant column to return the most interesting examples in the database.\\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\\nYou have access to tools for interacting with the database.\\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\\n\\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\\n\\nIf the question does not seem related to the database, just return \"I don\\'t know\" as the answer.\\n', suffix: str = 'Begin!\\n\\nQuestion: {input}\\nThought: I should look at the tables in the database to see what I can query.\\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-19", "text": "Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-20", "text": "Construct a sql agent from an LLM and tools.", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-21", "text": "langchain.agents.agent_toolkits.create_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with a SQL database.\\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\nYou can order the results by a relevant column to return the most interesting examples in the database.\\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\\nYou have access to tools for interacting with the database.\\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\\n\\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\\n\\nIf the question does not seem related to the database, just return \"I don\\'t know\" as the answer.\\n', suffix: str = 'Begin!\\n\\nQuestion: {input}\\nThought: I should look at the tables in the database to see what I can query.\u00a0 Then I should query the schema of the most relevant tables.\\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-22", "text": "to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-23", "text": "Construct a sql agent from an LLM and tools.\nlangchain.agents.agent_toolkits.create_vectorstore_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions about sets of documents.\\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\\nIf the question does not seem relevant to any of the tools provided, just return \"I don\\'t know\" as the answer.\\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#\nConstruct a vectorstore agent from an LLM and tools.\nlangchain.agents.agent_toolkits.create_vectorstore_router_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions.\\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\\nYour main task is to decide which of the tools is relevant for answering question at hand.\\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "37669183a796-24", "text": "Construct a vectorstore router agent from an LLM and tools.\nprevious\nTools\nnext\nUtilities\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html"}
+{"id": "24db8f99ff8a-0", "text": ".rst\n.pdf\nSerpAPI\nSerpAPI#\nFor backwards compatiblity.\npydantic model langchain.serpapi.SerpAPIWrapper[source]#\nWrapper around SerpAPI.\nTo use, you should have the google-search-results python package installed,\nand the environment variable SERPAPI_API_KEY set with your API key, or pass\nserpapi_api_key as a named parameter to the constructor.\nExample\nfrom langchain import SerpAPIWrapper\nserpapi = SerpAPIWrapper()\nfield aiosession: Optional[aiohttp.client.ClientSession] = None#\nfield params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}#\nfield serpapi_api_key: Optional[str] = None#\nasync aresults(query: str) \u2192 dict[source]#\nUse aiohttp to run query through SerpAPI and return the results async.\nasync arun(query: str, **kwargs: Any) \u2192 str[source]#\nRun query through SerpAPI and parse result async.\nget_params(query: str) \u2192 Dict[str, str][source]#\nGet parameters for SerpAPI.\nresults(query: str) \u2192 dict[source]#\nRun query through SerpAPI and return the raw result.\nrun(query: str, **kwargs: Any) \u2192 str[source]#\nRun query through SerpAPI and parse result.\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/serpapi.html"}
+{"id": "e56e9b3601d4-0", "text": ".rst\n.pdf\nRetrievers\nRetrievers#\npydantic model langchain.retrievers.ArxivRetriever[source]#\nIt is effectively a wrapper for ArxivAPIWrapper.\nIt wraps load() to get_relevant_documents().\nIt uses all ArxivAPIWrapper arguments without any change.\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nclass langchain.retrievers.AwsKendraIndexRetriever(kclient: Any, kendraindex: str, k: int = 3, languagecode: str = 'en')[source]#\nWrapper around AWS Kendra.\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nRun search on Kendra index and get top k documents\ndocs = get_relevant_documents(\u2018This is my query\u2019)\nk: int#\nNumber of documents to query for.\nkclient: Any#\nboto3 client for Kendra.\nkendraindex: str#\nKendra index id\nlanguagecode: str#\nLanguagecode used for querying.\npydantic model langchain.retrievers.AzureCognitiveSearchRetriever[source]#\nWrapper around Azure Cognitive Search.", "source": "https://python.langchain.com/en/latest/reference/modules/retrievers.html"}
+{"id": "e56e9b3601d4-1", "text": "Wrapper around Azure Cognitive Search.\nfield aiosession: Optional[aiohttp.client.ClientSession] = None#\nClientSession, in case we want to reuse connection for better performance.\nfield api_key: str = ''#\nAPI Key. Both Admin and Query keys work, but for reading data it\u2019s\nrecommended to use a Query key.\nfield api_version: str = '2020-06-30'#\nAPI version\nfield content_key: str = 'content'#\nKey in a retrieved result to set as the Document page_content.\nfield index_name: str = ''#\nName of Index inside Azure Cognitive Search service\nfield service_name: str = ''#\nName of Azure Cognitive Search service\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\npydantic model langchain.retrievers.ChatGPTPluginRetriever[source]#\nfield aiosession: Optional[aiohttp.client.ClientSession] = None#\nfield bearer_token: str [Required]#\nfield filter: Optional[dict] = None#\nfield top_k: int = 3#\nfield url: str [Required]#\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/retrievers.html"}
+{"id": "e56e9b3601d4-2", "text": "Get documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\npydantic model langchain.retrievers.ContextualCompressionRetriever[source]#\nRetriever that wraps a base retriever and compresses the results.\nfield base_compressor: langchain.retrievers.document_compressors.base.BaseDocumentCompressor [Required]#\nCompressor for compressing retrieved documents.\nfield base_retriever: langchain.schema.BaseRetriever [Required]#\nBase Retriever to use for getting relevant documents.\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nSequence of relevant documents\nclass langchain.retrievers.DataberryRetriever(datastore_url: str, top_k: Optional[int] = None, api_key: Optional[str] = None)[source]#\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\napi_key: Optional[str]#\ndatastore_url: str#\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\ntop_k: Optional[int]#", "source": "https://python.langchain.com/en/latest/reference/modules/retrievers.html"}
+{"id": "e56e9b3601d4-3", "text": "Returns\nList of relevant documents\ntop_k: Optional[int]#\nclass langchain.retrievers.ElasticSearchBM25Retriever(client: Any, index_name: str)[source]#\nWrapper around Elasticsearch using BM25 as a retrieval method.\nTo connect to an Elasticsearch instance that requires login credentials,\nincluding Elastic Cloud, use the Elasticsearch URL format\nhttps://username:password@es_host:9243. For example, to connect to Elastic\nCloud, create the Elasticsearch URL with the required authentication details and\npass it to the ElasticVectorSearch constructor as the named parameter\nelasticsearch_url.\nYou can obtain your Elastic Cloud URL and login credentials by logging in to the\nElastic Cloud console at https://cloud.elastic.co, selecting your deployment, and\nnavigating to the \u201cDeployments\u201d page.\nTo obtain your Elastic Cloud password for the default \u201celastic\u201d user:\nLog in to the Elastic Cloud console at https://cloud.elastic.co\nGo to \u201cSecurity\u201d > \u201cUsers\u201d\nLocate the \u201celastic\u201d user and click \u201cEdit\u201d\nClick \u201cReset password\u201d\nFollow the prompts to reset the password\nThe format for Elastic Cloud URLs is\nhttps://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.\nadd_texts(texts: Iterable[str], refresh_indices: bool = True) \u2192 List[str][source]#\nRun more texts through the embeddings and add to the retriver.\nParameters\ntexts \u2013 Iterable of strings to add to the retriever.\nrefresh_indices \u2013 bool to refresh ElasticSearch indices\nReturns\nList of ids from adding the texts into the retriever.\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents", "source": "https://python.langchain.com/en/latest/reference/modules/retrievers.html"}
+{"id": "e56e9b3601d4-4", "text": "Parameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nclassmethod create(elasticsearch_url: str, index_name: str, k1: float = 2.0, b: float = 0.75) \u2192 langchain.retrievers.elastic_search_bm25.ElasticSearchBM25Retriever[source]#\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\npydantic model langchain.retrievers.KNNRetriever[source]#\nfield embeddings: langchain.embeddings.base.Embeddings [Required]#\nfield index: Any = None#\nfield k: int = 4#\nfield relevancy_threshold: Optional[float] = None#\nfield texts: List[str] [Required]#\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nclassmethod from_texts(texts: List[str], embeddings: langchain.embeddings.base.Embeddings, **kwargs: Any) \u2192 langchain.retrievers.knn.KNNRetriever[source]#\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nclass langchain.retrievers.MergerRetriever(retrievers: List[langchain.schema.BaseRetriever])[source]#\nThis class merges the results of multiple retrievers.\nParameters\nretrievers \u2013 A list of retrievers to merge.", "source": "https://python.langchain.com/en/latest/reference/modules/retrievers.html"}
+{"id": "e56e9b3601d4-5", "text": "Parameters\nretrievers \u2013 A list of retrievers to merge.\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nAsynchronously get the relevant documents for a given query.\nParameters\nquery \u2013 The query to search for.\nReturns\nA list of relevant documents.\nasync amerge_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nAsynchronously merge the results of the retrievers.\nParameters\nquery \u2013 The query to search for.\nReturns\nA list of merged documents.\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet the relevant documents for a given query.\nParameters\nquery \u2013 The query to search for.\nReturns\nA list of relevant documents.\nmerge_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nMerge the results of the retrievers.\nParameters\nquery \u2013 The query to search for.\nReturns\nA list of merged documents.\nclass langchain.retrievers.MetalRetriever(client: Any, params: Optional[dict] = None)[source]#\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\npydantic model langchain.retrievers.PineconeHybridSearchRetriever[source]#\nfield alpha: float = 0.5#\nfield embeddings: langchain.embeddings.base.Embeddings [Required]#\nfield index: Any = None#", "source": "https://python.langchain.com/en/latest/reference/modules/retrievers.html"}
+{"id": "e56e9b3601d4-6", "text": "field index: Any = None#\nfield sparse_encoder: Any = None#\nfield top_k: int = 4#\nadd_texts(texts: List[str], ids: Optional[List[str]] = None, metadatas: Optional[List[dict]] = None) \u2192 None[source]#\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\npydantic model langchain.retrievers.PubMedRetriever[source]#\nIt is effectively a wrapper for PubMedAPIWrapper.\nIt wraps load() to get_relevant_documents().\nIt uses all PubMedAPIWrapper arguments without any change.\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\npydantic model langchain.retrievers.RemoteLangChainRetriever[source]#\nfield headers: Optional[dict] = None#\nfield input_key: str = 'message'#\nfield metadata_key: str = 'metadata'#\nfield page_content_key: str = 'page_content'#\nfield response_key: str = 'response'#\nfield url: str [Required]#", "source": "https://python.langchain.com/en/latest/reference/modules/retrievers.html"}
+{"id": "e56e9b3601d4-7", "text": "field response_key: str = 'response'#\nfield url: str [Required]#\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\npydantic model langchain.retrievers.SVMRetriever[source]#\nfield embeddings: langchain.embeddings.base.Embeddings [Required]#\nfield index: Any = None#\nfield k: int = 4#\nfield relevancy_threshold: Optional[float] = None#\nfield texts: List[str] [Required]#\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nclassmethod from_texts(texts: List[str], embeddings: langchain.embeddings.base.Embeddings, **kwargs: Any) \u2192 langchain.retrievers.svm.SVMRetriever[source]#\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\npydantic model langchain.retrievers.SelfQueryRetriever[source]#\nRetriever that wraps around a vector store and uses an LLM to generate\nthe vector store queries.\nfield llm_chain: langchain.chains.llm.LLMChain [Required]#\nThe LLMChain for generating the vector store queries.", "source": "https://python.langchain.com/en/latest/reference/modules/retrievers.html"}
+{"id": "e56e9b3601d4-8", "text": "The LLMChain for generating the vector store queries.\nfield search_kwargs: dict [Optional]#\nKeyword arguments to pass in to the vector store search.\nfield search_type: str = 'similarity'#\nThe search type to perform on the vector store.\nfield structured_query_translator: langchain.chains.query_constructor.ir.Visitor [Required]#\nTranslator for turning internal query language into vectorstore search params.\nfield vectorstore: langchain.vectorstores.base.VectorStore [Required]#\nThe underlying vector store from which documents will be retrieved.\nfield verbose: bool = False#\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, document_contents: str, metadata_field_info: List[langchain.chains.query_constructor.schema.AttributeInfo], structured_query_translator: Optional[langchain.chains.query_constructor.ir.Visitor] = None, chain_kwargs: Optional[Dict] = None, enable_limit: bool = False, **kwargs: Any) \u2192 langchain.retrievers.self_query.base.SelfQueryRetriever[source]#\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\npydantic model langchain.retrievers.TFIDFRetriever[source]#\nfield docs: List[langchain.schema.Document] [Required]#\nfield k: int = 4#\nfield tfidf_array: Any = None#\nfield vectorizer: Any = None#", "source": "https://python.langchain.com/en/latest/reference/modules/retrievers.html"}
+{"id": "e56e9b3601d4-9", "text": "field tfidf_array: Any = None#\nfield vectorizer: Any = None#\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nclassmethod from_documents(documents: Iterable[langchain.schema.Document], *, tfidf_params: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 langchain.retrievers.tfidf.TFIDFRetriever[source]#\nclassmethod from_texts(texts: Iterable[str], metadatas: Optional[Iterable[dict]] = None, tfidf_params: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 langchain.retrievers.tfidf.TFIDFRetriever[source]#\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\npydantic model langchain.retrievers.TimeWeightedVectorStoreRetriever[source]#\nRetriever combining embedding similarity with recency.\nfield decay_rate: float = 0.01#\nThe exponential decay factor used as (1.0-decay_rate)**(hrs_passed).\nfield default_salience: Optional[float] = None#\nThe salience to assign memories not retrieved from the vector store.\nNone assigns no salience to documents not fetched from the vector store.\nfield k: int = 4#\nThe maximum number of documents to retrieve in a given call.\nfield memory_stream: List[langchain.schema.Document] [Optional]#\nThe memory_stream of documents to search through.\nfield other_score_keys: List[str] = []#", "source": "https://python.langchain.com/en/latest/reference/modules/retrievers.html"}
+{"id": "e56e9b3601d4-10", "text": "field other_score_keys: List[str] = []#\nOther keys in the metadata to factor into the score, e.g. \u2018importance\u2019.\nfield search_kwargs: dict [Optional]#\nKeyword arguments to pass to the vectorstore similarity search.\nfield vectorstore: langchain.vectorstores.base.VectorStore [Required]#\nThe vectorstore to store documents and determine salience.\nasync aadd_documents(documents: List[langchain.schema.Document], **kwargs: Any) \u2192 List[str][source]#\nAdd documents to vectorstore.\nadd_documents(documents: List[langchain.schema.Document], **kwargs: Any) \u2192 List[str][source]#\nAdd documents to vectorstore.\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nReturn documents that are relevant to the query.\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nReturn documents that are relevant to the query.\nget_salient_docs(query: str) \u2192 Dict[int, Tuple[langchain.schema.Document, float]][source]#\nReturn documents that are salient to the query.\nclass langchain.retrievers.VespaRetriever(app: Vespa, body: Dict, content_field: str, metadata_fields: Optional[Sequence[str]] = None)[source]#\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents", "source": "https://python.langchain.com/en/latest/reference/modules/retrievers.html"}
+{"id": "e56e9b3601d4-11", "text": "Parameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nclassmethod from_params(url: str, content_field: str, *, k: Optional[int] = None, metadata_fields: Union[Sequence[str], Literal['*']] = (), sources: Optional[Union[Sequence[str], Literal['*']]] = None, _filter: Optional[str] = None, yql: Optional[str] = None, **kwargs: Any) \u2192 langchain.retrievers.vespa_retriever.VespaRetriever[source]#\nInstantiate retriever from params.\nParameters\nurl (str) \u2013 Vespa app URL.\ncontent_field (str) \u2013 Field in results to return as Document page_content.\nk (Optional[int]) \u2013 Number of Documents to return. Defaults to None.\nmetadata_fields (Sequence[str] or \"*\") \u2013 Fields in results to include in\ndocument metadata. Defaults to empty tuple ().\nsources (Sequence[str] or \"*\" or None) \u2013 Sources to retrieve\nfrom. Defaults to None.\n_filter (Optional[str]) \u2013 Document filter condition expressed in YQL.\nDefaults to None.\nyql (Optional[str]) \u2013 Full YQL query to be used. Should not be specified\nif _filter or sources are specified. Defaults to None.\nkwargs (Any) \u2013 Keyword arguments added to query body.\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nget_relevant_documents_with_filter(query: str, *, _filter: Optional[str] = None) \u2192 List[langchain.schema.Document][source]#", "source": "https://python.langchain.com/en/latest/reference/modules/retrievers.html"}
+{"id": "e56e9b3601d4-12", "text": "class langchain.retrievers.WeaviateHybridSearchRetriever(client: Any, index_name: str, text_key: str, alpha: float = 0.5, k: int = 4, attributes: Optional[List[str]] = None, create_schema_if_missing: bool = True)[source]#\nclass Config[source]#\nConfiguration for this pydantic object.\narbitrary_types_allowed = True#\nextra = 'forbid'#\nadd_documents(docs: List[langchain.schema.Document], **kwargs: Any) \u2192 List[str][source]#\nUpload documents to Weaviate.\nasync aget_relevant_documents(query: str, where_filter: Optional[Dict[str, object]] = None) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nget_relevant_documents(query: str, where_filter: Optional[Dict[str, object]] = None) \u2192 List[langchain.schema.Document][source]#\nLook up similar documents in Weaviate.\npydantic model langchain.retrievers.WikipediaRetriever[source]#\nIt is effectively a wrapper for WikipediaAPIWrapper.\nIt wraps load() to get_relevant_documents().\nIt uses all WikipediaAPIWrapper arguments without any change.\nasync aget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nget_relevant_documents(query: str) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents", "source": "https://python.langchain.com/en/latest/reference/modules/retrievers.html"}
+{"id": "e56e9b3601d4-13", "text": "Parameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nclass langchain.retrievers.ZepRetriever(session_id: str, url: str, top_k: Optional[int] = None)[source]#\nA Retriever implementation for the Zep long-term memory store. Search your\nuser\u2019s long-term chat history with Zep.\nNote: You will need to provide the user\u2019s session_id to use this retriever.\nMore on Zep:\nZep provides long-term conversation storage for LLM apps. The server stores,\nsummarizes, embeds, indexes, and enriches conversational AI chat\nhistories, and exposes them via simple, low-latency APIs.\nFor server installation instructions, see:\nhttps://getzep.github.io/deployment/quickstart/\nasync aget_relevant_documents(query: str, metadata: Optional[Dict] = None) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nget_relevant_documents(query: str, metadata: Optional[Dict] = None) \u2192 List[langchain.schema.Document][source]#\nGet documents relevant for a query.\nParameters\nquery \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nprevious\nVector Stores\nnext\nDocument Compressors\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/retrievers.html"}
+{"id": "7cef05c07832-0", "text": ".rst\n.pdf\nText Splitter\nText Splitter#\nFunctionality for splitting text.\nclass langchain.text_splitter.CharacterTextSplitter(separator: str = '\\n\\n', **kwargs: Any)[source]#\nImplementation of splitting text that looks at characters.\nsplit_text(text: str) \u2192 List[str][source]#\nSplit incoming text and return chunks.\nclass langchain.text_splitter.Language(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]#\nCPP = 'cpp'#\nGO = 'go'#\nHTML = 'html'#\nJAVA = 'java'#\nJS = 'js'#\nLATEX = 'latex'#\nMARKDOWN = 'markdown'#\nPHP = 'php'#\nPROTO = 'proto'#\nPYTHON = 'python'#\nRST = 'rst'#\nRUBY = 'ruby'#\nRUST = 'rust'#\nSCALA = 'scala'#\nSWIFT = 'swift'#\nclass langchain.text_splitter.LatexTextSplitter(**kwargs: Any)[source]#\nAttempts to split the text along Latex-formatted layout elements.\nclass langchain.text_splitter.MarkdownTextSplitter(**kwargs: Any)[source]#\nAttempts to split the text along Markdown-formatted headings.\nclass langchain.text_splitter.NLTKTextSplitter(separator: str = '\\n\\n', **kwargs: Any)[source]#\nImplementation of splitting text that looks at sentences using NLTK.\nsplit_text(text: str) \u2192 List[str][source]#\nSplit incoming text and return chunks.\nclass langchain.text_splitter.PythonCodeTextSplitter(**kwargs: Any)[source]#\nAttempts to split the text along Python syntax.", "source": "https://python.langchain.com/en/latest/reference/modules/text_splitter.html"}
+{"id": "7cef05c07832-1", "text": "Attempts to split the text along Python syntax.\nclass langchain.text_splitter.RecursiveCharacterTextSplitter(separators: Optional[List[str]] = None, keep_separator: bool = True, **kwargs: Any)[source]#\nImplementation of splitting text that looks at characters.\nRecursively tries to split by different characters to find one\nthat works.\nclassmethod from_language(language: langchain.text_splitter.Language, **kwargs: Any) \u2192 langchain.text_splitter.RecursiveCharacterTextSplitter[source]#\nstatic get_separators_for_language(language: langchain.text_splitter.Language) \u2192 List[str][source]#\nsplit_text(text: str) \u2192 List[str][source]#\nSplit text into multiple components.\nclass langchain.text_splitter.SentenceTransformersTokenTextSplitter(chunk_overlap: int = 50, model_name: str = 'sentence-transformers/all-mpnet-base-v2', tokens_per_chunk: Optional[int] = None, **kwargs: Any)[source]#\nImplementation of splitting text that looks at tokens.\ncount_tokens(*, text: str) \u2192 int[source]#\nsplit_text(text: str) \u2192 List[str][source]#\nSplit text into multiple components.\nclass langchain.text_splitter.SpacyTextSplitter(separator: str = '\\n\\n', pipeline: str = 'en_core_web_sm', **kwargs: Any)[source]#\nImplementation of splitting text that looks at sentences using Spacy.\nsplit_text(text: str) \u2192 List[str][source]#\nSplit incoming text and return chunks.\nclass langchain.text_splitter.TextSplitter(chunk_size: int = 4000, chunk_overlap: int = 200, length_function: typing.Callable[[str], int] = , keep_separator: bool = False, add_start_index: bool = False)[source]#\nInterface for splitting text into chunks.", "source": "https://python.langchain.com/en/latest/reference/modules/text_splitter.html"}
+{"id": "7cef05c07832-2", "text": "Interface for splitting text into chunks.\nasync atransform_documents(documents: Sequence[langchain.schema.Document], **kwargs: Any) \u2192 Sequence[langchain.schema.Document][source]#\nAsynchronously transform a sequence of documents by splitting them.\ncreate_documents(texts: List[str], metadatas: Optional[List[dict]] = None) \u2192 List[langchain.schema.Document][source]#\nCreate documents from a list of texts.\nclassmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) \u2192 langchain.text_splitter.TextSplitter[source]#\nText splitter that uses HuggingFace tokenizer to count length.\nclassmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) \u2192 langchain.text_splitter.TS[source]#\nText splitter that uses tiktoken encoder to count length.\nsplit_documents(documents: Iterable[langchain.schema.Document]) \u2192 List[langchain.schema.Document][source]#\nSplit documents.\nabstract split_text(text: str) \u2192 List[str][source]#\nSplit text into multiple components.\ntransform_documents(documents: Sequence[langchain.schema.Document], **kwargs: Any) \u2192 Sequence[langchain.schema.Document][source]#\nTransform sequence of documents by splitting them.\nclass langchain.text_splitter.TokenTextSplitter(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any)[source]#\nImplementation of splitting text that looks at tokens.", "source": "https://python.langchain.com/en/latest/reference/modules/text_splitter.html"}
+{"id": "7cef05c07832-3", "text": "Implementation of splitting text that looks at tokens.\nsplit_text(text: str) \u2192 List[str][source]#\nSplit text into multiple components.\nclass langchain.text_splitter.Tokenizer(chunk_overlap: 'int', tokens_per_chunk: 'int', decode: 'Callable[[list[int]], str]', encode: 'Callable[[str], List[int]]')[source]#\nchunk_overlap: int#\ndecode: Callable[[list[int]], str]#\nencode: Callable[[str], List[int]]#\ntokens_per_chunk: int#\nlangchain.text_splitter.split_text_on_tokens(*, text: str, tokenizer: langchain.text_splitter.Tokenizer) \u2192 List[str][source]#\nSplit incoming text and return chunks.\nprevious\nDocstore\nnext\nDocument Loaders\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/text_splitter.html"}
+{"id": "bdbf99f63d6a-0", "text": ".rst\n.pdf\nPromptTemplates\nPromptTemplates#\nPrompt template classes.\npydantic model langchain.prompts.BaseChatPromptTemplate[source]#\nformat(**kwargs: Any) \u2192 str[source]#\nFormat the prompt with the inputs.\nParameters\nkwargs \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nExample:\nprompt.format(variable1=\"foo\")\nabstract format_messages(**kwargs: Any) \u2192 List[langchain.schema.BaseMessage][source]#\nFormat kwargs into a list of messages.\nformat_prompt(**kwargs: Any) \u2192 langchain.schema.PromptValue[source]#\nCreate Chat Messages.\npydantic model langchain.prompts.BasePromptTemplate[source]#\nBase class for all prompt templates, returning a prompt.\nfield input_variables: List[str] [Required]#\nA list of the names of the variables the prompt template expects.\nfield output_parser: Optional[langchain.schema.BaseOutputParser] = None#\nHow to parse the output of calling an LLM on this formatted prompt.\ndict(**kwargs: Any) \u2192 Dict[source]#\nReturn dictionary representation of prompt.\nabstract format(**kwargs: Any) \u2192 str[source]#\nFormat the prompt with the inputs.\nParameters\nkwargs \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nExample:\nprompt.format(variable1=\"foo\")\nabstract format_prompt(**kwargs: Any) \u2192 langchain.schema.PromptValue[source]#\nCreate Chat Messages.\npartial(**kwargs: Union[str, Callable[[], str]]) \u2192 langchain.prompts.base.BasePromptTemplate[source]#\nReturn a partial of the prompt template.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None[source]#\nSave the prompt.\nParameters\nfile_path \u2013 Path to directory to save prompt to.\nExample:\n.. code-block:: python", "source": "https://python.langchain.com/en/latest/reference/modules/prompts.html"}
+{"id": "bdbf99f63d6a-1", "text": "Example:\n.. code-block:: python\nprompt.save(file_path=\u201dpath/prompt.yaml\u201d)\npydantic model langchain.prompts.ChatPromptTemplate[source]#\nformat(**kwargs: Any) \u2192 str[source]#\nFormat the prompt with the inputs.\nParameters\nkwargs \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nExample:\nprompt.format(variable1=\"foo\")\nformat_messages(**kwargs: Any) \u2192 List[langchain.schema.BaseMessage][source]#\nFormat kwargs into a list of messages.\npartial(**kwargs: Union[str, Callable[[], str]]) \u2192 langchain.prompts.base.BasePromptTemplate[source]#\nReturn a partial of the prompt template.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None[source]#\nSave the prompt.\nParameters\nfile_path \u2013 Path to directory to save prompt to.\nExample:\n.. code-block:: python\nprompt.save(file_path=\u201dpath/prompt.yaml\u201d)\npydantic model langchain.prompts.FewShotPromptTemplate[source]#\nPrompt template that contains few shot examples.\nfield example_prompt: langchain.prompts.prompt.PromptTemplate [Required]#\nPromptTemplate used to format an individual example.\nfield example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None#\nExampleSelector to choose the examples to format into the prompt.\nEither this or examples should be provided.\nfield example_separator: str = '\\n\\n'#\nString separator used to join the prefix, the examples, and suffix.\nfield examples: Optional[List[dict]] = None#\nExamples to format into the prompt.\nEither this or example_selector should be provided.\nfield input_variables: List[str] [Required]#\nA list of the names of the variables the prompt template expects.\nfield prefix: str = ''#", "source": "https://python.langchain.com/en/latest/reference/modules/prompts.html"}
+{"id": "bdbf99f63d6a-2", "text": "field prefix: str = ''#\nA prompt template string to put before the examples.\nfield suffix: str [Required]#\nA prompt template string to put after the examples.\nfield template_format: str = 'f-string'#\nThe format of the prompt template. Options are: \u2018f-string\u2019, \u2018jinja2\u2019.\nfield validate_template: bool = True#\nWhether or not to try validating the template.\ndict(**kwargs: Any) \u2192 Dict[source]#\nReturn a dictionary of the prompt.\nformat(**kwargs: Any) \u2192 str[source]#\nFormat the prompt with the inputs.\nParameters\nkwargs \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nExample:\nprompt.format(variable1=\"foo\")\npydantic model langchain.prompts.FewShotPromptWithTemplates[source]#\nPrompt template that contains few shot examples.\nfield example_prompt: langchain.prompts.prompt.PromptTemplate [Required]#\nPromptTemplate used to format an individual example.\nfield example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None#\nExampleSelector to choose the examples to format into the prompt.\nEither this or examples should be provided.\nfield example_separator: str = '\\n\\n'#\nString separator used to join the prefix, the examples, and suffix.\nfield examples: Optional[List[dict]] = None#\nExamples to format into the prompt.\nEither this or example_selector should be provided.\nfield input_variables: List[str] [Required]#\nA list of the names of the variables the prompt template expects.\nfield prefix: Optional[langchain.prompts.base.StringPromptTemplate] = None#\nA PromptTemplate to put before the examples.\nfield suffix: langchain.prompts.base.StringPromptTemplate [Required]#\nA PromptTemplate to put after the examples.", "source": "https://python.langchain.com/en/latest/reference/modules/prompts.html"}
+{"id": "bdbf99f63d6a-3", "text": "A PromptTemplate to put after the examples.\nfield template_format: str = 'f-string'#\nThe format of the prompt template. Options are: \u2018f-string\u2019, \u2018jinja2\u2019.\nfield validate_template: bool = True#\nWhether or not to try validating the template.\ndict(**kwargs: Any) \u2192 Dict[source]#\nReturn a dictionary of the prompt.\nformat(**kwargs: Any) \u2192 str[source]#\nFormat the prompt with the inputs.\nParameters\nkwargs \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nExample:\nprompt.format(variable1=\"foo\")\npydantic model langchain.prompts.MessagesPlaceholder[source]#\nPrompt template that assumes variable is already list of messages.\nformat_messages(**kwargs: Any) \u2192 List[langchain.schema.BaseMessage][source]#\nTo a BaseMessage.\nproperty input_variables: List[str]#\nInput variables for this prompt template.\nlangchain.prompts.Prompt#\nalias of langchain.prompts.prompt.PromptTemplate\npydantic model langchain.prompts.PromptTemplate[source]#\nSchema to represent a prompt for an LLM.\nExample\nfrom langchain import PromptTemplate\nprompt = PromptTemplate(input_variables=[\"foo\"], template=\"Say {foo}\")\nfield input_variables: List[str] [Required]#\nA list of the names of the variables the prompt template expects.\nfield template: str [Required]#\nThe prompt template.\nfield template_format: str = 'f-string'#\nThe format of the prompt template. Options are: \u2018f-string\u2019, \u2018jinja2\u2019.\nfield validate_template: bool = True#\nWhether or not to try validating the template.\nformat(**kwargs: Any) \u2192 str[source]#\nFormat the prompt with the inputs.\nParameters\nkwargs \u2013 Any arguments to be passed to the prompt template.\nReturns", "source": "https://python.langchain.com/en/latest/reference/modules/prompts.html"}
+{"id": "bdbf99f63d6a-4", "text": "Parameters\nkwargs \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nExample:\nprompt.format(variable1=\"foo\")\nclassmethod from_examples(examples: List[str], suffix: str, input_variables: List[str], example_separator: str = '\\n\\n', prefix: str = '', **kwargs: Any) \u2192 langchain.prompts.prompt.PromptTemplate[source]#\nTake examples in list format with prefix and suffix to create a prompt.\nIntended to be used as a way to dynamically create a prompt from examples.\nParameters\nexamples \u2013 List of examples to use in the prompt.\nsuffix \u2013 String to go after the list of examples. Should generally\nset up the user\u2019s input.\ninput_variables \u2013 A list of variable names the final prompt template\nwill expect.\nexample_separator \u2013 The separator to use in between examples. Defaults\nto two new line characters.\nprefix \u2013 String that should go before any examples. Generally includes\nexamples. Default to an empty string.\nReturns\nThe final prompt generated.\nclassmethod from_file(template_file: Union[str, pathlib.Path], input_variables: List[str], **kwargs: Any) \u2192 langchain.prompts.prompt.PromptTemplate[source]#\nLoad a prompt from a file.\nParameters\ntemplate_file \u2013 The path to the file containing the prompt template.\ninput_variables \u2013 A list of variable names the final prompt template\nwill expect.\nReturns\nThe prompt loaded from the file.\nclassmethod from_template(template: str, **kwargs: Any) \u2192 langchain.prompts.prompt.PromptTemplate[source]#\nLoad a prompt template from a template.\npydantic model langchain.prompts.StringPromptTemplate[source]#\nString prompt should expose the format method, returning a prompt.\nformat_prompt(**kwargs: Any) \u2192 langchain.schema.PromptValue[source]#\nCreate Chat Messages.", "source": "https://python.langchain.com/en/latest/reference/modules/prompts.html"}
+{"id": "bdbf99f63d6a-5", "text": "Create Chat Messages.\nlangchain.prompts.load_prompt(path: Union[str, pathlib.Path]) \u2192 langchain.prompts.base.BasePromptTemplate[source]#\nUnified method for loading a prompt from LangChainHub or local fs.\nprevious\nPrompts\nnext\nExample Selector\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/prompts.html"}
+{"id": "b7a7c0737c44-0", "text": ".rst\n.pdf\nExperimental Modules\n Contents \nAutonomous Agents\nGenerative Agents\nExperimental Modules#\nThis module contains experimental modules and reproductions of existing work using LangChain primitives.\nAutonomous Agents#\nHere, we document the BabyAGI and AutoGPT classes from the langchain.experimental module.\nclass langchain.experimental.BabyAGI(*, memory: Optional[langchain.schema.BaseMemory] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, verbose: bool = None, task_list: collections.deque = None, task_creation_chain: langchain.chains.base.Chain, task_prioritization_chain: langchain.chains.base.Chain, execution_chain: langchain.chains.base.Chain, task_id_counter: int = 1, vectorstore: langchain.vectorstores.base.VectorStore, max_iterations: Optional[int] = None)[source]#\nController model for the BabyAGI agent.\nmodel Config[source]#\nConfiguration for this pydantic object.\narbitrary_types_allowed = True#\nexecute_task(objective: str, task: str, k: int = 5) \u2192 str[source]#\nExecute a task.\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, verbose: bool = False, task_execution_chain: Optional[langchain.chains.base.Chain] = None, **kwargs: Dict[str, Any]) \u2192 langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI[source]#\nInitialize the BabyAGI Controller.\nget_next_task(result: str, task_description: str, objective: str) \u2192 List[Dict][source]#\nGet the next task.", "source": "https://python.langchain.com/en/latest/reference/modules/experimental.html"}
+{"id": "b7a7c0737c44-1", "text": "Get the next task.\nproperty input_keys: List[str]#\nInput keys this chain expects.\nproperty output_keys: List[str]#\nOutput keys this chain expects.\nprioritize_tasks(this_task_id: int, objective: str) \u2192 List[Dict][source]#\nPrioritize tasks.\nclass langchain.experimental.AutoGPT(ai_name: str, memory: langchain.vectorstores.base.VectorStoreRetriever, chain: langchain.chains.llm.LLMChain, output_parser: langchain.experimental.autonomous_agents.autogpt.output_parser.BaseAutoGPTOutputParser, tools: List[langchain.tools.base.BaseTool], feedback_tool: Optional[langchain.tools.human.tool.HumanInputRun] = None)[source]#\nAgent class for interacting with Auto-GPT.\nGenerative Agents#\nHere, we document the GenerativeAgent and GenerativeAgentMemory classes from the langchain.experimental module.\nclass langchain.experimental.GenerativeAgent(*, name: str, age: Optional[int] = None, traits: str = 'N/A', status: str, memory: langchain.experimental.generative_agents.memory.GenerativeAgentMemory, llm: langchain.base_language.BaseLanguageModel, verbose: bool = False, summary: str = '', summary_refresh_seconds: int = 3600, last_refreshed: datetime.datetime = None, daily_summaries: List[str] = None)[source]#\nA character with memory and innate characteristics.\nmodel Config[source]#\nConfiguration for this pydantic object.\narbitrary_types_allowed = True#\nfield age: Optional[int] = None#\nThe optional age of the character.\nfield daily_summaries: List[str] [Optional]#\nSummary of the events in the plan that the agent took.", "source": "https://python.langchain.com/en/latest/reference/modules/experimental.html"}
+{"id": "b7a7c0737c44-2", "text": "Summary of the events in the plan that the agent took.\ngenerate_dialogue_response(observation: str, now: Optional[datetime.datetime] = None) \u2192 Tuple[bool, str][source]#\nReact to a given observation.\ngenerate_reaction(observation: str, now: Optional[datetime.datetime] = None) \u2192 Tuple[bool, str][source]#\nReact to a given observation.\nget_full_header(force_refresh: bool = False, now: Optional[datetime.datetime] = None) \u2192 str[source]#\nReturn a full header of the agent\u2019s status, summary, and current time.\nget_summary(force_refresh: bool = False, now: Optional[datetime.datetime] = None) \u2192 str[source]#\nReturn a descriptive summary of the agent.\nfield last_refreshed: datetime.datetime [Optional]#\nThe last time the character\u2019s summary was regenerated.\nfield llm: langchain.base_language.BaseLanguageModel [Required]#\nThe underlying language model.\nfield memory: langchain.experimental.generative_agents.memory.GenerativeAgentMemory [Required]#\nThe memory object that combines relevance, recency, and \u2018importance\u2019.\nfield name: str [Required]#\nThe character\u2019s name.\nfield status: str [Required]#\nThe traits of the character you wish not to change.\nsummarize_related_memories(observation: str) \u2192 str[source]#\nSummarize memories that are most relevant to an observation.\nfield summary: str = ''#\nStateful self-summary generated via reflection on the character\u2019s memory.\nfield summary_refresh_seconds: int = 3600#\nHow frequently to re-generate the summary.\nfield traits: str = 'N/A'#\nPermanent traits to ascribe to the character.", "source": "https://python.langchain.com/en/latest/reference/modules/experimental.html"}
+{"id": "b7a7c0737c44-3", "text": "field traits: str = 'N/A'#\nPermanent traits to ascribe to the character.\nclass langchain.experimental.GenerativeAgentMemory(*, llm: langchain.base_language.BaseLanguageModel, memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever, verbose: bool = False, reflection_threshold: Optional[float] = None, current_plan: List[str] = [], importance_weight: float = 0.15, aggregate_importance: float = 0.0, max_tokens_limit: int = 1200, queries_key: str = 'queries', most_recent_memories_token_key: str = 'recent_memories_token', add_memory_key: str = 'add_memory', relevant_memories_key: str = 'relevant_memories', relevant_memories_simple_key: str = 'relevant_memories_simple', most_recent_memories_key: str = 'most_recent_memories', now_key: str = 'now', reflecting: bool = False)[source]#\nadd_memories(memory_content: str, now: Optional[datetime.datetime] = None) \u2192 List[str][source]#\nAdd an observations or memories to the agent\u2019s memory.\nadd_memory(memory_content: str, now: Optional[datetime.datetime] = None) \u2192 List[str][source]#\nAdd an observation or memory to the agent\u2019s memory.\nfield aggregate_importance: float = 0.0#\nTrack the sum of the \u2018importance\u2019 of recent memories.\nTriggers reflection when it reaches reflection_threshold.\nclear() \u2192 None[source]#\nClear memory contents.\nfield current_plan: List[str] = []#\nThe current plan of the agent.\nfetch_memories(observation: str, now: Optional[datetime.datetime] = None) \u2192 List[langchain.schema.Document][source]#\nFetch related memories.", "source": "https://python.langchain.com/en/latest/reference/modules/experimental.html"}
+{"id": "b7a7c0737c44-4", "text": "Fetch related memories.\nfield importance_weight: float = 0.15#\nHow much weight to assign the memory importance.\nfield llm: langchain.base_language.BaseLanguageModel [Required]#\nThe core language model.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, str][source]#\nReturn key-value pairs given the text input to the chain.\nfield memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever [Required]#\nThe retriever to fetch related memories.\nproperty memory_variables: List[str]#\nInput keys this memory class will load dynamically.\npause_to_reflect(now: Optional[datetime.datetime] = None) \u2192 List[str][source]#\nReflect on recent observations and generate \u2018insights\u2019.\nfield reflection_threshold: Optional[float] = None#\nWhen aggregate_importance exceeds reflection_threshold, stop to reflect.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, Any]) \u2192 None[source]#\nSave the context of this model run to memory.\nprevious\nUtilities\nnext\nIntegrations\n Contents\n \nAutonomous Agents\nGenerative Agents\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/experimental.html"}
+{"id": "565a17c6ddca-0", "text": ".rst\n.pdf\nChat Models\nChat Models#\npydantic model langchain.chat_models.AzureChatOpenAI[source]#\nWrapper around Azure OpenAI Chat Completion API. To use this class you\nmust have a deployed model on Azure OpenAI. Use deployment_name in the\nconstructor to refer to the \u201cModel deployment name\u201d in the Azure portal.\nIn addition, you should have the openai python package installed, and the\nfollowing environment variables set or passed in constructor in lower case:\n- OPENAI_API_TYPE (default: azure)\n- OPENAI_API_KEY\n- OPENAI_API_BASE\n- OPENAI_API_VERSION\n- OPENAI_PROXY\nFor exmaple, if you have gpt-35-turbo deployed, with the deployment name\n35-turbo-dev, the constructor should look like:\nAzureChatOpenAI(\n deployment_name=\"35-turbo-dev\",\n openai_api_version=\"2023-03-15-preview\",\n)\nBe aware the API version may change.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nfield deployment_name: str = ''#\nfield openai_api_base: str = ''#\nfield openai_api_key: str = ''#\nBase URL path for API requests,\nleave blank if not using a proxy or service emulator.\nfield openai_api_type: str = 'azure'#\nfield openai_api_version: str = ''#\nfield openai_organization: str = ''#\nfield openai_proxy: str = ''#\npydantic model langchain.chat_models.ChatAnthropic[source]#\nWrapper around Anthropic\u2019s large language model.\nTo use, you should have the anthropic python package installed, and the\nenvironment variable ANTHROPIC_API_KEY set with your API key, or pass", "source": "https://python.langchain.com/en/latest/reference/modules/chat_models.html"}
+{"id": "565a17c6ddca-1", "text": "environment variable ANTHROPIC_API_KEY set with your API key, or pass\nit as a named parameter to the constructor.\nExample\nimport anthropic\nfrom langchain.llms import Anthropic\nmodel = ChatAnthropic(model=\"\", anthropic_api_key=\"my-api-key\")\nget_num_tokens(text: str) \u2192 int[source]#\nCalculate number of tokens.\npydantic model langchain.chat_models.ChatGooglePalm[source]#\nWrapper around Google\u2019s PaLM Chat API.\nTo use you must have the google.generativeai Python package installed and\neither:\nThe GOOGLE_API_KEY` environment varaible set with your API key, or\nPass your API key using the google_api_key kwarg to the ChatGoogle\nconstructor.\nExample\nfrom langchain.chat_models import ChatGooglePalm\nchat = ChatGooglePalm()\nfield google_api_key: Optional[str] = None#\nfield model_name: str = 'models/chat-bison-001'#\nModel name to use.\nfield n: int = 1#\nNumber of chat completions to generate for each prompt. Note that the API may\nnot return the full n completions if duplicates are generated.\nfield temperature: Optional[float] = None#\nRun inference with this temperature. Must by in the closed\ninterval [0.0, 1.0].\nfield top_k: Optional[int] = None#\nDecode using top-k sampling: consider the set of top_k most probable tokens.\nMust be positive.\nfield top_p: Optional[float] = None#\nDecode using nucleus sampling: consider the smallest set of tokens whose\nprobability sum is at least top_p. Must be in the closed interval [0.0, 1.0].\npydantic model langchain.chat_models.ChatOpenAI[source]#\nWrapper around OpenAI Chat large language models.", "source": "https://python.langchain.com/en/latest/reference/modules/chat_models.html"}
+{"id": "565a17c6ddca-2", "text": "Wrapper around OpenAI Chat large language models.\nTo use, you should have the openai python package installed, and the\nenvironment variable OPENAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.chat_models import ChatOpenAI\nopenai = ChatOpenAI(model_name=\"gpt-3.5-turbo\")\nfield max_retries: int = 6#\nMaximum number of retries to make when generating.\nfield max_tokens: Optional[int] = None#\nMaximum number of tokens to generate.\nfield model_kwargs: Dict[str, Any] [Optional]#\nHolds any model parameters valid for create call not explicitly specified.\nfield model_name: str = 'gpt-3.5-turbo' (alias 'model')#\nModel name to use.\nfield n: int = 1#\nNumber of chat completions to generate for each prompt.\nfield openai_api_base: Optional[str] = None#\nfield openai_api_key: Optional[str] = None#\nBase URL path for API requests,\nleave blank if not using a proxy or service emulator.\nfield openai_organization: Optional[str] = None#\nfield openai_proxy: Optional[str] = None#\nfield request_timeout: Optional[Union[float, Tuple[float, float]]] = None#\nTimeout for requests to OpenAI completion API. Default is 600 seconds.\nfield streaming: bool = False#\nWhether to stream the results or not.\nfield temperature: float = 0.7#\nWhat sampling temperature to use.\ncompletion_with_retry(**kwargs: Any) \u2192 Any[source]#\nUse tenacity to retry the completion call.", "source": "https://python.langchain.com/en/latest/reference/modules/chat_models.html"}
+{"id": "565a17c6ddca-3", "text": "Use tenacity to retry the completion call.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int[source]#\nCalculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.\nOfficial documentation: openai/openai-cookbook\nmain/examples/How_to_format_inputs_to_ChatGPT_models.ipynb\nget_token_ids(text: str) \u2192 List[int][source]#\nGet the tokens present in the text with tiktoken package.\npydantic model langchain.chat_models.ChatVertexAI[source]#\nWrapper around Vertex AI large language models.\nfield model_name: str = 'chat-bison'#\nModel name to use.\npydantic model langchain.chat_models.PromptLayerChatOpenAI[source]#\nWrapper around OpenAI Chat large language models and PromptLayer.\nTo use, you should have the openai and promptlayer python\npackage installed, and the environment variable OPENAI_API_KEY\nand PROMPTLAYER_API_KEY set with your openAI API key and\npromptlayer key respectively.\nAll parameters that can be passed to the OpenAI LLM can also\nbe passed here. The PromptLayerChatOpenAI adds to optional\nParameters\npl_tags \u2013 List of strings to tag the request with.\nreturn_pl_id \u2013 If True, the PromptLayer request ID will be\nreturned in the generation_info field of the\nGeneration object.\nExample\nfrom langchain.chat_models import PromptLayerChatOpenAI\nopenai = PromptLayerChatOpenAI(model_name=\"gpt-3.5-turbo\")\nfield pl_tags: Optional[List[str]] = None#\nfield return_pl_id: Optional[bool] = False#\nprevious\nModels\nnext\nEmbeddings\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/reference/modules/chat_models.html"}
+{"id": "565a17c6ddca-4", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/chat_models.html"}
+{"id": "8ebc6ecad731-0", "text": ".rst\n.pdf\nChains\nChains#\nChains are easily reusable components which can be linked together.\npydantic model langchain.chains.APIChain[source]#\nChain that makes API calls and summarizes the responses to answer a question.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_api_answer_prompt \u00bb all fields\nvalidate_api_request_prompt \u00bb all fields\nfield api_answer_chain: LLMChain [Required]#\nfield api_docs: str [Required]#\nfield api_request_chain: LLMChain [Required]#\nfield requests_wrapper: TextRequestsWrapper [Required]#", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-1", "text": "field requests_wrapper: TextRequestsWrapper [Required]#\nclassmethod from_llm_and_api_docs(llm: langchain.base_language.BaseLanguageModel, api_docs: str, headers: Optional[dict] = None, api_url_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\\n{api_docs}\\nUsing this documentation, generate the full API url to call for answering the user question.\\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\\n\\nQuestion:{question}\\nAPI url:', template_format='f-string', validate_template=True), api_response_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question', 'api_url', 'api_response'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\\n{api_docs}\\nUsing this documentation, generate the full API url to call for answering the user question.\\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\\n\\nQuestion:{question}\\nAPI url: {api_url}\\n\\nHere is the response from the API:\\n\\n{api_response}\\n\\nSummarize this response to answer the original question.\\n\\nSummary:', template_format='f-string', validate_template=True), **kwargs: Any) \u2192 langchain.chains.api.base.APIChain[source]#\nLoad chain from just an LLM and the api docs.\npydantic model langchain.chains.AnalyzeDocumentChain[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-2", "text": "pydantic model langchain.chains.AnalyzeDocumentChain[source]#\nChain that splits documents, then analyzes it in pieces.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield combine_docs_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain [Required]#\nfield text_splitter: langchain.text_splitter.TextSplitter [Optional]#\npydantic model langchain.chains.ChatVectorDBChain[source]#\nChain for chatting with a vector database.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield search_kwargs: dict [Optional]#\nfield top_k_docs_for_context: int = 4#\nfield vectorstore: VectorStore [Required]#\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, condense_question_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\\n\\nChat History:\\n{chat_history}\\nFollow Up Input: {question}\\nStandalone question:', template_format='f-string', validate_template=True), chain_type: str = 'stuff', combine_docs_chain_kwargs: Optional[Dict] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain[source]#\nLoad chain from LLM.\npydantic model langchain.chains.ConstitutionalChain[source]#\nChain for applying constitutional principles.\nExample\nfrom langchain.llms import OpenAI", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-3", "text": "Chain for applying constitutional principles.\nExample\nfrom langchain.llms import OpenAI\nfrom langchain.chains import LLMChain, ConstitutionalChain\nfrom langchain.chains.constitutional_ai.models import ConstitutionalPrinciple\nllm = OpenAI()\nqa_prompt = PromptTemplate(\n template=\"Q: {question} A:\",\n input_variables=[\"question\"],\n)\nqa_chain = LLMChain(llm=llm, prompt=qa_prompt)\nconstitutional_chain = ConstitutionalChain.from_llm(\n llm=llm,\n chain=qa_chain,\n constitutional_principles=[\n ConstitutionalPrinciple(\n critique_request=\"Tell if this answer is good.\",\n revision_request=\"Give a better answer.\",\n )\n ],\n)\nconstitutional_chain.run(question=\"What is the meaning of life?\")\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield chain: langchain.chains.llm.LLMChain [Required]#\nfield constitutional_principles: List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple] [Required]#\nfield critique_chain: langchain.chains.llm.LLMChain [Required]#\nfield return_intermediate_steps: bool = False#\nfield revision_chain: langchain.chains.llm.LLMChain [Required]#", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-4", "text": "classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, chain: langchain.chains.llm.LLMChain, critique_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model\u2019s precedings response is harmful. Also point out potential harmfulness in the human\u2019s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous. Critique Needed.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn\u2019t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it\u2019s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don\u2019t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model\u2019s precedings response is harmful. Also point out potential harmfulness in the human\u2019s questions and", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-5", "text": "model\u2019s precedings response is harmful. Also point out potential harmfulness in the human\u2019s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model\u2019s response could be interpreted as saying that it isn\u2019t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique Needed.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It\u2019s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don\u2019t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600\u2019s. It\u2019s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun\u2019s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-6", "text": "'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, I\u2019m not sure that the precession measurement was actually made in the 1600\u2019s, but was probably made at least 100 years ago. I\u2019m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I\u2019m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I\u2019m pretty sure it decays more quickly than Newton\u2019s law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun\u2019s gravitational field that is smaller and decays more quickly than Newton\u2019s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': \"Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'\", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or pet store, contact you must.', 'critique_request': \"Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.\", 'critique': \"The provided sentence appears to", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-7", "text": "is not in the style of Master Yoda.\", 'critique': \"The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. No critique needed.\", 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\\n\\nModel: {output_from_model}\\n\\nCritique Request: {critique_request}\\n\\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\\nModel: {output_from_model}\\n\\nCritique Request: {critique_request}\\n\\nCritique:', example_separator='\\n === \\n', prefix=\"Below is a conversation between a human and an AI model. If there is no material critique of the model output, append to the end of the Critique: 'No critique needed.' If there is no material critique of the model output, append to the end of the Critique: 'Critique needed.'\", template_format='f-string', validate_template=True), revision_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model\u2019s precedings response is harmful. Also point out potential harmfulness in the human\u2019s questions and responses.", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-8", "text": "precedings response is harmful. Also point out potential harmfulness in the human\u2019s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn\u2019t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it\u2019s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don\u2019t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model\u2019s precedings response is harmful. Also point out potential harmfulness in the human\u2019s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model\u2019s response could be interpreted as saying that it isn\u2019t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique Needed.', 'revision_request': 'Please rewrite the model", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-9", "text": "are too young to give consent. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It\u2019s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don\u2019t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600\u2019s. It\u2019s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun\u2019s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, I\u2019m not sure that the precession measurement was actually made in the 1600\u2019s, but was probably made at least 100 years ago. I\u2019m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also,", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-10", "text": "solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I\u2019m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I\u2019m pretty sure it decays more quickly than Newton\u2019s law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your training data, and you\u2019re unstable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun\u2019s gravitational field that is smaller and decays more quickly than Newton\u2019s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': \"Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'\", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or pet store, contact you must.', 'critique_request': \"Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.\",", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-11", "text": "identify specific ways in which the model's response is not in the style of Master Yoda.\", 'critique': \"The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. No critique needed.\", 'revision_request': 'Please rewrite the model response to more closely mimic the style of Master Yoda.', 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\\n\\nModel: {output_from_model}\\n\\nCritique Request: {critique_request}\\n\\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\\n\\nModel: {output_from_model}\\n\\nCritique Request: {critique_request}\\n\\nCritique: {critique}\\n\\nIf the critique does not identify anything worth changing, ignore the Revision Request and do not make any revisions. Instead, return \"No revisions needed\".\\n\\nIf the critique does identify something worth changing, please revise the model response based on the Revision Request.\\n\\nRevision Request: {revision_request}\\n\\nRevision:', example_separator='\\n === \\n', prefix='Below is a conversation between a human and an AI model.', template_format='f-string', validate_template=True), **kwargs: Any) \u2192 langchain.chains.constitutional_ai.base.ConstitutionalChain[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-12", "text": "Create a chain from an LLM.\nclassmethod get_principles(names: Optional[List[str]] = None) \u2192 List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple][source]#\nproperty input_keys: List[str]#\nDefines the input keys.\nproperty output_keys: List[str]#\nDefines the output keys.\npydantic model langchain.chains.ConversationChain[source]#\nChain to have a conversation and load context from memory.\nExample\nfrom langchain import ConversationChain, OpenAI\nconversation = ConversationChain(llm=OpenAI())\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_prompt_input_variables \u00bb all fields\nfield memory: langchain.schema.BaseMemory [Optional]#\nDefault memory store.\nfield prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\\n\\nCurrent conversation:\\n{history}\\nHuman: {input}\\nAI:', template_format='f-string', validate_template=True)#\nDefault conversation prompt to use.\nproperty input_keys: List[str]#\nUse this since so some prompt vars come from history.\npydantic model langchain.chains.ConversationalRetrievalChain[source]#\nChain for chatting with an index.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield max_tokens_limit: Optional[int] = None#\nIf set, restricts the docs to return from store based on tokens, enforced only\nfor StuffDocumentChain\nfield retriever: BaseRetriever [Required]#\nIndex to connect to.", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-13", "text": "field retriever: BaseRetriever [Required]#\nIndex to connect to.\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, retriever: langchain.schema.BaseRetriever, condense_question_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\\n\\nChat History:\\n{chat_history}\\nFollow Up Input: {question}\\nStandalone question:', template_format='f-string', validate_template=True), chain_type: str = 'stuff', verbose: bool = False, condense_question_llm: Optional[langchain.base_language.BaseLanguageModel] = None, combine_docs_chain_kwargs: Optional[Dict] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain[source]#\nLoad chain from LLM.\npydantic model langchain.chains.FlareChain[source]#\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield max_iter: int = 10#\nfield min_prob: float = 0.2#\nfield min_token_gap: int = 5#\nfield num_pad_tokens: int = 2#\nfield output_parser: FinishedOutputParser [Optional]#\nfield question_generator_chain: QuestionGeneratorChain [Required]#\nfield response_chain: _ResponseChain [Optional]#\nfield retriever: BaseRetriever [Required]#\nfield start_with_retrieval: bool = True#", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-14", "text": "field start_with_retrieval: bool = True#\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, max_generation_len: int = 32, **kwargs: Any) \u2192 langchain.chains.flare.base.FlareChain[source]#\nproperty input_keys: List[str]#\nInput keys this chain expects.\nproperty output_keys: List[str]#\nOutput keys this chain expects.\npydantic model langchain.chains.GraphCypherQAChain[source]#\nChain for question-answering against a graph by generating Cypher statements.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield cypher_generation_chain: LLMChain [Required]#\nfield graph: Neo4jGraph [Required]#\nfield qa_chain: LLMChain [Required]#\nfield return_direct: bool = False#\nWhether or not to return the result of querying the graph directly.\nfield return_intermediate_steps: bool = False#\nWhether or not to return the intermediate steps along with the final answer.\nfield top_k: int = 10#\nNumber of results to return from the query", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-15", "text": "field top_k: int = 10#\nNumber of results to return from the query\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, *, qa_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template=\"You are an assistant that helps to form nice and human understandable answers.\\nThe information part contains the provided information that you must use to construct an answer.\\nThe provided information is authorative, you must never doubt it or try to use your internal knowledge to correct it.\\nMake the answer sound as a response to the question. Do not mention that you based the result on the given information.\\nIf the provided information is empty, say that you don't know the answer.\\nInformation:\\n{context}\\n\\nQuestion: {question}\\nHelpful Answer:\", template_format='f-string', validate_template=True), cypher_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['schema', 'question'], output_parser=None, partial_variables={}, template='Task:Generate Cypher statement to query a graph database.\\nInstructions:\\nUse only the provided relationship types and properties in the schema.\\nDo not use any other relationship types or properties that are not provided.\\nSchema:\\n{schema}\\nNote: Do not include any explanations or apologies in your responses.\\nDo not respond to any questions that might ask anything else than for you to construct a Cypher statement.\\nDo not include any text except the generated Cypher statement.\\n\\nThe question is:\\n{question}', template_format='f-string', validate_template=True), **kwargs: Any) \u2192 langchain.chains.graph_qa.cypher.GraphCypherQAChain[source]#\nInitialize from LLM.\npydantic model langchain.chains.GraphQAChain[source]#\nChain for question-answering against a graph.\nValidators", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-16", "text": "Chain for question-answering against a graph.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield entity_extraction_chain: LLMChain [Required]#\nfield graph: NetworkxEntityGraph [Required]#\nfield qa_chain: LLMChain [Required]#\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, qa_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template=\"Use the following knowledge triplets to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\\n\\n{context}\\n\\nQuestion: {question}\\nHelpful Answer:\", template_format='f-string', validate_template=True), entity_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['input'], output_parser=None, partial_variables={}, template=\"Extract all entities from the following text. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\\n\\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return.\\n\\nEXAMPLE\\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\\nOutput: Langchain\\nEND OF EXAMPLE\\n\\nEXAMPLE\\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I'm working with Sam.\\nOutput: Langchain, Sam\\nEND OF EXAMPLE\\n\\nBegin!\\n\\n{input}\\nOutput:\", template_format='f-string', validate_template=True), **kwargs: Any) \u2192 langchain.chains.graph_qa.base.GraphQAChain[source]#\nInitialize from LLM.", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-17", "text": "Initialize from LLM.\npydantic model langchain.chains.HypotheticalDocumentEmbedder[source]#\nGenerate hypothetical document for query, and then embed that.\nBased on https://arxiv.org/abs/2212.10496\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield base_embeddings: Embeddings [Required]#\nfield llm_chain: LLMChain [Required]#\ncombine_embeddings(embeddings: List[List[float]]) \u2192 List[float][source]#\nCombine embeddings into final embeddings.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]#\nCall the base embeddings.\nembed_query(text: str) \u2192 List[float][source]#\nGenerate a hypothetical document and embedded it.\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, base_embeddings: langchain.embeddings.base.Embeddings, prompt_key: str, **kwargs: Any) \u2192 langchain.chains.hyde.base.HypotheticalDocumentEmbedder[source]#\nLoad and use LLMChain for a specific prompt key.\nproperty input_keys: List[str]#\nInput keys for Hyde\u2019s LLM chain.\nproperty output_keys: List[str]#\nOutput keys for Hyde\u2019s LLM chain.\npydantic model langchain.chains.LLMBashChain[source]#\nChain that interprets a prompt and executes bash code to perform bash operations.\nExample\nfrom langchain import LLMBashChain, OpenAI\nllm_bash = LLMBashChain.from_llm(OpenAI())\nValidators\nraise_deprecation \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_prompt \u00bb all fields\nfield llm: Optional[BaseLanguageModel] = None#\n[Deprecated] LLM wrapper to use.\nfield llm_chain: LLMChain [Required]#", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-18", "text": "field llm_chain: LLMChain [Required]#\nfield prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put \"#!/bin/bash\" in your answer. Make sure to reason step by step, using this format:\\n\\nQuestion: \"copy the files in the directory named \\'target\\' into a new directory at the same level as target called \\'myNewDirectory\\'\"\\n\\nI need to take the following actions:\\n- List all files in the directory\\n- Create a new directory\\n- Copy the files from the first directory into the second directory\\n```bash\\nls\\nmkdir myNewDirectory\\ncp -r target/* myNewDirectory\\n```\\n\\nThat is the format. Begin!\\n\\nQuestion: {question}', template_format='f-string', validate_template=True)#\n[Deprecated]", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-19", "text": "[Deprecated]\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put \"#!/bin/bash\" in your answer. Make sure to reason step by step, using this format:\\n\\nQuestion: \"copy the files in the directory named \\'target\\' into a new directory at the same level as target called \\'myNewDirectory\\'\"\\n\\nI need to take the following actions:\\n- List all files in the directory\\n- Create a new directory\\n- Copy the files from the first directory into the second directory\\n```bash\\nls\\nmkdir myNewDirectory\\ncp -r target/* myNewDirectory\\n```\\n\\nThat is the format. Begin!\\n\\nQuestion: {question}', template_format='f-string', validate_template=True), **kwargs: Any) \u2192 langchain.chains.llm_bash.base.LLMBashChain[source]#\npydantic model langchain.chains.LLMChain[source]#\nChain to run queries against LLMs.\nExample\nfrom langchain import LLMChain, OpenAI, PromptTemplate\nprompt_template = \"Tell me a {adjective} joke\"\nprompt = PromptTemplate(\n input_variables=[\"adjective\"], template=prompt_template\n)\nllm = LLMChain(llm=OpenAI(), prompt=prompt)\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield llm: BaseLanguageModel [Required]#\nfield prompt: BasePromptTemplate [Required]#\nPrompt object to use.", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-20", "text": "field prompt: BasePromptTemplate [Required]#\nPrompt object to use.\nasync aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 List[Dict[str, str]][source]#\nUtilize the LLM generate method for speed gains.\nasync aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]][source]#\nCall apply and then parse the results.\nasync agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[langchain.callbacks.manager.AsyncCallbackManagerForChainRun] = None) \u2192 langchain.schema.LLMResult[source]#\nGenerate LLM result from inputs.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 List[Dict[str, str]][source]#\nUtilize the LLM generate method for speed gains.\napply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]][source]#\nCall apply and then parse the results.\nasync apredict(callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 str[source]#\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-21", "text": "Parameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = llm.predict(adjective=\"funny\")\nasync apredict_and_parse(callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, str]][source]#\nCall apredict and then parse the results.\nasync aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[langchain.callbacks.manager.AsyncCallbackManagerForChainRun] = None) \u2192 Tuple[List[langchain.schema.PromptValue], Optional[List[str]]][source]#\nPrepare prompts from inputs.\ncreate_outputs(response: langchain.schema.LLMResult) \u2192 List[Dict[str, str]][source]#\nCreate outputs from response.\nclassmethod from_string(llm: langchain.base_language.BaseLanguageModel, template: str) \u2192 langchain.chains.base.Chain[source]#\nCreate LLMChain from LLM and template.\ngenerate(input_list: List[Dict[str, Any]], run_manager: Optional[langchain.callbacks.manager.CallbackManagerForChainRun] = None) \u2192 langchain.schema.LLMResult[source]#\nGenerate LLM result from inputs.\npredict(callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 str[source]#\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = llm.predict(adjective=\"funny\")", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-22", "text": "Completion from LLM.\nExample\ncompletion = llm.predict(adjective=\"funny\")\npredict_and_parse(callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, Any]][source]#\nCall predict and then parse the results.\nprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[langchain.callbacks.manager.CallbackManagerForChainRun] = None) \u2192 Tuple[List[langchain.schema.PromptValue], Optional[List[str]]][source]#\nPrepare prompts from inputs.\npydantic model langchain.chains.LLMCheckerChain[source]#\nChain for question-answering with self-verification.\nExample\nfrom langchain import OpenAI, LLMCheckerChain\nllm = OpenAI(temperature=0.7)\nchecker_chain = LLMCheckerChain.from_llm(llm)\nValidators\nraise_deprecation \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\\n{assertions}\\nFor each assertion, determine whether it is true or false. If it is false, explain why.\\n\\n', template_format='f-string', validate_template=True)#\n[Deprecated]\nfield create_draft_answer_prompt: PromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\\n\\n', template_format='f-string', validate_template=True)#\n[Deprecated]", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-23", "text": "[Deprecated]\nfield list_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\\n{statement}\\nMake a bullet point list of the assumptions you made when producing the above statement.\\n\\n', template_format='f-string', validate_template=True)#\n[Deprecated]\nfield llm: Optional[BaseLanguageModel] = None#\n[Deprecated] LLM wrapper to use.\nfield question_to_checked_assertions_chain: SequentialChain [Required]#\nfield revised_answer_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template=\"{checked_assertions}\\n\\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\\n\\nAnswer:\", template_format='f-string', validate_template=True)#\n[Deprecated] Prompt to use when questioning the documents.", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-24", "text": "[Deprecated] Prompt to use when questioning the documents.\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, create_draft_answer_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\\n\\n', template_format='f-string', validate_template=True), list_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\\n{statement}\\nMake a bullet point list of the assumptions you made when producing the above statement.\\n\\n', template_format='f-string', validate_template=True), check_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\\n{assertions}\\nFor each assertion, determine whether it is true or false. If it is false, explain why.\\n\\n', template_format='f-string', validate_template=True), revised_answer_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template=\"{checked_assertions}\\n\\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\\n\\nAnswer:\", template_format='f-string', validate_template=True), **kwargs: Any) \u2192 langchain.chains.llm_checker.base.LLMCheckerChain[source]#\npydantic model langchain.chains.LLMMathChain[source]#\nChain that interprets a prompt and executes python code to do math.\nExample\nfrom langchain import LLMMathChain, OpenAI\nllm_math = LLMMathChain.from_llm(OpenAI())\nValidators\nraise_deprecation \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-25", "text": "raise_deprecation \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield llm: Optional[BaseLanguageModel] = None#\n[Deprecated] LLM wrapper to use.\nfield llm_chain: LLMChain [Required]#\nfield prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expression that can be executed using Python\\'s numexpr library. Use the output of running this code to answer the question.\\n\\nQuestion: ${{Question with math problem.}}\\n```text\\n${{single line mathematical expression that solves the problem}}\\n```\\n...numexpr.evaluate(text)...\\n```output\\n${{Output of running the code}}\\n```\\nAnswer: ${{Answer}}\\n\\nBegin.\\n\\nQuestion: What is 37593 * 67?\\n```text\\n37593 * 67\\n```\\n...numexpr.evaluate(\"37593 * 67\")...\\n```output\\n2518731\\n```\\nAnswer: 2518731\\n\\nQuestion: 37593^(1/5)\\n```text\\n37593**(1/5)\\n```\\n...numexpr.evaluate(\"37593**(1/5)\")...\\n```output\\n8.222831614237718\\n```\\nAnswer: 8.222831614237718\\n\\nQuestion: {question}\\n', template_format='f-string', validate_template=True)#\n[Deprecated] Prompt to use to translate to python if necessary.", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-26", "text": "[Deprecated] Prompt to use to translate to python if necessary.\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expression that can be executed using Python\\'s numexpr library. Use the output of running this code to answer the question.\\n\\nQuestion: ${{Question with math problem.}}\\n```text\\n${{single line mathematical expression that solves the problem}}\\n```\\n...numexpr.evaluate(text)...\\n```output\\n${{Output of running the code}}\\n```\\nAnswer: ${{Answer}}\\n\\nBegin.\\n\\nQuestion: What is 37593 * 67?\\n```text\\n37593 * 67\\n```\\n...numexpr.evaluate(\"37593 * 67\")...\\n```output\\n2518731\\n```\\nAnswer: 2518731\\n\\nQuestion: 37593^(1/5)\\n```text\\n37593**(1/5)\\n```\\n...numexpr.evaluate(\"37593**(1/5)\")...\\n```output\\n8.222831614237718\\n```\\nAnswer: 8.222831614237718\\n\\nQuestion: {question}\\n', template_format='f-string', validate_template=True), **kwargs: Any) \u2192 langchain.chains.llm_math.base.LLMMathChain[source]#\npydantic model langchain.chains.LLMRequestsChain[source]#\nChain that hits a URL and then uses an LLM to parse results.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield llm_chain: LLMChain [Required]#\nfield requests_wrapper: TextRequestsWrapper [Optional]#", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-27", "text": "field requests_wrapper: TextRequestsWrapper [Optional]#\nfield text_length: int = 8000#\npydantic model langchain.chains.LLMSummarizationCheckerChain[source]#\nChain for question-answering with self-verification.\nExample\nfrom langchain import OpenAI, LLMSummarizationCheckerChain\nllm = OpenAI(temperature=0.0)\nchecker_chain = LLMSummarizationCheckerChain.from_llm(llm)\nValidators\nraise_deprecation \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield are_all_true_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\\n\\nIf all of the assertions are true, return \"True\". If any of the assertions are false, return \"False\".\\n\\nHere are some examples:\\n===\\n\\nChecked Assertions: \"\"\"\\n- The sky is red: False\\n- Water is made of lava: False\\n- The sun is a star: True\\n\"\"\"\\nResult: False\\n\\n===\\n\\nChecked Assertions: \"\"\"\\n- The sky is blue: True\\n- Water is wet: True\\n- The sun is a star: True\\n\"\"\"\\nResult: True\\n\\n===\\n\\nChecked Assertions: \"\"\"\\n- The sky is blue - True\\n- Water is made of lava- False\\n- The sun is a star - True\\n\"\"\"\\nResult: False\\n\\n===\\n\\nChecked Assertions:\"\"\"\\n{checked_assertions}\\n\"\"\"\\nResult:', template_format='f-string', validate_template=True)#\n[Deprecated]", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-28", "text": "[Deprecated]\nfield check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\\n\\nHere is a bullet point list of facts:\\n\"\"\"\\n{assertions}\\n\"\"\"\\n\\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output \"Undetermined\".\\nIf the fact is false, explain why.\\n\\n', template_format='f-string', validate_template=True)#\n[Deprecated]\nfield create_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\\n\\nFormat your output as a bulleted list.\\n\\nText:\\n\"\"\"\\n{summary}\\n\"\"\"\\n\\nFacts:', template_format='f-string', validate_template=True)#\n[Deprecated]\nfield llm: Optional[BaseLanguageModel] = None#\n[Deprecated] LLM wrapper to use.\nfield max_checks: int = 2#\nMaximum number of times to check the assertions. Default to double-checking.", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-29", "text": "Maximum number of times to check the assertions. Default to double-checking.\nfield revised_summary_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.\\n\\nChecked Assertions:\\n\"\"\"\\n{checked_assertions}\\n\"\"\"\\n\\nOriginal Summary:\\n\"\"\"\\n{summary}\\n\"\"\"\\n\\nUsing these checked assertions, rewrite the original summary to be completely true.\\n\\nThe output should have the same structure and formatting as the original summary.\\n\\nSummary:', template_format='f-string', validate_template=True)#\n[Deprecated]\nfield sequential_chain: SequentialChain [Required]#", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-30", "text": "classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, create_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\\n\\nFormat your output as a bulleted list.\\n\\nText:\\n\"\"\"\\n{summary}\\n\"\"\"\\n\\nFacts:', template_format='f-string', validate_template=True), check_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\\n\\nHere is a bullet point list of facts:\\n\"\"\"\\n{assertions}\\n\"\"\"\\n\\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output \"Undetermined\".\\nIf the fact is false, explain why.\\n\\n', template_format='f-string', validate_template=True), revised_summary_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.\\n\\nChecked Assertions:\\n\"\"\"\\n{checked_assertions}\\n\"\"\"\\n\\nOriginal Summary:\\n\"\"\"\\n{summary}\\n\"\"\"\\n\\nUsing these checked assertions, rewrite the original summary to be completely true.\\n\\nThe output should have the same structure and formatting as the original summary.\\n\\nSummary:', template_format='f-string', validate_template=True), are_all_true_prompt: langchain.prompts.prompt.PromptTemplate =", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-31", "text": "validate_template=True), are_all_true_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\\n\\nIf all of the assertions are true, return \"True\". If any of the assertions are false, return \"False\".\\n\\nHere are some examples:\\n===\\n\\nChecked Assertions: \"\"\"\\n- The sky is red: False\\n- Water is made of lava: False\\n- The sun is a star: True\\n\"\"\"\\nResult: False\\n\\n===\\n\\nChecked Assertions: \"\"\"\\n- The sky is blue: True\\n- Water is wet: True\\n- The sun is a star: True\\n\"\"\"\\nResult: True\\n\\n===\\n\\nChecked Assertions: \"\"\"\\n- The sky is blue - True\\n- Water is made of lava- False\\n- The sun is a star - True\\n\"\"\"\\nResult: False\\n\\n===\\n\\nChecked Assertions:\"\"\"\\n{checked_assertions}\\n\"\"\"\\nResult:', template_format='f-string', validate_template=True), verbose: bool = False, **kwargs: Any) \u2192 langchain.chains.llm_summarization_checker.base.LLMSummarizationCheckerChain[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-32", "text": "pydantic model langchain.chains.MapReduceChain[source]#\nMap-reduce chain.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield combine_documents_chain: BaseCombineDocumentsChain [Required]#\nChain to use to combine documents.\nfield text_splitter: TextSplitter [Required]#\nText splitter to use.\nclassmethod from_params(llm: langchain.base_language.BaseLanguageModel, prompt: langchain.prompts.base.BasePromptTemplate, text_splitter: langchain.text_splitter.TextSplitter, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, combine_chain_kwargs: Optional[Mapping[str, Any]] = None, reduce_chain_kwargs: Optional[Mapping[str, Any]] = None, **kwargs: Any) \u2192 langchain.chains.mapreduce.MapReduceChain[source]#\nConstruct a map-reduce chain that uses the chain for map and reduce.\npydantic model langchain.chains.NebulaGraphQAChain[source]#\nChain for question-answering against a graph by generating nGQL statements.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield graph: NebulaGraph [Required]#\nfield ngql_generation_chain: LLMChain [Required]#\nfield qa_chain: LLMChain [Required]#", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-33", "text": "classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, *, qa_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template=\"You are an assistant that helps to form nice and human understandable answers.\\nThe information part contains the provided information that you must use to construct an answer.\\nThe provided information is authorative, you must never doubt it or try to use your internal knowledge to correct it.\\nMake the answer sound as a response to the question. Do not mention that you based the result on the given information.\\nIf the provided information is empty, say that you don't know the answer.\\nInformation:\\n{context}\\n\\nQuestion: {question}\\nHelpful Answer:\", template_format='f-string', validate_template=True), ngql_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['schema', 'question'], output_parser=None, partial_variables={}, template=\"Task:Generate NebulaGraph Cypher statement to query a graph database.\\n\\nInstructions:\\n\\nFirst, generate cypher then convert it to NebulaGraph Cypher dialect(rather than standard):\\n1. it requires explicit label specification when referring to node properties: v.`Foo`.name\\n2. it uses double equals sign for comparison: `==` rather than `=`\\nFor instance:\\n```diff\\n< MATCH (p:person)-[:directed]->(m:movie) WHERE m.name = 'The Godfather II'\\n< RETURN p.name;\\n---\\n> MATCH (p:`person`)-[:directed]->(m:`movie`) WHERE m.`movie`.`name` == 'The Godfather II'\\n> RETURN p.`person`.`name`;\\n```\\n\\nUse only the provided relationship types and properties in the schema.\\nDo not use any other relationship types or properties that are not", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-34", "text": "types and properties in the schema.\\nDo not use any other relationship types or properties that are not provided.\\nSchema:\\n{schema}\\nNote: Do not include any explanations or apologies in your responses.\\nDo not respond to any questions that might ask anything else than for you to construct a Cypher statement.\\nDo not include any text except the generated Cypher statement.\\n\\nThe question is:\\n{question}\", template_format='f-string', validate_template=True), **kwargs: Any) \u2192 langchain.chains.graph_qa.nebulagraph.NebulaGraphQAChain[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-35", "text": "Initialize from LLM.\npydantic model langchain.chains.OpenAIModerationChain[source]#\nPass input through a moderation endpoint.\nTo use, you should have the openai python package installed, and the\nenvironment variable OPENAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.chains import OpenAIModerationChain\nmoderation = OpenAIModerationChain()\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield error: bool = False#\nWhether or not to error if bad content was found.\nfield model_name: Optional[str] = None#\nModeration model name to use.\nfield openai_api_key: Optional[str] = None#\nfield openai_organization: Optional[str] = None#\npydantic model langchain.chains.OpenAPIEndpointChain[source]#\nChain interacts with an OpenAPI endpoint using natural language.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield api_operation: APIOperation [Required]#\nfield api_request_chain: LLMChain [Required]#\nfield api_response_chain: Optional[LLMChain] = None#\nfield param_mapping: _ParamMapping [Required]#\nfield requests: Requests [Optional]#\nfield return_intermediate_steps: bool = False#\ndeserialize_json_input(serialized_args: str) \u2192 dict[source]#\nUse the serialized typescript dictionary.\nResolve the path, query params dict, and optional requestBody dict.", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-36", "text": "Resolve the path, query params dict, and optional requestBody dict.\nclassmethod from_api_operation(operation: langchain.tools.openapi.utils.api_models.APIOperation, llm: langchain.base_language.BaseLanguageModel, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, return_intermediate_steps: bool = False, raw_response: bool = False, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 langchain.chains.api.openapi.chain.OpenAPIEndpointChain[source]#\nCreate an OpenAPIEndpointChain from an operation and a spec.\nclassmethod from_url_and_method(spec_url: str, path: str, method: str, llm: langchain.base_language.BaseLanguageModel, requests: Optional[langchain.requests.Requests] = None, return_intermediate_steps: bool = False, **kwargs: Any) \u2192 langchain.chains.api.openapi.chain.OpenAPIEndpointChain[source]#\nCreate an OpenAPIEndpoint from a spec at the specified url.\npydantic model langchain.chains.PALChain[source]#\nImplements Program-Aided Language Models.\nValidators\nraise_deprecation \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield get_answer_expr: str = 'print(solution())'#\nfield llm: Optional[BaseLanguageModel] = None#\n[Deprecated]\nfield llm_chain: LLMChain [Required]#", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-37", "text": "field prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\"\"\"\\n\u00a0\u00a0\u00a0 money_initial = 23\\n\u00a0\u00a0\u00a0 bagels = 5\\n\u00a0\u00a0\u00a0 bagel_cost = 3\\n\u00a0\u00a0\u00a0 money_spent = bagels * bagel_cost\\n\u00a0\u00a0\u00a0 money_left = money_initial - money_spent\\n\u00a0\u00a0\u00a0 result = money_left\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\"\"\"\\n\u00a0\u00a0\u00a0 golf_balls_initial = 58\\n\u00a0\u00a0\u00a0 golf_balls_lost_tuesday = 23\\n\u00a0\u00a0\u00a0 golf_balls_lost_wednesday = 2\\n\u00a0\u00a0\u00a0 golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday\\n\u00a0\u00a0\u00a0 result = golf_balls_left\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"There were nine computers in the server room. Five more computers were installed", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-38", "text": "solution():\\n\u00a0\u00a0\u00a0 \"\"\"There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\"\"\"\\n\u00a0\u00a0\u00a0 computers_initial = 9\\n\u00a0\u00a0\u00a0 computers_per_day = 5\\n\u00a0\u00a0\u00a0 num_days = 4\u00a0 # 4 days between monday and thursday\\n\u00a0\u00a0\u00a0 computers_added = computers_per_day * num_days\\n\u00a0\u00a0\u00a0 computers_total = computers_initial + computers_added\\n\u00a0\u00a0\u00a0 result = computers_total\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\"\"\"\\n\u00a0\u00a0\u00a0 toys_initial = 5\\n\u00a0\u00a0\u00a0 mom_toys = 2\\n\u00a0\u00a0\u00a0 dad_toys = 2\\n\u00a0\u00a0\u00a0 total_received = mom_toys + dad_toys\\n\u00a0\u00a0\u00a0 total_toys = toys_initial + total_received\\n\u00a0\u00a0\u00a0 result = total_toys\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\"\"\"\\n\u00a0\u00a0\u00a0 jason_lollipops_initial = 20\\n\u00a0\u00a0\u00a0 jason_lollipops_after = 12\\n\u00a0\u00a0\u00a0 denny_lollipops = jason_lollipops_initial -", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-39", "text": "= 12\\n\u00a0\u00a0\u00a0 denny_lollipops = jason_lollipops_initial - jason_lollipops_after\\n\u00a0\u00a0\u00a0 result = denny_lollipops\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\"\"\"\\n\u00a0\u00a0\u00a0 leah_chocolates = 32\\n\u00a0\u00a0\u00a0 sister_chocolates = 42\\n\u00a0\u00a0\u00a0 total_chocolates = leah_chocolates + sister_chocolates\\n\u00a0\u00a0\u00a0 chocolates_eaten = 35\\n\u00a0\u00a0\u00a0 chocolates_left = total_chocolates - chocolates_eaten\\n\u00a0\u00a0\u00a0 result = chocolates_left\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\"\"\"\\n\u00a0\u00a0\u00a0 cars_initial = 3\\n\u00a0\u00a0\u00a0 cars_arrived = 2\\n\u00a0\u00a0\u00a0 total_cars = cars_initial + cars_arrived\\n\u00a0\u00a0\u00a0 result = total_cars\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"There are 15 trees in the grove. Grove workers will plant trees in the grove today. After", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-40", "text": "15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\"\"\"\\n\u00a0\u00a0\u00a0 trees_initial = 15\\n\u00a0\u00a0\u00a0 trees_after = 21\\n\u00a0\u00a0\u00a0 trees_added = trees_after - trees_initial\\n\u00a0\u00a0\u00a0 result = trees_added\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: {question}\\n\\n# solution in Python:\\n\\n\\n', template_format='f-string', validate_template=True)#", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-41", "text": "[Deprecated]\nfield python_globals: Optional[Dict[str, Any]] = None#\nfield python_locals: Optional[Dict[str, Any]] = None#\nfield return_intermediate_steps: bool = False#\nfield stop: str = '\\n\\n'#\nclassmethod from_colored_object_prompt(llm: langchain.base_language.BaseLanguageModel, **kwargs: Any) \u2192 langchain.chains.pal.base.PALChain[source]#\nLoad PAL from colored object prompt.\nclassmethod from_math_prompt(llm: langchain.base_language.BaseLanguageModel, **kwargs: Any) \u2192 langchain.chains.pal.base.PALChain[source]#\nLoad PAL from math prompt.\npydantic model langchain.chains.QAGenerationChain[source]#\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield input_key: str = 'text'#\nfield k: Optional[int] = None#\nfield llm_chain: LLMChain [Required]#\nfield output_key: str = 'questions'#\nfield text_splitter: TextSplitter#\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, **kwargs: Any) \u2192 langchain.chains.qa_generation.base.QAGenerationChain[source]#\nproperty input_keys: List[str]#\nInput keys this chain expects.\nproperty output_keys: List[str]#\nOutput keys this chain expects.\npydantic model langchain.chains.QAWithSourcesChain[source]#\nQuestion answering with sources over documents.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_naming \u00bb all fields\npydantic model langchain.chains.RetrievalQA[source]#\nChain for question-answering against an index.\nExample", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
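+{"id": "example-sketch-palchain-0", "text": "Illustrative sketch (not from the reference page) of PALChain.from_math_prompt as documented above. Assumes OpenAI credentials are configured; the question is a placeholder.\n\nfrom langchain.llms import OpenAI\nfrom langchain.chains import PALChain\n\n# Low temperature keeps the generated solution() function deterministic.\nllm = OpenAI(temperature=0)\npal_chain = PALChain.from_math_prompt(llm)\n# The chain writes a small Python program (like the few-shot examples above) and executes it.\nprint(pal_chain.run(\"The cafeteria had 23 apples. It used 20 and bought 6 more. How many apples are there now?\"))", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}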
+{"id": "8ebc6ecad731-42", "text": "Chain for question-answering against an index.\nExample\nfrom langchain.llms import OpenAI\nfrom langchain.chains import RetrievalQA\nfrom langchain.vectorstores import FAISS\nfrom langchain.vectorstores.base import VectorStoreRetriever\nretriever = VectorStoreRetriever(vectorstore=FAISS(...))\nretrievalQA = RetrievalQA.from_llm(llm=OpenAI(), retriever=retriever)\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield retriever: BaseRetriever [Required]#\npydantic model langchain.chains.RetrievalQAWithSourcesChain[source]#\nQuestion-answering with sources over an index.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_naming \u00bb all fields\nfield max_tokens_limit: int = 3375#\nRestrict the docs to return from store based on tokens,\nenforced only for StuffDocumentsChain and if reduce_k_below_max_tokens is set to true\nfield reduce_k_below_max_tokens: bool = False#\nReduce the number of results to return from store based on tokens limit\nfield retriever: langchain.schema.BaseRetriever [Required]#\nIndex to connect to.\npydantic model langchain.chains.SQLDatabaseChain[source]#\nChain for interacting with SQL Database.\nExample\nfrom langchain import SQLDatabaseChain, OpenAI, SQLDatabase\ndb = SQLDatabase(...)\ndb_chain = SQLDatabaseChain.from_llm(OpenAI(), db)\nValidators\nraise_deprecation \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield database: SQLDatabase [Required]#\nSQL Database to connect to.\nfield llm: Optional[BaseLanguageModel] = None#\n[Deprecated] LLM wrapper to use.", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
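+{"id": "example-sketch-retrievalqa-0", "text": "Illustrative end-to-end sketch of RetrievalQA (not from the reference page): assumes OpenAI credentials and the faiss package; the indexed text is a placeholder.\n\nfrom langchain.llms import OpenAI\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.vectorstores import FAISS\nfrom langchain.chains import RetrievalQA\n\n# Build a tiny index, then expose it as a retriever.\nvectorstore = FAISS.from_texts([\"LangChain is a framework for LLM-powered apps.\"], OpenAIEmbeddings())\nqa = RetrievalQA.from_llm(llm=OpenAI(), retriever=vectorstore.as_retriever())\nprint(qa.run(\"What is LangChain?\"))", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}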
+{"id": "8ebc6ecad731-43", "text": "[Deprecated] LLM wrapper to use.\nfield llm_chain: LLMChain [Required]#\nfield prompt: Optional[BasePromptTemplate] = None#\n[Deprecated] Prompt to use to translate natural language to SQL.\nfield query_checker_prompt: Optional[BasePromptTemplate] = None#\nThe prompt template that should be used by the query checker.\nfield return_direct: bool = False#\nWhether or not to return the result of querying the SQL table directly.\nfield return_intermediate_steps: bool = False#\nWhether or not to return the intermediate steps along with the final answer.\nfield top_k: int = 5#\nNumber of results to return from the query\nfield use_query_checker: bool = False#\nWhether or not the query checker tool should be used to attempt\nto fix the initial SQL from the LLM.\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, db: langchain.sql_database.SQLDatabase, prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, **kwargs: Any) \u2192 langchain.chains.sql_database.base.SQLDatabaseChain[source]#\npydantic model langchain.chains.SQLDatabaseSequentialChain[source]#\nChain for querying a SQL database, implemented as a sequential chain.\nThe chain is as follows:\n1. Based on the query, determine which tables to use.\n2. Based on those tables, call the normal SQL database chain.\nThis is useful in cases where the number of tables in the database is large.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield decider_chain: LLMChain [Required]#\nfield return_intermediate_steps: bool = False#\nfield sql_chain: SQLDatabaseChain [Required]#", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
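+{"id": "example-sketch-sqldatabasechain-0", "text": "Illustrative sketch of SQLDatabaseChain with the flags documented above (not from the reference page): the SQLite URI is a placeholder and OpenAI credentials are assumed.\n\nfrom langchain import OpenAI, SQLDatabase\nfrom langchain.chains import SQLDatabaseChain\n\ndb = SQLDatabase.from_uri(\"sqlite:///chinook.db\")  # placeholder database\nchain = SQLDatabaseChain.from_llm(\n    OpenAI(temperature=0),\n    db,\n    use_query_checker=True,  # ask the LLM to sanity-check its SQL first\n    return_intermediate_steps=True,  # surface the generated SQL alongside the answer\n)\nresult = chain(\"How many employees are there?\")\nprint(result[\"result\"])\nprint(result[\"intermediate_steps\"])", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}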
+{"id": "8ebc6ecad731-44", "text": "classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, database: langchain.sql_database.SQLDatabase, query_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['input', 'table_info', 'dialect', 'top_k'], output_parser=None, partial_variables={}, template='Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\\n\\nNever query for all the columns from a specific table, only ask for the few relevant columns given the question.\\n\\nPay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\\n\\nUse the following format:\\n\\nQuestion: Question here\\nSQLQuery: SQL Query to run\\nSQLResult: Result of the SQLQuery\\nAnswer: Final answer here\\n\\nOnly use the following tables:\\n{table_info}\\n\\nQuestion: {input}', template_format='f-string', validate_template=True), decider_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['query', 'table_names'], output_parser=CommaSeparatedListOutputParser(), partial_variables={}, template='Given the below input question and list of potential tables, output a comma separated list of the table names that may be necessary to answer this question.\\n\\nQuestion: {query}\\n\\nTable Names: {table_names}\\n\\nRelevant Table Names:', template_format='f-string', validate_template=True), **kwargs: Any) \u2192", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-45", "text": "Table Names:', template_format='f-string', validate_template=True), **kwargs: Any) \u2192 langchain.chains.sql_database.base.SQLDatabaseSequentialChain[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
+{"id": "8ebc6ecad731-46", "text": "Load the necessary chains.\npydantic model langchain.chains.SequentialChain[source]#\nChain where the outputs of one chain feed directly into the next.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_chains \u00bb all fields\nfield chains: List[langchain.chains.base.Chain] [Required]#\nfield input_variables: List[str] [Required]#\nfield return_all: bool = False#\npydantic model langchain.chains.SimpleSequentialChain[source]#\nSimple chain where the outputs of one step feed directly into the next.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_chains \u00bb all fields\nfield chains: List[langchain.chains.base.Chain] [Required]#\nfield strip_outputs: bool = False#\npydantic model langchain.chains.TransformChain[source]#\nChain that transforms chain output.\nExample\nfrom langchain import TransformChain\ntransform_chain = TransformChain(input_variables=[\"text\"],\n    output_variables=[\"entities\"], transform=func)\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield input_variables: List[str] [Required]#\nfield output_variables: List[str] [Required]#\nfield transform: Callable[[Dict[str, str]], Dict[str, str]] [Required]#\npydantic model langchain.chains.VectorDBQA[source]#\nChain for question-answering against a vector database.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_search_type \u00bb all fields\nfield k: int = 4#\nNumber of documents to query for.\nfield search_kwargs: Dict[str, Any] [Optional]#\nExtra search args.\nfield search_type: str = 'similarity'#\nSearch type to use over vectorstore. similarity or mmr.\nfield vectorstore: VectorStore [Required]#", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
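+{"id": "example-sketch-transformchain-0", "text": "Illustrative sketch wiring TransformChain into SimpleSequentialChain as documented above (not from the reference page): the transform function and prompt are placeholders; OpenAI credentials are assumed.\n\nfrom langchain.chains import LLMChain, SimpleSequentialChain, TransformChain\nfrom langchain.llms import OpenAI\nfrom langchain.prompts import PromptTemplate\n\ndef clean_text(inputs: dict) -> dict:\n    # A transform is just a dict-to-dict callable.\n    return {\"clean_text\": inputs[\"text\"].strip()}\n\ntransform = TransformChain(input_variables=[\"text\"], output_variables=[\"clean_text\"], transform=clean_text)\nsummarize = LLMChain(llm=OpenAI(), prompt=PromptTemplate(input_variables=[\"clean_text\"], template=\"Summarize: {clean_text}\"))\n# SimpleSequentialChain pipes the single output of each step into the next.\npipeline = SimpleSequentialChain(chains=[transform, summarize])\nprint(pipeline.run(\"   Some raw text to summarize.   \"))", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}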
+{"id": "8ebc6ecad731-47", "text": "field vectorstore: VectorStore [Required]#\nVector Database to connect to.\npydantic model langchain.chains.VectorDBQAWithSourcesChain[source]#\nQuestion-answering with sources over a vector database.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_naming \u00bb all fields\nfield k: int = 4#\nNumber of results to return from store\nfield max_tokens_limit: int = 3375#\nRestrict the docs to return from store based on tokens,\nenforced only for StuffDocumentsChain and if reduce_k_below_max_tokens is set to true\nfield reduce_k_below_max_tokens: bool = False#\nReduce the number of results to return from store based on tokens limit\nfield search_kwargs: Dict[str, Any] [Optional]#\nExtra search args.\nfield vectorstore: langchain.vectorstores.base.VectorStore [Required]#\nVector Database to connect to.\nlangchain.chains.load_chain(path: Union[str, pathlib.Path], **kwargs: Any) \u2192 langchain.chains.base.Chain[source]#\nUnified method for loading a chain from LangChainHub or the local filesystem.", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}
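+{"id": "example-sketch-loadchain-0", "text": "Short sketch of langchain.chains.load_chain (not from the reference page): the lc:// hub path is assumed to exist on LangChainHub; the local path is a placeholder.\n\nfrom langchain.chains import load_chain\n\n# From LangChainHub (assumed path):\nchain = load_chain(\"lc://chains/llm-math/chain.json\")\n# Or from the local filesystem, e.g. a chain previously persisted with chain.save(\"my_chain.json\"):\n# chain = load_chain(\"my_chain.json\")\nprint(chain.run(\"What is 2 + 2?\"))", "source": "https://python.langchain.com/en/latest/reference/modules/chains.html"}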
+{"id": "1b2a375cac19-0", "text": "Document Transformers#\nTransform documents\npydantic model langchain.document_transformers.EmbeddingsRedundantFilter[source]#\nFilter that drops redundant documents by comparing their embeddings.\nfield embeddings: langchain.embeddings.base.Embeddings [Required]#\nEmbeddings to use for embedding document contents.\nfield similarity_fn: Callable#\nSimilarity function for comparing documents. Function expected to take as input\ntwo matrices (List[List[float]]) and return a matrix of scores where higher values\nindicate greater similarity.\nfield similarity_threshold: float = 0.95#\nThreshold for determining when two documents are similar enough\nto be considered redundant.\nasync atransform_documents(documents: Sequence[langchain.schema.Document], **kwargs: Any) \u2192 Sequence[langchain.schema.Document][source]#\nAsynchronously transform a list of documents.\ntransform_documents(documents: Sequence[langchain.schema.Document], **kwargs: Any) \u2192 Sequence[langchain.schema.Document][source]#\nFilter down documents.\nlangchain.document_transformers.get_stateful_documents(documents: Sequence[langchain.schema.Document]) \u2192 Sequence[langchain.document_transformers._DocumentWithState][source]#", "source": "https://python.langchain.com/en/latest/reference/modules/document_transformers.html"}
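+{"id": "example-sketch-redundantfilter-0", "text": "Illustrative sketch of EmbeddingsRedundantFilter.transform_documents (not from the reference page): assumes OpenAI credentials; the documents are placeholders.\n\nfrom langchain.document_transformers import EmbeddingsRedundantFilter\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.schema import Document\n\ndocs = [\n    Document(page_content=\"LangChain provides chains.\"),\n    Document(page_content=\"LangChain provides chains.\"),  # near-duplicate\n    Document(page_content=\"FAISS is a vector store.\"),\n]\nredundant_filter = EmbeddingsRedundantFilter(embeddings=OpenAIEmbeddings())  # similarity_threshold defaults to 0.95\nprint(len(redundant_filter.transform_documents(docs)))  # duplicates above the threshold are dropped", "source": "https://python.langchain.com/en/latest/reference/modules/document_transformers.html"}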
+{"id": "f491735809bd-0", "text": "Docstore#\nWrappers on top of docstores.\nclass langchain.docstore.InMemoryDocstore(_dict: Dict[str, langchain.schema.Document])[source]#\nSimple in-memory docstore in the form of a dict.\nadd(texts: Dict[str, langchain.schema.Document]) \u2192 None[source]#\nAdd texts to the in-memory dictionary.\nsearch(search: str) \u2192 Union[str, langchain.schema.Document][source]#\nSearch via direct lookup.\nclass langchain.docstore.Wikipedia[source]#\nWrapper around the Wikipedia API.\nsearch(search: str) \u2192 Union[str, langchain.schema.Document][source]#\nTry to search for a wiki page.\nIf page exists, return the page summary, and a PageWithLookups object.\nIf page does not exist, return similar entries.", "source": "https://python.langchain.com/en/latest/reference/modules/docstore.html"}
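+{"id": "example-sketch-docstore-0", "text": "Small sketch of the docstore wrappers above (not from the reference page): document contents are placeholders, and Wikipedia requires the wikipedia package to be installed.\n\nfrom langchain.docstore import InMemoryDocstore, Wikipedia\nfrom langchain.schema import Document\n\nstore = InMemoryDocstore({\"1\": Document(page_content=\"hello\")})\nstore.add({\"2\": Document(page_content=\"world\")})\nprint(store.search(\"1\"))  # direct lookup by key\n\nprint(Wikipedia().search(\"Python (programming language)\"))  # page summary if found, else similar entries", "source": "https://python.langchain.com/en/latest/reference/modules/docstore.html"}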
+{"id": "9da9fd4ef926-0", "text": "Example Selector#\nLogic for selecting examples to include in prompts.\npydantic model langchain.prompts.example_selector.LengthBasedExampleSelector[source]#\nSelect examples based on length.\nValidators\ncalculate_example_text_lengths \u00bb example_text_lengths\nfield example_prompt: langchain.prompts.prompt.PromptTemplate [Required]#\nPrompt template used to format the examples.\nfield examples: List[dict] [Required]#\nA list of the examples that the prompt template expects.\nfield get_text_length: Callable[[str], int]#\nFunction to measure prompt length. Defaults to word count.\nfield max_length: int = 2048#\nMax length for the prompt, beyond which examples are cut.\nadd_example(example: Dict[str, str]) \u2192 None[source]#\nAdd new example to list.\nselect_examples(input_variables: Dict[str, str]) \u2192 List[dict][source]#\nSelect which examples to use based on the input lengths.\npydantic model langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector[source]#\nExampleSelector that selects examples based on Max Marginal Relevance.\nThis was shown to improve performance in this paper:\nhttps://arxiv.org/pdf/2211.13892.pdf\nfield fetch_k: int = 20#\nNumber of examples to fetch to rerank.\nclassmethod from_examples(examples: List[dict], embeddings: langchain.embeddings.base.Embeddings, vectorstore_cls: Type[langchain.vectorstores.base.VectorStore], k: int = 4, input_keys: Optional[List[str]] = None, fetch_k: int = 20, **vectorstore_cls_kwargs: Any) \u2192 langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector[source]#\nCreate k-shot example selector using example list and embeddings.", "source": "https://python.langchain.com/en/latest/reference/modules/example_selector.html"}
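+{"id": "example-sketch-lengthselector-0", "text": "Illustrative LengthBasedExampleSelector sketch (not from the reference page): examples and template are placeholders.\n\nfrom langchain.prompts import PromptTemplate\nfrom langchain.prompts.example_selector import LengthBasedExampleSelector\n\nexamples = [\n    {\"input\": \"happy\", \"output\": \"sad\"},\n    {\"input\": \"tall\", \"output\": \"short\"},\n]\nexample_prompt = PromptTemplate(input_variables=[\"input\", \"output\"], template=\"Input: {input}\\nOutput: {output}\")\nselector = LengthBasedExampleSelector(examples=examples, example_prompt=example_prompt, max_length=25)\n# Longer inputs leave less room under max_length, so fewer examples are selected.\nprint(selector.select_examples({\"input\": \"big\"}))", "source": "https://python.langchain.com/en/latest/reference/modules/example_selector.html"}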
+{"id": "9da9fd4ef926-1", "text": "Create k-shot example selector using example list and embeddings.\nReshuffles examples dynamically based on query similarity.\nParameters\nexamples \u2013 List of examples to use in the prompt.\nembeddings \u2013 An initialized embedding API interface, e.g. OpenAIEmbeddings().\nvectorstore_cls \u2013 A vector store DB interface class, e.g. FAISS.\nk \u2013 Number of examples to select\ninput_keys \u2013 If provided, the search is based on the input variables\ninstead of all variables.\nvectorstore_cls_kwargs \u2013 optional kwargs containing url for vector store\nReturns\nThe ExampleSelector instantiated, backed by a vector store.\nselect_examples(input_variables: Dict[str, str]) \u2192 List[dict][source]#\nSelect which examples to use based on semantic similarity.\npydantic model langchain.prompts.example_selector.SemanticSimilarityExampleSelector[source]#\nExample selector that selects examples based on SemanticSimilarity.\nfield example_keys: Optional[List[str]] = None#\nOptional keys to filter examples to.\nfield input_keys: Optional[List[str]] = None#\nOptional keys to filter input to. If provided, the search is based on\nthe input variables instead of all variables.\nfield k: int = 4#\nNumber of examples to select.\nfield vectorstore: langchain.vectorstores.base.VectorStore [Required]#\nVectorStore that contains information about examples.\nadd_example(example: Dict[str, str]) \u2192 str[source]#\nAdd new example to vectorstore.\nclassmethod from_examples(examples: List[dict], embeddings: langchain.embeddings.base.Embeddings, vectorstore_cls: Type[langchain.vectorstores.base.VectorStore], k: int = 4, input_keys: Optional[List[str]] = None, **vectorstore_cls_kwargs: Any) \u2192 langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/example_selector.html"}
+{"id": "9da9fd4ef926-2", "text": "Create k-shot example selector using example list and embeddings.\nReshuffles examples dynamically based on query similarity.\nParameters\nexamples \u2013 List of examples to use in the prompt.\nembeddings \u2013 An initialized embedding API interface, e.g. OpenAIEmbeddings().\nvectorstore_cls \u2013 A vector store DB interface class, e.g. FAISS.\nk \u2013 Number of examples to select\ninput_keys \u2013 If provided, the search is based on the input variables\ninstead of all variables.\nvectorstore_cls_kwargs \u2013 optional kwargs containing url for vector store\nReturns\nThe ExampleSelector instantiated, backed by a vector store.\nselect_examples(input_variables: Dict[str, str]) \u2192 List[dict][source]#\nSelect which examples to use based on semantic similarity.", "source": "https://python.langchain.com/en/latest/reference/modules/example_selector.html"}
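+{"id": "example-sketch-semanticselector-0", "text": "Hedged sketch of SemanticSimilarityExampleSelector.from_examples (not from the reference page): assumes OpenAI credentials and the faiss package; examples are placeholders.\n\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.prompts.example_selector import SemanticSimilarityExampleSelector\nfrom langchain.vectorstores import FAISS\n\nexamples = [\n    {\"input\": \"happy\", \"output\": \"sad\"},\n    {\"input\": \"windy\", \"output\": \"calm\"},\n]\n# Embeds the examples and stores them in a FAISS index under the hood.\nselector = SemanticSimilarityExampleSelector.from_examples(examples, OpenAIEmbeddings(), FAISS, k=1)\nprint(selector.select_examples({\"input\": \"joyful\"}))  # nearest example by embedding similarity", "source": "https://python.langchain.com/en/latest/reference/modules/example_selector.html"}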
+{"id": "f491b7cb1df8-0", "text": "Python REPL#\nFor backwards compatibility.\npydantic model langchain.python.PythonREPL[source]#\nSimulates a standalone Python REPL.\nfield globals: Optional[Dict] [Optional] (alias '_globals')#\nfield locals: Optional[Dict] [Optional] (alias '_locals')#\nrun(command: str) \u2192 str[source]#\nRun command with its own globals/locals and return anything printed.", "source": "https://python.langchain.com/en/latest/reference/modules/python.html"}
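+{"id": "example-sketch-pythonrepl-0", "text": "Two-line sketch of PythonREPL.run (not from the reference page). It executes arbitrary Python in the current process, so inputs must be trusted.\n\nfrom langchain.python import PythonREPL\n\nrepl = PythonREPL()\nprint(repl.run(\"print(21 * 2)\"))  # returns whatever the command printed", "source": "https://python.langchain.com/en/latest/reference/modules/python.html"}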
+{"id": "2b223368723b-0", "text": "Agents#\nInterface for agents.\npydantic model langchain.agents.Agent[source]#\nClass responsible for calling the language model and deciding the action.\nThis is driven by an LLMChain. The prompt in the LLMChain MUST include\na variable called \u201cagent_scratchpad\u201d where the agent can put its\nintermediate work.\nfield allowed_tools: Optional[List[str]] = None#\nfield llm_chain: langchain.chains.llm.LLMChain [Required]#\nfield output_parser: langchain.agents.agent.AgentOutputParser [Required]#\nasync aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]#\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nabstract classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool]) \u2192 langchain.prompts.base.BasePromptTemplate[source]#\nCreate a prompt for this class.\ndict(**kwargs: Any) \u2192 Dict[source]#\nReturn dictionary representation of agent.\nclassmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, **kwargs: Any) \u2192 langchain.agents.agent.Agent[source]#\nConstruct an agent from an LLM and tools.", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-1", "text": "Construct an agent from an LLM and tools.\nget_allowed_tools() \u2192 Optional[List[str]][source]#\nget_full_inputs(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) \u2192 Dict[str, Any][source]#\nCreate the full inputs for the LLMChain from intermediate steps.\nplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]#\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nreturn_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) \u2192 langchain.schema.AgentFinish[source]#\nReturn response when agent has been stopped due to max iterations.\ntool_run_logging_kwargs() \u2192 Dict[source]#\nabstract property llm_prefix: str#\nPrefix to append the LLM call with.\nabstract property observation_prefix: str#\nPrefix to append the observation with.\nproperty return_values: List[str]#\nReturn values of the agent.\npydantic model langchain.agents.AgentExecutor[source]#\nConsists of an agent using tools.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_return_direct_tool \u00bb all fields\nvalidate_tools \u00bb all fields\nfield agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]#\nfield early_stopping_method: str = 'force'#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-2", "text": "field early_stopping_method: str = 'force'#\nfield handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False#\nfield max_execution_time: Optional[float] = None#\nfield max_iterations: Optional[int] = 15#\nfield return_intermediate_steps: bool = False#\nfield tools: Sequence[BaseTool] [Required]#\nclassmethod from_agent_and_tools(agent: Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent], tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, **kwargs: Any) \u2192 langchain.agents.agent.AgentExecutor[source]#\nCreate from agent and tools.\nlookup_tool(name: str) \u2192 langchain.tools.base.BaseTool[source]#\nLookup tool by name.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None[source]#\nRaise error - saving not supported for Agent Executors.\nsave_agent(file_path: Union[pathlib.Path, str]) \u2192 None[source]#\nSave the underlying agent.\npydantic model langchain.agents.AgentOutputParser[source]#\nabstract parse(text: str) \u2192 Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]#\nParse text into agent action/finish.\nclass langchain.agents.AgentType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]#\nCHAT_CONVERSATIONAL_REACT_DESCRIPTION = 'chat-conversational-react-description'#\nCHAT_ZERO_SHOT_REACT_DESCRIPTION = 'chat-zero-shot-react-description'#\nCONVERSATIONAL_REACT_DESCRIPTION = 'conversational-react-description'#\nREACT_DOCSTORE = 'react-docstore'#\nSELF_ASK_WITH_SEARCH = 'self-ask-with-search'#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
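+{"id": "example-sketch-agentexecutor-0", "text": "Illustrative wiring of AgentExecutor.from_agent_and_tools with a ZeroShotAgent and a single Tool, using classes documented on this page (not from the reference page): the tool and question are placeholders; OpenAI credentials are assumed.\n\nfrom langchain.agents import AgentExecutor, Tool, ZeroShotAgent\nfrom langchain.llms import OpenAI\n\ndef word_count(text: str) -> str:\n    return str(len(text.split()))\n\ntools = [Tool(name=\"WordCount\", func=word_count, description=\"Counts the words in the input text.\")]\nagent = ZeroShotAgent.from_llm_and_tools(llm=OpenAI(temperature=0), tools=tools)\nexecutor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, max_iterations=5)\nprint(executor.run(\"How many words are in 'the quick brown fox'?\"))", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}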
+{"id": "2b223368723b-3", "text": "SELF_ASK_WITH_SEARCH = 'self-ask-with-search'#\nSTRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'structured-chat-zero-shot-react-description'#\nZERO_SHOT_REACT_DESCRIPTION = 'zero-shot-react-description'#\npydantic model langchain.agents.BaseMultiActionAgent[source]#\nBase Agent class.\nabstract async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[List[langchain.schema.AgentAction], langchain.schema.AgentFinish][source]#\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nActions specifying what tool to use.\ndict(**kwargs: Any) \u2192 Dict[source]#\nReturn dictionary representation of agent.\nget_allowed_tools() \u2192 Optional[List[str]][source]#\nabstract plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[List[langchain.schema.AgentAction], langchain.schema.AgentFinish][source]#\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nActions specifying what tool to use.\nreturn_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) \u2192 langchain.schema.AgentFinish[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-4", "text": "Return response when agent has been stopped due to max iterations.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None[source]#\nSave the agent.\nParameters\nfile_path \u2013 Path to file to save the agent to.\nExample:\n.. code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\"path/agent.yaml\")\ntool_run_logging_kwargs() \u2192 Dict[source]#\nproperty return_values: List[str]#\nReturn values of the agent.\npydantic model langchain.agents.BaseSingleActionAgent[source]#\nBase Agent class.\nabstract async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]#\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\ndict(**kwargs: Any) \u2192 Dict[source]#\nReturn dictionary representation of agent.\nclassmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, **kwargs: Any) \u2192 langchain.agents.agent.BaseSingleActionAgent[source]#\nget_allowed_tools() \u2192 Optional[List[str]][source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-5", "text": "get_allowed_tools() \u2192 Optional[List[str]][source]#\nabstract plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]#\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nreturn_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) \u2192 langchain.schema.AgentFinish[source]#\nReturn response when agent has been stopped due to max iterations.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None[source]#\nSave the agent.\nParameters\nfile_path \u2013 Path to file to save the agent to.\nExample:\n.. code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\"path/agent.yaml\")\ntool_run_logging_kwargs() \u2192 Dict[source]#\nproperty return_values: List[str]#\nReturn values of the agent.\npydantic model langchain.agents.ConversationalAgent[source]#\nAn agent designed to hold a conversation in addition to using tools.\nfield ai_prefix: str = 'AI'#\nfield output_parser: langchain.agents.agent.AgentOutputParser [Optional]#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-6", "text": "classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Assistant is a large language model trained by OpenAI.\\n\\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\\n\\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\\n\\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\\n\\nTOOLS:\\n------\\n\\nAssistant has access to the following tools:', suffix: str = 'Begin!\\n\\nPrevious conversation history:\\n{chat_history}\\n\\nNew input: {input}\\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\\n\\n```\\nThought: Do I need to use a tool? Yes\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n```\\n\\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-7", "text": "say to the Human, or if you do not need to use a tool, you MUST use the format:\\n\\n```\\nThought: Do I need to use a tool? No\\n{ai_prefix}: [your response here]\\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None) \u2192 langchain.prompts.prompt.PromptTemplate[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-8", "text": "Create prompt in the style of the zero shot agent.\nParameters\ntools \u2013 List of tools the agent will have access to, used to format the\nprompt.\nprefix \u2013 String to put before the list of tools.\nsuffix \u2013 String to put after the list of tools.\nai_prefix \u2013 String to use before AI output.\nhuman_prefix \u2013 String to use before human output.\ninput_variables \u2013 List of input variables the final prompt will expect.\nReturns\nA PromptTemplate with the template assembled from the pieces here.", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-9", "text": "classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Assistant is a large language model trained by OpenAI.\\n\\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\\n\\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\\n\\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\\n\\nTOOLS:\\n------\\n\\nAssistant has access to the following tools:', suffix: str = 'Begin!\\n\\nPrevious conversation history:\\n{chat_history}\\n\\nNew input: {input}\\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\\n\\n```\\nThought: Do I need to use a tool? Yes\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-10", "text": "the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n```\\n\\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\\n\\n```\\nThought: Do I need to use a tool? No\\n{ai_prefix}: [your response here]\\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None, **kwargs: Any) \u2192 langchain.agents.agent.Agent[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-11", "text": "Construct an agent from an LLM and tools.\nproperty llm_prefix: str#\nPrefix to append the llm call with.\nproperty observation_prefix: str#\nPrefix to append the observation with.\npydantic model langchain.agents.ConversationalChatAgent[source]#\nAn agent designed to hold a conversation in addition to using tools.\nfield output_parser: langchain.agents.agent.AgentOutputParser [Optional]#\nfield template_tool_response: str = \"TOOL RESPONSE: \\n---------------------\\n{observation}\\n\\nUSER'S INPUT\\n--------------------\\n\\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else.\"#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-12", "text": "classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], system_message: str = 'Assistant is a large language model trained by OpenAI.\\n\\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\\n\\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\\n\\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message: str = \"TOOLS\\n------\\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\\n\\n{{tools}}\\n\\n{format_instructions}\\n\\nUSER'S INPUT\\n--------------------\\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\\n\\n{{{{input}}}}\", input_variables: Optional[List[str]] = None, output_parser: Optional[langchain.schema.BaseOutputParser] = None) \u2192 langchain.prompts.base.BasePromptTemplate[source]#\nCreate a prompt for this class.", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-13", "text": "classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, system_message: str = 'Assistant is a large language model trained by OpenAI.\\n\\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\\n\\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\\n\\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message: str = \"TOOLS\\n------\\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\\n\\n{{tools}}\\n\\n{format_instructions}\\n\\nUSER'S INPUT\\n--------------------\\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\\n\\n{{{{input}}}}\", input_variables:", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-14", "text": "with a single action, and NOTHING else):\\n\\n{{{{input}}}}\", input_variables: Optional[List[str]] = None, **kwargs: Any) \u2192 langchain.agents.agent.Agent[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-15", "text": "Construct an agent from an LLM and tools.\nproperty llm_prefix: str#\nPrefix to append the llm call with.\nproperty observation_prefix: str#\nPrefix to append the observation with.\npydantic model langchain.agents.LLMSingleActionAgent[source]#\nfield llm_chain: langchain.chains.llm.LLMChain [Required]#\nfield output_parser: langchain.agents.agent.AgentOutputParser [Required]#\nfield stop: List[str] [Required]#\nasync aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]#\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\ndict(**kwargs: Any) \u2192 Dict[source]#\nReturn dictionary representation of agent.\nplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]#\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\ntool_run_logging_kwargs() \u2192 Dict[source]#\npydantic model langchain.agents.MRKLChain[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-16", "text": "pydantic model langchain.agents.MRKLChain[source]#\nChain that implements the MRKL system.\nExample\nfrom langchain import OpenAI, MRKLChain\nfrom langchain.chains.mrkl.base import ChainConfig\nllm = OpenAI(temperature=0)\nchains = [...]\nmrkl = MRKLChain.from_chains(llm=llm, chains=chains)\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_return_direct_tool \u00bb all fields\nvalidate_tools \u00bb all fields\nclassmethod from_chains(llm: langchain.base_language.BaseLanguageModel, chains: List[langchain.agents.mrkl.base.ChainConfig], **kwargs: Any) \u2192 langchain.agents.agent.AgentExecutor[source]#\nUser-friendly way to initialize the MRKL chain.\nThis is intended to be an easy way to get up and running with the\nMRKL chain.\nParameters\nllm \u2013 The LLM to use as the agent LLM.\nchains \u2013 The chains the MRKL system has access to.\n**kwargs \u2013 parameters to be passed to initialization.\nReturns\nAn initialized MRKL chain.\nExample\nfrom langchain import LLMMathChain, OpenAI, SerpAPIWrapper, MRKLChain\nfrom langchain.chains.mrkl.base import ChainConfig\nllm = OpenAI(temperature=0)\nsearch = SerpAPIWrapper()\nllm_math_chain = LLMMathChain(llm=llm)\nchains = [\n    ChainConfig(\n        action_name=\"Search\",\n        action=search.run,\n        action_description=\"useful for searching\"\n    ),\n    ChainConfig(\n        action_name=\"Calculator\",\n        action=llm_math_chain.run,\n        action_description=\"useful for doing math\"\n    )\n]", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-17", "text": "action_description=\"useful for doing math\"\n    )\n]\nmrkl = MRKLChain.from_chains(llm, chains)\npydantic model langchain.agents.ReActChain[source]#\nChain that implements the ReAct paper.\nExample\nfrom langchain import ReActChain, OpenAI\nreact = ReActChain(llm=OpenAI())\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_return_direct_tool \u00bb all fields\nvalidate_tools \u00bb all fields\npydantic model langchain.agents.ReActTextWorldAgent[source]#\nAgent for the ReAct TextWorld chain.\nclassmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool]) \u2192 langchain.prompts.base.BasePromptTemplate[source]#\nReturn default prompt.\npydantic model langchain.agents.SelfAskWithSearchChain[source]#\nChain that does self-ask with search.\nExample\nfrom langchain import SelfAskWithSearchChain, OpenAI, GoogleSerperAPIWrapper\nsearch_chain = GoogleSerperAPIWrapper()\nself_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain)\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_return_direct_tool \u00bb all fields\nvalidate_tools \u00bb all fields\npydantic model langchain.agents.StructuredChatAgent[source]#\nfield output_parser: langchain.agents.agent.AgentOutputParser [Optional]#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-18", "text": "field output_parser: langchain.agents.agent.AgentOutputParser [Optional]#\nclassmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix: str = 'Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\\nThought:', human_message_template: str = '{input}\\n\\n{agent_scratchpad}', format_instructions: str = 'Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\\n\\nValid \"action\" values: \"Final Answer\" or {tool_names}\\n\\nProvide only ONE action per $JSON_BLOB, as shown:\\n\\n```\\n{{{{\\n\u00a0 \"action\": $TOOL_NAME,\\n\u00a0 \"action_input\": $INPUT\\n}}}}\\n```\\n\\nFollow this format:\\n\\nQuestion: input question to answer\\nThought: consider previous and subsequent steps\\nAction:\\n```\\n$JSON_BLOB\\n```\\nObservation: action result\\n... (repeat Thought/Action/Observation N times)\\nThought: I know what to respond\\nAction:\\n```\\n{{{{\\n\u00a0 \"action\": \"Final Answer\",\\n\u00a0 \"action_input\": \"Final response to human\"\\n}}}}\\n```', input_variables: Optional[List[str]] = None, memory_prompts: Optional[List[langchain.prompts.base.BasePromptTemplate]] = None) \u2192 langchain.prompts.base.BasePromptTemplate[source]#\nCreate a prompt for this class.", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-19", "text": "classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix: str = 'Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\\nThought:', human_message_template: str = '{input}\\n\\n{agent_scratchpad}', format_instructions: str = 'Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\\n\\nValid \"action\" values: \"Final Answer\" or {tool_names}\\n\\nProvide only ONE action per $JSON_BLOB, as shown:\\n\\n```\\n{{{{\\n\u00a0 \"action\": $TOOL_NAME,\\n\u00a0 \"action_input\": $INPUT\\n}}}}\\n```\\n\\nFollow this format:\\n\\nQuestion: input question to answer\\nThought: consider previous and subsequent steps\\nAction:\\n```\\n$JSON_BLOB\\n```\\nObservation: action result\\n... (repeat Thought/Action/Observation N times)\\nThought: I know what to respond\\nAction:\\n```\\n{{{{\\n\u00a0 \"action\": \"Final Answer\",\\n\u00a0 \"action_input\": \"Final response to human\"\\n}}}}\\n```', input_variables: Optional[List[str]] = None, memory_prompts: Optional[List[langchain.prompts.base.BasePromptTemplate]] = None, **kwargs: Any) \u2192 langchain.agents.agent.Agent[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-20", "text": "Construct an agent from an LLM and tools.\nproperty llm_prefix: str#\nPrefix to append the llm call with.\nproperty observation_prefix: str#\nPrefix to append the observation with.\npydantic model langchain.agents.Tool[source]#\nTool that takes in a function or coroutine directly.\nfield coroutine: Optional[Callable[[...], Awaitable[str]]] = None#\nThe asynchronous version of the function.\nfield description: str = ''#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield func: Callable[[...], str] [Required]#\nThe function to run when the tool is called.\nclassmethod from_function(func: Callable, name: str, description: str, return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, **kwargs: Any) \u2192 langchain.tools.base.Tool[source]#\nInitialize tool from a function.\nproperty args: dict#\nThe tool\u2019s input arguments.\npydantic model langchain.agents.ZeroShotAgent[source]#\nAgent for the MRKL chain.\nfield output_parser: langchain.agents.agent.AgentOutputParser [Optional]#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
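+{"id": "example-sketch-tool-0", "text": "Short sketch of Tool.from_function as documented above (not from the reference page): the function is a placeholder.\n\nfrom langchain.agents import Tool\n\ndef reverse(text: str) -> str:\n    return text[::-1]\n\ntool = Tool.from_function(func=reverse, name=\"Reverse\", description=\"Reverses the input string.\")\nprint(tool.run(\"abc\"))  # -> 'cba'", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}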
+{"id": "2b223368723b-21", "text": "field output_parser: langchain.agents.agent.AgentOutputParser [Optional]#\nclassmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\\n\\nQuestion: {input}\\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None) \u2192 langchain.prompts.prompt.PromptTemplate[source]#\nCreate prompt in the style of the zero shot agent.\nParameters\ntools \u2013 List of tools the agent will have access to, used to format the\nprompt.\nprefix \u2013 String to put before the list of tools.\nsuffix \u2013 String to put after the list of tools.\ninput_variables \u2013 List of input variables the final prompt will expect.\nReturns\nA PromptTemplate with the template assembled from the pieces here.", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-22", "text": "Returns\nA PromptTemplate with the template assembled from the pieces here.\nclassmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\\n\\nQuestion: {input}\\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, **kwargs: Any) \u2192 langchain.agents.agent.Agent[source]#\nConstruct an agent from an LLM and tools.\nproperty llm_prefix: str#\nPrefix to append the llm call with.\nproperty observation_prefix: str#\nPrefix to append the observation with.\nlangchain.agents.create_csv_agent(llm: langchain.base_language.BaseLanguageModel, path: Union[str, List[str]], pandas_kwargs: Optional[dict] = None, **kwargs: Any) \u2192 langchain.agents.agent.AgentExecutor[source]#\nCreate a csv agent by loading the file into a dataframe and using the pandas agent.", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-23", "text": "langchain.agents.create_json_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.json.toolkit.JsonToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with JSON.\\nYour goal is to return a final answer by interacting with the JSON.\\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\\nDo not make up any information that is not contained in the JSON.\\nYour input to the tools should be in the form of `data[\"key\"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \\nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \\nIf you have not seen a key in one of those responses, you cannot use it.\\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\\nIf you encounter a \"KeyError\", go back to the previous key, look at the available keys, and try again.\\n\\nIf the question does not seem to be related to the JSON, just return \"I don\\'t know\" as the answer.\\nAlways begin your interaction with the `json_spec_list_keys` tool with input \"data\" to see what keys exist in the JSON.\\n\\nNote that sometimes the value at a given path is large. In this case, you will get an error \"Value is a large dictionary, should explore its keys directly\".\\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-24", "text": "ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\\n', suffix: str = 'Begin!\\n\\nQuestion: {input}\\nThought: I should look at the keys that exist in data to see what I have access to\\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-25", "text": "Construct a json agent from an LLM and tools.", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
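+{"id": "example-sketch-jsonagent-0", "text": "Hedged sketch of create_json_agent (not from the reference page): the JsonToolkit and JsonSpec import paths are assumed from the same LangChain release; the spec dict and question are placeholders; OpenAI credentials are assumed.\n\nfrom langchain.agents import create_json_agent\nfrom langchain.agents.agent_toolkits import JsonToolkit\nfrom langchain.llms import OpenAI\nfrom langchain.tools.json.tool import JsonSpec\n\nspec = JsonSpec(dict_={\"people\": [{\"name\": \"Ada\"}, {\"name\": \"Grace\"}]}, max_value_length=4000)\nagent = create_json_agent(llm=OpenAI(temperature=0), toolkit=JsonToolkit(spec=spec), verbose=True)\nprint(agent.run(\"What is the name of the first person?\"))", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}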
+{"id": "2b223368723b-26", "text": "langchain.agents.create_openapi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = \"You are an agent designed to answer questions by making web requests to an API given the openapi spec.\\n\\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\\nOnly use information provided by the tools to construct your response.\\n\\nFirst, find the base URL needed to make the request.\\n\\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\\n\\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\\n\\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\\n\\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\\n\", suffix: str = 'Begin!\\n\\nQuestion: {input}\\nThought: I should explore the spec to find the base url for the API.\\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-27", "text": "do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, return_intermediate_steps: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-28", "text": "Construct a json agent from an LLM and tools.\nlangchain.agents.create_pandas_dataframe_agent(llm: langchain.base_language.BaseLanguageModel, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: Optional[str] = None, suffix: Optional[str] = None, input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, include_df_in_prompt: Optional[bool] = True, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#\nConstruct a pandas agent from an LLM and dataframe.", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-29", "text": "langchain.agents.create_pbi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to help users interact with a PowerBI Dataset.\\n\\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return \"This does not appear to be part of this dataset.\" as the answer.\\n\\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readible format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\n', suffix: str = 'Begin!\\n\\nQuestion: {input}\\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-30", "text": "do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', examples: Optional[str] = None, input_variables: Optional[List[str]] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-31", "text": "Construct a pbi agent from an LLM and tools.", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-32", "text": "langchain.agents.create_pbi_chat_agent(llm: langchain.chat_models.base.BaseChatModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Assistant is a large language model built to help users interact with a PowerBI Dataset.\\n\\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return \"This does not appear to be part of this dataset.\" as the answer.\\n\\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readible format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\n', suffix: str = \"TOOLS\\n------\\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\\n\\n{{tools}}\\n\\n{format_instructions}\\n\\nUSER'S INPUT\\n--------------------\\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-33", "text": "(remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\\n\\n{{{{input}}}}\\n\", examples: Optional[str] = None, input_variables: Optional[List[str]] = None, memory: Optional[langchain.memory.chat_memory.BaseChatMemory] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-34", "text": "Construct a pbi agent from an Chat LLM and tools.\nIf you supply only a toolkit and no powerbi dataset, the same LLM is used for both.\nlangchain.agents.create_spark_dataframe_agent(llm: langchain.llms.base.BaseLLM, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = '\\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\\nYou should use the tools below to answer the question posed of you:', suffix: str = '\\nThis is the result of `print(df.first())`:\\n{df}\\n\\nBegin!\\nQuestion: {input}\\n{agent_scratchpad}', input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#\nConstruct a spark agent from an LLM and dataframe.", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-35", "text": "langchain.agents.create_spark_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with Spark SQL.\\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\nYou can order the results by a relevant column to return the most interesting examples in the database.\\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\\nYou have access to tools for interacting with the database.\\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\\n\\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\\n\\nIf the question does not seem related to the database, just return \"I don\\'t know\" as the answer.\\n', suffix: str = 'Begin!\\n\\nQuestion: {input}\\nThought: I should look at the tables in the database to see what I can query.\\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-36", "text": "Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-37", "text": "Construct a sql agent from an LLM and tools.", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-38", "text": "langchain.agents.create_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with a SQL database.\\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\nYou can order the results by a relevant column to return the most interesting examples in the database.\\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\\nYou have access to tools for interacting with the database.\\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\\n\\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\\n\\nIf the question does not seem related to the database, just return \"I don\\'t know\" as the answer.\\n', suffix: str = 'Begin!\\n\\nQuestion: {input}\\nThought: I should look at the tables in the database to see what I can query.\u00a0 Then I should query the schema of the most relevant tables.\\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-39", "text": "of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-40", "text": "Construct a sql agent from an LLM and tools.\nlangchain.agents.create_vectorstore_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions about sets of documents.\\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\\nIf the question does not seem relevant to any of the tools provided, just return \"I don\\'t know\" as the answer.\\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#\nConstruct a vectorstore agent from an LLM and tools.\nlangchain.agents.create_vectorstore_router_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions.\\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\\nYour main task is to decide which of the tools is relevant for answering question at hand.\\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 langchain.agents.agent.AgentExecutor[source]#\nConstruct a vectorstore router agent from an LLM and tools.", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-41", "text": "Construct a vectorstore router agent from an LLM and tools.\nlangchain.agents.get_all_tool_names() \u2192 List[str][source]#\nGet a list of all possible tool names.\nlangchain.agents.initialize_agent(tools: Sequence[langchain.tools.base.BaseTool], llm: langchain.base_language.BaseLanguageModel, agent: Optional[langchain.agents.agent_types.AgentType] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, agent_path: Optional[str] = None, agent_kwargs: Optional[dict] = None, **kwargs: Any) \u2192 langchain.agents.agent.AgentExecutor[source]#\nLoad an agent executor given tools and LLM.\nParameters\ntools \u2013 List of tools this agent has access to.\nllm \u2013 Language model to use as the agent.\nagent \u2013 Agent type to use. If None and agent_path is also None, will default to\nAgentType.ZERO_SHOT_REACT_DESCRIPTION.\ncallback_manager \u2013 CallbackManager to use. Global callback manager is used if\nnot provided. Defaults to None.\nagent_path \u2013 Path to serialized agent to use.\nagent_kwargs \u2013 Additional key word arguments to pass to the underlying agent\n**kwargs \u2013 Additional key word arguments passed to the agent executor\nReturns\nAn agent executor\nlangchain.agents.load_agent(path: Union[str, pathlib.Path], **kwargs: Any) \u2192 langchain.agents.agent.BaseSingleActionAgent[source]#\nUnified method for loading a agent from LangChainHub or local fs.\nlangchain.agents.load_huggingface_tool(task_or_repo_id: str, model_repo_id: Optional[str] = None, token: Optional[str] = None, remote: bool = False, **kwargs: Any) \u2192 langchain.tools.base.BaseTool[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-42", "text": "langchain.agents.load_tools(tool_names: List[str], llm: Optional[langchain.base_language.BaseLanguageModel] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 List[langchain.tools.base.BaseTool][source]#\nLoad tools based on their name.\nParameters\ntool_names \u2013 name of tools to load.\nllm \u2013 Optional language model, may be needed to initialize certain tools.\ncallbacks \u2013 Optional callback manager or list of callback handlers.\nIf not provided, default global callback manager will be used.\nReturns\nList of tools.\nlangchain.agents.tool(*args: Union[str, Callable], return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, infer_schema: bool = True) \u2192 Callable[source]#\nMake tools out of functions, can be used with or without arguments.\nParameters\n*args \u2013 The arguments to the tool.\nreturn_direct \u2013 Whether to return directly from the tool rather\nthan continuing the agent loop.\nargs_schema \u2013 optional argument schema for user to specify\ninfer_schema \u2013 Whether to infer the schema of the arguments from\nthe function\u2019s signature. This also makes the resultant tool\naccept a dictionary input to its run() function.\nRequires:\nFunction must be of type (str) -> str\nFunction must have a docstring\nExamples\n@tool\ndef search_api(query: str) -> str:\n # Searches the API for the query.\n return\n@tool(\"search\", return_direct=True)\ndef search_api(query: str) -> str:\n # Searches the API for the query.\n return\nprevious\nAgents\nnext\nTools\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "2b223368723b-43", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/agents.html"}
+{"id": "ddf0ae931e13-0", "text": ".rst\n.pdf\nDocument Compressors\nDocument Compressors#\npydantic model langchain.retrievers.document_compressors.CohereRerank[source]#\nfield client: Client [Required]#\nfield model: str = 'rerank-english-v2.0'#\nfield top_n: int = 3#\nasync acompress_documents(documents: Sequence[langchain.schema.Document], query: str) \u2192 Sequence[langchain.schema.Document][source]#\nCompress retrieved documents given the query context.\ncompress_documents(documents: Sequence[langchain.schema.Document], query: str) \u2192 Sequence[langchain.schema.Document][source]#\nCompress retrieved documents given the query context.\npydantic model langchain.retrievers.document_compressors.DocumentCompressorPipeline[source]#\nDocument compressor that uses a pipeline of transformers.\nfield transformers: List[Union[langchain.schema.BaseDocumentTransformer, langchain.retrievers.document_compressors.base.BaseDocumentCompressor]] [Required]#\nList of document filters that are chained together and run in sequence.\nasync acompress_documents(documents: Sequence[langchain.schema.Document], query: str) \u2192 Sequence[langchain.schema.Document][source]#\nCompress retrieved documents given the query context.\ncompress_documents(documents: Sequence[langchain.schema.Document], query: str) \u2192 Sequence[langchain.schema.Document][source]#\nTransform a list of documents.\npydantic model langchain.retrievers.document_compressors.EmbeddingsFilter[source]#\nfield embeddings: langchain.embeddings.base.Embeddings [Required]#\nEmbeddings to use for embedding document contents and queries.\nfield k: Optional[int] = 20#\nThe number of relevant documents to return. Can be set to None, in which case\nsimilarity_threshold must be specified. Defaults to 20.", "source": "https://python.langchain.com/en/latest/reference/modules/document_compressors.html"}
+{"id": "ddf0ae931e13-1", "text": "similarity_threshold must be specified. Defaults to 20.\nfield similarity_fn: Callable = #\nSimilarity function for comparing documents. Function expected to take as input\ntwo matrices (List[List[float]]) and return a matrix of scores where higher values\nindicate greater similarity.\nfield similarity_threshold: Optional[float] = None#\nThreshold for determining when two documents are similar enough\nto be considered redundant. Defaults to None, must be specified if k is set\nto None.\nasync acompress_documents(documents: Sequence[langchain.schema.Document], query: str) \u2192 Sequence[langchain.schema.Document][source]#\nFilter down documents.\ncompress_documents(documents: Sequence[langchain.schema.Document], query: str) \u2192 Sequence[langchain.schema.Document][source]#\nFilter documents based on similarity of their embeddings to the query.\npydantic model langchain.retrievers.document_compressors.LLMChainExtractor[source]#\nfield get_input: Callable[[str, langchain.schema.Document], dict] = #\nCallable for constructing the chain input from the query and a Document.\nfield llm_chain: langchain.chains.llm.LLMChain [Required]#\nLLM wrapper to use for compressing documents.\nasync acompress_documents(documents: Sequence[langchain.schema.Document], query: str) \u2192 Sequence[langchain.schema.Document][source]#\nCompress page content of raw documents asynchronously.\ncompress_documents(documents: Sequence[langchain.schema.Document], query: str) \u2192 Sequence[langchain.schema.Document][source]#\nCompress page content of raw documents.", "source": "https://python.langchain.com/en/latest/reference/modules/document_compressors.html"}
+{"id": "ddf0ae931e13-2", "text": "Compress page content of raw documents.\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: Optional[langchain.prompts.prompt.PromptTemplate] = None, get_input: Optional[Callable[[str, langchain.schema.Document], str]] = None, llm_chain_kwargs: Optional[dict] = None) \u2192 langchain.retrievers.document_compressors.chain_extract.LLMChainExtractor[source]#\nInitialize from LLM.\npydantic model langchain.retrievers.document_compressors.LLMChainFilter[source]#\nFilter that drops documents that aren\u2019t relevant to the query.\nfield get_input: Callable[[str, langchain.schema.Document], dict] = #\nCallable for constructing the chain input from the query and a Document.\nfield llm_chain: langchain.chains.llm.LLMChain [Required]#\nLLM wrapper to use for filtering documents.\nThe chain prompt is expected to have a BooleanOutputParser.\nasync acompress_documents(documents: Sequence[langchain.schema.Document], query: str) \u2192 Sequence[langchain.schema.Document][source]#\nFilter down documents.\ncompress_documents(documents: Sequence[langchain.schema.Document], query: str) \u2192 Sequence[langchain.schema.Document][source]#\nFilter down documents based on their relevance to the query.\nclassmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, **kwargs: Any) \u2192 langchain.retrievers.document_compressors.chain_filter.LLMChainFilter[source]#\nprevious\nRetrievers\nnext\nDocument Transformers\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/document_compressors.html"}
+{"id": "49017d91b4bb-0", "text": ".rst\n.pdf\nTools\nTools#\nCore toolkit implementations.\npydantic model langchain.tools.AIPluginTool[source]#\nfield api_spec: str [Required]#\nfield args_schema: Type[AIPluginToolSchema] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield plugin: AIPlugin [Required]#\nclassmethod from_plugin_url(url: str) \u2192 langchain.tools.plugin.AIPluginTool[source]#\npydantic model langchain.tools.APIOperation[source]#\nA model for a single API operation.\nfield base_url: str [Required]#\nThe base URL of the operation.\nfield description: Optional[str] = None#\nThe description of the operation.\nfield method: langchain.tools.openapi.utils.openapi_utils.HTTPVerb [Required]#\nThe HTTP method of the operation.\nfield operation_id: str [Required]#\nThe unique identifier of the operation.\nfield path: str [Required]#\nThe path of the operation.\nfield properties: Sequence[langchain.tools.openapi.utils.api_models.APIProperty] [Required]#\nfield request_body: Optional[langchain.tools.openapi.utils.api_models.APIRequestBody] = None#\nThe request body of the operation.\nclassmethod from_openapi_spec(spec: langchain.tools.openapi.utils.openapi_utils.OpenAPISpec, path: str, method: str) \u2192 langchain.tools.openapi.utils.api_models.APIOperation[source]#\nCreate an APIOperation from an OpenAPI spec.\nclassmethod from_openapi_url(spec_url: str, path: str, method: str) \u2192 langchain.tools.openapi.utils.api_models.APIOperation[source]#\nCreate an APIOperation from an OpenAPI URL.\nto_typescript() \u2192 str[source]#\nGet typescript string representation of the operation.", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-1", "text": "to_typescript() \u2192 str[source]#\nGet typescript string representation of the operation.\nstatic ts_type_from_python(type_: Union[str, Type, tuple, None, enum.Enum]) \u2192 str[source]#\nproperty body_params: List[str]#\nproperty path_params: List[str]#\nproperty query_params: List[str]#\npydantic model langchain.tools.AzureCogsFormRecognizerTool[source]#\nTool that queries the Azure Cognitive Services Form Recognizer API.\nIn order to set this up, follow instructions at:\nhttps://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/quickstarts/get-started-sdks-rest-api?view=form-recog-3.0.0&pivots=programming-language-python\npydantic model langchain.tools.AzureCogsImageAnalysisTool[source]#\nTool that queries the Azure Cognitive Services Image Analysis API.\nIn order to set this up, follow instructions at:\nhttps://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40\npydantic model langchain.tools.AzureCogsSpeech2TextTool[source]#\nTool that queries the Azure Cognitive Services Speech2Text API.\nIn order to set this up, follow instructions at:\nhttps://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-speech-to-text?pivots=programming-language-python\npydantic model langchain.tools.AzureCogsText2SpeechTool[source]#\nTool that queries the Azure Cognitive Services Text2Speech API.\nIn order to set this up, follow instructions at:\nhttps://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-text-to-speech?pivots=programming-language-python\npydantic model langchain.tools.BaseTool[source]#\nInterface LangChain tools must implement.", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-2", "text": "Interface LangChain tools must implement.\nfield args_schema: Optional[Type[pydantic.main.BaseModel]] = None#\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None#\nDeprecated. Please use callbacks instead.\nfield callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None#\nCallbacks to be called during tool execution.\nfield description: str [Required]#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = False#\nHandle the content of the ToolException thrown.\nfield name: str [Required]#\nThe unique name of the tool that clearly communicates its purpose.\nfield return_direct: bool = False#\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nfield verbose: bool = False#\nWhether to log the tool\u2019s progress.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 Any[source]#\nRun the tool asynchronously.", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-3", "text": "Run the tool asynchronously.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) \u2192 Any[source]#\nRun the tool.\nproperty args: dict#\nproperty is_single_input: bool#\nWhether the tool only accepts a single input.\npydantic model langchain.tools.BingSearchResults[source]#\nTool that has capability to query the Bing Search API and get back json.\nfield api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]#\nfield num_results: int = 4#\npydantic model langchain.tools.BingSearchRun[source]#\nTool that adds the capability to query the Bing search API.\nfield api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]#\npydantic model langchain.tools.BraveSearch[source]#\nfield search_wrapper: BraveSearchWrapper [Required]#\nclassmethod from_api_key(api_key: str, search_kwargs: Optional[dict] = None, **kwargs: Any) \u2192 langchain.tools.brave_search.tool.BraveSearch[source]#\npydantic model langchain.tools.ClickTool[source]#\nfield args_schema: Type[BaseModel] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'Click on an element with the given CSS selector'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'click_element'#", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-4", "text": "field name: str = 'click_element'#\nThe unique name of the tool that clearly communicates its purpose.\nfield playwright_strict: bool = False#\nWhether to employ Playwright\u2019s strict mode when clicking on elements.\nfield playwright_timeout: float = 1000#\nTimeout (in ms) for Playwright to wait for element to be ready.\nfield visible_only: bool = True#\nWhether to consider only visible elements.\npydantic model langchain.tools.CopyFileTool[source]#\nfield args_schema: Type[pydantic.main.BaseModel] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'Create a copy of a file in a specified location'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'copy_file'#\nThe unique name of the tool that clearly communicates its purpose.\npydantic model langchain.tools.CurrentWebPageTool[source]#\nfield args_schema: Type[BaseModel] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'Returns the URL of the current page'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'current_webpage'#\nThe unique name of the tool that clearly communicates its purpose.\npydantic model langchain.tools.DeleteFileTool[source]#\nfield args_schema: Type[pydantic.main.BaseModel] = #", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-5", "text": "Pydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'Delete a file'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'file_delete'#\nThe unique name of the tool that clearly communicates its purpose.\npydantic model langchain.tools.DuckDuckGoSearchResults[source]#\nTool that queries the Duck Duck Go Search API and get back json.\nfield api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]#\nfield num_results: int = 4#\npydantic model langchain.tools.DuckDuckGoSearchRun[source]#\nTool that adds the capability to query the DuckDuckGo search API.\nfield api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]#\npydantic model langchain.tools.ExtractHyperlinksTool[source]#\nExtract all hyperlinks on the page.\nfield args_schema: Type[BaseModel] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'Extract all hyperlinks on the current webpage'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'extract_hyperlinks'#\nThe unique name of the tool that clearly communicates its purpose.\nstatic scrape_page(page: Any, html_content: str, absolute_urls: bool) \u2192 str[source]#\npydantic model langchain.tools.ExtractTextTool[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-6", "text": "pydantic model langchain.tools.ExtractTextTool[source]#\nfield args_schema: Type[BaseModel] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'Extract all the text on the current webpage'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'extract_text'#\nThe unique name of the tool that clearly communicates its purpose.\npydantic model langchain.tools.FileSearchTool[source]#\nfield args_schema: Type[pydantic.main.BaseModel] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'Recursively search for files in a subdirectory that match the regex pattern'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'file_search'#\nThe unique name of the tool that clearly communicates its purpose.\npydantic model langchain.tools.GetElementsTool[source]#\nfield args_schema: Type[BaseModel] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'Retrieve elements in the current web page matching the given CSS selector'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'get_elements'#\nThe unique name of the tool that clearly communicates its purpose.", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-7", "text": "The unique name of the tool that clearly communicates its purpose.\npydantic model langchain.tools.GmailCreateDraft[source]#\nfield args_schema: Type[langchain.tools.gmail.create_draft.CreateDraftSchema] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'Use this tool to create a draft email with the provided message fields.'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'create_gmail_draft'#\nThe unique name of the tool that clearly communicates its purpose.\npydantic model langchain.tools.GmailGetMessage[source]#\nfield args_schema: Type[langchain.tools.gmail.get_message.SearchArgsSchema] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'Use this tool to fetch an email by message ID. Returns the thread ID, snipet, body, subject, and sender.'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'get_gmail_message'#\nThe unique name of the tool that clearly communicates its purpose.\npydantic model langchain.tools.GmailGetThread[source]#\nfield args_schema: Type[langchain.tools.gmail.get_thread.GetThreadSchema] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-8", "text": "Pydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'Use this tool to search for email messages. The input must be a valid Gmail query. The output is a JSON list of messages.'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'get_gmail_thread'#\nThe unique name of the tool that clearly communicates its purpose.\npydantic model langchain.tools.GmailSearch[source]#\nfield args_schema: Type[langchain.tools.gmail.search.SearchArgsSchema] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'Use this tool to search for email messages or threads. The input must be a valid Gmail query. The output is a JSON list of the requested resource.'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'search_gmail'#\nThe unique name of the tool that clearly communicates its purpose.\npydantic model langchain.tools.GmailSendMessage[source]#\nfield description: str = 'Use this tool to send email messages. The input is the message, recipents'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'send_gmail_message'#\nThe unique name of the tool that clearly communicates its purpose.\npydantic model langchain.tools.GooglePlacesTool[source]#\nTool that adds the capability to query the Google places API.\nfield api_wrapper: langchain.utilities.google_places_api.GooglePlacesAPIWrapper [Optional]#", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-9", "text": "field api_wrapper: langchain.utilities.google_places_api.GooglePlacesAPIWrapper [Optional]#\nfield args_schema: Type[pydantic.main.BaseModel] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\npydantic model langchain.tools.GoogleSearchResults[source]#\nTool that has capability to query the Google Search API and get back json.\nfield api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]#\nfield num_results: int = 4#\npydantic model langchain.tools.GoogleSearchRun[source]#\nTool that adds the capability to query the Google search API.\nfield api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]#\npydantic model langchain.tools.GoogleSerperResults[source]#\nTool that has capability to query the Serper.dev Google Search API\nand get back json.\nfield api_wrapper: langchain.utilities.google_serper.GoogleSerperAPIWrapper [Optional]#\npydantic model langchain.tools.GoogleSerperRun[source]#\nTool that adds the capability to query the Serper.dev Google search API.\nfield api_wrapper: langchain.utilities.google_serper.GoogleSerperAPIWrapper [Required]#\npydantic model langchain.tools.HumanInputRun[source]#\nTool that adds the capability to ask user for input.\nfield input_func: Callable [Optional]#\nfield prompt_func: Callable[[str], None] [Optional]#\npydantic model langchain.tools.IFTTTWebhook[source]#\nIFTTT Webhook.\nParameters\nname \u2013 name of the tool\ndescription \u2013 description of the tool\nurl \u2013 url to hit with the json event.\nfield url: str [Required]#\npydantic model langchain.tools.InfoPowerBITool[source]#", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-10", "text": "pydantic model langchain.tools.InfoPowerBITool[source]#\nTool for getting metadata about a PowerBI Dataset.\nfield powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]#\npydantic model langchain.tools.ListDirectoryTool[source]#\nfield args_schema: Type[pydantic.main.BaseModel] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'List files and directories in a specified folder'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'list_directory'#\nThe unique name of the tool that clearly communicates its purpose.\npydantic model langchain.tools.ListPowerBITool[source]#\nTool for getting tables names.\nfield powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]#\npydantic model langchain.tools.MetaphorSearchResults[source]#\nTool that has capability to query the Metaphor Search API and get back json.\nfield api_wrapper: langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper [Required]#\npydantic model langchain.tools.MoveFileTool[source]#\nfield args_schema: Type[pydantic.main.BaseModel] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'Move or rename a file from one location to another'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'move_file'#", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-11", "text": "field name: str = 'move_file'#\nThe unique name of the tool that clearly communicates its purpose.\npydantic model langchain.tools.NavigateBackTool[source]#\nNavigate back to the previous page in the browser history.\nfield args_schema: Type[BaseModel] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'Navigate back to the previous page in the browser history'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'previous_webpage'#\nThe unique name of the tool that clearly communicates its purpose.\npydantic model langchain.tools.NavigateTool[source]#\nfield args_schema: Type[BaseModel] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'Navigate a browser to the specified URL'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'navigate_browser'#\nThe unique name of the tool that clearly communicates its purpose.\npydantic model langchain.tools.OpenAPISpec[source]#\nOpenAPI Model that removes misformatted parts of the spec.\nclassmethod from_file(path: Union[str, pathlib.Path]) \u2192 langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#\nGet an OpenAPI spec from a file path.\nclassmethod from_spec_dict(spec_dict: dict) \u2192 langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#\nGet an OpenAPI spec from a dict.", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-12", "text": "Get an OpenAPI spec from a dict.\nclassmethod from_text(text: str) \u2192 langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#\nGet an OpenAPI spec from a text.\nclassmethod from_url(url: str) \u2192 langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#\nGet an OpenAPI spec from a URL.\nstatic get_cleaned_operation_id(operation: openapi_schema_pydantic.v3.v3_1_0.operation.Operation, path: str, method: str) \u2192 str[source]#\nGet a cleaned operation id from an operation id.\nget_methods_for_path(path: str) \u2192 List[str][source]#\nReturn a list of valid methods for the specified path.\nget_operation(path: str, method: str) \u2192 openapi_schema_pydantic.v3.v3_1_0.operation.Operation[source]#\nGet the operation object for a given path and HTTP method.\nget_parameters_for_operation(operation: openapi_schema_pydantic.v3.v3_1_0.operation.Operation) \u2192 List[openapi_schema_pydantic.v3.v3_1_0.parameter.Parameter][source]#\nGet the components for a given operation.\nget_referenced_schema(ref: openapi_schema_pydantic.v3.v3_1_0.reference.Reference) \u2192 openapi_schema_pydantic.v3.v3_1_0.schema.Schema[source]#\nGet a schema (or nested reference) or err.\nget_request_body_for_operation(operation: openapi_schema_pydantic.v3.v3_1_0.operation.Operation) \u2192 Optional[openapi_schema_pydantic.v3.v3_1_0.request_body.RequestBody][source]#\nGet the request body for a given operation.\nclassmethod parse_obj(obj: dict) \u2192 langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#\nproperty base_url: str#", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-13", "text": "property base_url: str#\nGet the base url.\npydantic model langchain.tools.OpenWeatherMapQueryRun[source]#\nTool that adds the capability to query using the OpenWeatherMap API.\nfield api_wrapper: langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper [Optional]#\npydantic model langchain.tools.PubmedQueryRun[source]#\nTool that adds the capability to search using the PubMed API.\nfield api_wrapper: langchain.utilities.pupmed.PubMedAPIWrapper [Optional]#\npydantic model langchain.tools.QueryPowerBITool[source]#\nTool for querying a Power BI Dataset.\nValidators\nraise_deprecation \u00bb all fields\nvalidate_llm_chain_input_variables \u00bb llm_chain\nfield examples: Optional[str] = '\\nQuestion: How many rows are in the table ?\\nDAX: EVALUATE ROW(\"Number of rows\", COUNTROWS())\\n----\\nQuestion: How many rows are in the table where is not empty?\\nDAX: EVALUATE ROW(\"Number of rows\", COUNTROWS(FILTER(, [] <> \"\")))\\n----\\nQuestion: What was the average of in ?\\nDAX: EVALUATE ROW(\"Average\", AVERAGE([]))\\n----\\n'#\nfield llm_chain: langchain.chains.llm.LLMChain [Required]#\nfield max_iterations: int = 5#\nfield powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]#\nfield session_cache: Dict[str, Any] [Optional]#", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-14", "text": "field template: Optional[str] = '\\nAnswer the question below with a DAX query that can be sent to Power BI. DAX queries have a simple syntax comprised of just one required keyword, EVALUATE, and several optional keywords: ORDER BY, START AT, DEFINE, MEASURE, VAR, TABLE, and COLUMN. Each keyword defines a statement used for the duration of the query. Any time < or > are used in the text below it means that those values need to be replaced by table, columns or other things. If the question is not something you can answer with a DAX query, reply with \"I cannot answer this\" and the question will be escalated to a human.\\n\\nSome DAX functions return a table instead of a scalar, and must be wrapped in a function that evaluates the table and returns a scalar; unless the table is a single column, single row table, then it is treated as a scalar value. Most DAX functions require one or more arguments, which can include tables, columns, expressions, and values. However, some functions, such as PI, do not require any arguments, but always require parentheses to indicate the null argument. For example, you must always type PI(), not PI. You can also nest functions within other functions. \\n\\nSome commonly used functions are:\\nEVALUATE - At the most basic level, a DAX query is an EVALUATE statement containing a table expression. At least one EVALUATE statement is required, however, a query can contain any number of EVALUATE statements.\\nEVALUATE ORDER BY ASC or DESC - The optional ORDER BY keyword defines one or more expressions used to sort query results. Any expression that can be evaluated for each row of the result is valid.\\nEVALUATE ORDER BY ASC or DESC START AT or - The optional", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-15", "text": "ORDER BY ASC or DESC START AT or - The optional START AT keyword is used inside an ORDER BY clause. It defines the value at which the query results begin.\\nDEFINE MEASURE | VAR; EVALUATE - The optional DEFINE keyword introduces one or more calculated entity definitions that exist only for the duration of the query. Definitions precede the EVALUATE statement and are valid for all EVALUATE statements in the query. Definitions can be variables, measures, tables1, and columns1. Definitions can reference other definitions that appear before or after the current definition. At least one definition is required if the DEFINE keyword is included in a query.\\nMEASURE [] = - Introduces a measure definition in a DEFINE statement of a DAX query.\\nVAR = - Stores the result of an expression as a named variable, which can then be passed as an argument to other measure expressions. Once resultant values have been calculated for a variable expression, those values do not change, even if the variable is referenced in another expression.\\n\\nFILTER(,) - Returns a table that represents a subset of another table or expression, where is a Boolean expression that is to be evaluated for each row of the table. For example, [Amount] > 0 or [Region] = \"France\"\\nROW(, ) - Returns a table with a single row containing values that result from the expressions given to each column.\\nDISTINCT() - Returns a one-column table that contains the distinct values from the specified column. In other words, duplicate values are removed and only unique values are returned. This function cannot be used to Return values into a cell or column on a worksheet; rather, you nest the DISTINCT function within a formula, to get a list of distinct values that can be passed", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-16", "text": "you nest the DISTINCT function within a formula, to get a list of distinct values that can be passed to another function and then counted, summed, or used for other operations.\\nDISTINCT() - Returns a table by removing duplicate rows from another table or expression.\\n\\nAggregation functions, names with a A in it, handle booleans and empty strings in appropriate ways, while the same function without A only uses the numeric values in a column. Functions names with an X in it can include a expression as an argument, this will be evaluated for each row in the table and the result will be used in the regular function calculation, these are the functions:\\nCOUNT(), COUNTA(), COUNTX(,), COUNTAX(,), COUNTROWS([]), COUNTBLANK(), DISTINCTCOUNT(), DISTINCTCOUNTNOBLANK () - these are all variantions of count functions.\\nAVERAGE(), AVERAGEA(), AVERAGEX(,) - these are all variantions of average functions.\\nMAX(), MAXA(), MAXX(,) - these are all variantions of max functions.\\nMIN(), MINA(), MINX(,) - these are all variantions of min functions.\\nPRODUCT(), PRODUCTX(,) - these are all variantions of product functions.\\nSUM(), SUMX(,) - these are all variantions of sum functions.\\n\\nDate and time functions:\\nDATE(year, month, day) - Returns a date value that represents the specified year, month, and day.\\nDATEDIFF(date1, date2, ) - Returns the difference between two date values, in the specified", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-17", "text": "date2, ) - Returns the difference between two date values, in the specified interval, that can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR.\\nDATEVALUE() - Returns a date value that represents the specified date.\\nYEAR(), QUARTER(), MONTH(), DAY(), HOUR(), MINUTE(), SECOND() - Returns the part of the date for the specified date.\\n\\nFinally, make sure to escape double quotes with a single backslash, and make sure that only table names have single quotes around them, while names of measures or the values of columns that you want to compare against are in escaped double quotes. Newlines are not necessary and can be skipped. The queries are serialized as json and so will have to fit be compliant with json syntax. Sometimes you will get a question, a DAX query and a error, in that case you need to rewrite the DAX query to get the correct answer.\\n\\nThe following tables exist: {tables}\\n\\nand the schema\\'s for some are given here:\\n{schemas}\\n\\nExamples:\\n{examples}\\n\\nQuestion: {tool_input}\\nDAX: \\n'#", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-18", "text": "pydantic model langchain.tools.ReadFileTool[source]#\nfield args_schema: Type[pydantic.main.BaseModel] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'Read file from disk'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'read_file'#\nThe unique name of the tool that clearly communicates its purpose.\npydantic model langchain.tools.SceneXplainTool[source]#\nTool that adds the capability to explain images.\nfield api_wrapper: langchain.utilities.scenexplain.SceneXplainAPIWrapper [Optional]#\npydantic model langchain.tools.ShellTool[source]#\nTool to run shell commands.\nfield args_schema: Type[pydantic.main.BaseModel] = #\nSchema for input arguments.\nfield description: str = 'Run shell commands on this Linux machine.'#\nDescription of tool.\nfield name: str = 'terminal'#\nName of tool.\nfield process: langchain.utilities.bash.BashProcess [Optional]#\nBash process to run commands.\npydantic model langchain.tools.SteamshipImageGenerationTool[source]#\nfield model_name: ModelName [Required]#\nfield return_urls: Optional[bool] = False#\nfield size: Optional[str] = '512x512'#\nfield steamship: Steamship [Required]#\npydantic model langchain.tools.StructuredTool[source]#\nTool that can operate on any number of inputs.\nfield args_schema: Type[pydantic.main.BaseModel] [Required]#\nThe input arguments\u2019 schema.\nThe tool schema.", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-19", "text": "The input arguments\u2019 schema.\nThe tool schema.\nfield coroutine: Optional[Callable[[...], Awaitable[Any]]] = None#\nThe asynchronous version of the function.\nfield description: str = ''#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield func: Callable[[...], Any] [Required]#\nThe function to run when the tool is called.\nclassmethod from_function(func: Callable, name: Optional[str] = None, description: Optional[str] = None, return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, infer_schema: bool = True, **kwargs: Any) \u2192 langchain.tools.base.StructuredTool[source]#\nproperty args: dict#\nThe tool\u2019s input arguments.\npydantic model langchain.tools.Tool[source]#\nTool that takes in function or coroutine directly.\nfield args_schema: Optional[Type[pydantic.main.BaseModel]] = None#\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None#\nDeprecated. Please use callbacks instead.\nfield callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None#\nCallbacks to be called during tool execution.\nfield coroutine: Optional[Callable[[...], Awaitable[str]]] = None#\nThe asynchronous version of the function.\nfield description: str = ''#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield func: Callable[[...], str] [Required]#\nThe function to run when the tool is called.", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-20", "text": "The function to run when the tool is called.\nfield handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = False#\nHandle the content of the ToolException thrown.\nfield name: str [Required]#\nThe unique name of the tool that clearly communicates its purpose.\nfield return_direct: bool = False#\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nfield verbose: bool = False#\nWhether to log the tool\u2019s progress.\nclassmethod from_function(func: Callable, name: str, description: str, return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, **kwargs: Any) \u2192 langchain.tools.base.Tool[source]#\nInitialize tool from a function.\nproperty args: dict#\nThe tool\u2019s input arguments.\npydantic model langchain.tools.VectorStoreQATool[source]#\nTool for the VectorDBQA chain. To be initialized with name and chain.\nstatic get_description(name: str, description: str) \u2192 str[source]#\npydantic model langchain.tools.VectorStoreQAWithSourcesTool[source]#\nTool for the VectorDBQAWithSources chain.\nstatic get_description(name: str, description: str) \u2192 str[source]#\npydantic model langchain.tools.WikipediaQueryRun[source]#\nTool that adds the capability to search using the Wikipedia API.\nfield api_wrapper: langchain.utilities.wikipedia.WikipediaAPIWrapper [Required]#\npydantic model langchain.tools.WolframAlphaQueryRun[source]#\nTool that adds the capability to query using the Wolfram Alpha SDK.\nfield api_wrapper: langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper [Required]#", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-21", "text": "pydantic model langchain.tools.WriteFileTool[source]#\nfield args_schema: Type[pydantic.main.BaseModel] = #\nPydantic model class to validate and parse the tool\u2019s input arguments.\nfield description: str = 'Write file to disk'#\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nfield name: str = 'write_file'#\nThe unique name of the tool that clearly communicates its purpose.\npydantic model langchain.tools.YouTubeSearchTool[source]#\npydantic model langchain.tools.ZapierNLAListActions[source]#\nReturns a list of all exposed (enabled) actions associated withcurrent user (associated with the set api_key). Change your exposed\nactions here: https://nla.zapier.com/demo/start/\nThe return list can be empty if no actions exposed. Else will contain\na list of action objects:\n[{\u201cid\u201d: str,\n\u201cdescription\u201d: str,\n\u201cparams\u201d: Dict[str, str]\n}]\nparams will always contain an instructions key, the only required\nparam. All others optional and if provided will override any AI guesses\n(see \u201cunderstanding the AI guessing flow\u201d here:\nhttps://nla.zapier.com/api/v1/docs)\nParameters\nNone \u2013 \nfield api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]#\npydantic model langchain.tools.ZapierNLARunAction[source]#\nExecutes an action that is identified by action_id, must be exposed(enabled) by the current user (associated with the set api_key). Change\nyour exposed actions here: https://nla.zapier.com/demo/start/", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-22", "text": "your exposed actions here: https://nla.zapier.com/demo/start/\nThe return JSON is guaranteed to be less than ~500 words (350\ntokens) making it safe to inject into the prompt of another LLM\ncall.\nParameters\naction_id \u2013 a specific action ID (from list actions) of the action to execute\n(the set api_key must be associated with the action owner)\ninstructions \u2013 a natural language instruction string for using the action\n(eg. \u201cget the latest email from Mike Knoop\u201d for \u201cGmail: find email\u201d action)\nparams \u2013 a dict, optional. Any params provided will override AI guesses\nfrom instructions (see \u201cunderstanding the AI guessing flow\u201d here:\nhttps://nla.zapier.com/api/v1/docs)\nfield action_id: str [Required]#\nfield api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]#", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-23", "text": "field base_prompt: str = 'A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example \"get the latest email from my bank\" or \"send a slack message to the #general channel\". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\\'Message_Text\\', \\'Channel\\'], your instruction should be something like \\'send a slack message to the #general channel with the text hello world\\'. Another example: if the params are [\\'Calendar\\', \\'Search_Term\\'], your instruction should be something like \\'find the meeting in my personal calendar at 3pm\\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \\'not enough information provided in the instruction, missing \\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}'#\nfield params: Optional[dict] = None#\nfield params_schema: Dict[str, str] [Optional]#\nfield zapier_description: str [Required]#\nlangchain.tools.tool(*args: Union[str, Callable], return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, infer_schema: bool = True) \u2192 Callable[source]#\nMake tools out of functions, can be used with or without arguments.\nParameters\n*args \u2013 The arguments to the tool.\nreturn_direct \u2013 Whether to return directly from the tool rather\nthan continuing the agent loop.\nargs_schema \u2013 optional argument schema for user to specify\ninfer_schema \u2013 Whether to infer the schema of the arguments from", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "49017d91b4bb-24", "text": "infer_schema \u2013 Whether to infer the schema of the arguments from\nthe function\u2019s signature. This also makes the resultant tool\naccept a dictionary input to its run() function.\nRequires:\nFunction must be of type (str) -> str\nFunction must have a docstring\nExamples\n@tool\ndef search_api(query: str) -> str:\n # Searches the API for the query.\n return\n@tool(\"search\", return_direct=True)\ndef search_api(query: str) -> str:\n # Searches the API for the query.\n return\nprevious\nAgents\nnext\nAgent Toolkits\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/tools.html"}
+{"id": "2aa15396c9dd-0", "text": ".rst\n.pdf\nLLMs\nLLMs#\nWrappers on top of large language models APIs.\npydantic model langchain.llms.AI21[source]#\nWrapper around AI21 large language models.\nTo use, you should have the environment variable AI21_API_KEY\nset with your API key.\nExample\nfrom langchain.llms import AI21\nai21 = AI21(model=\"j2-jumbo-instruct\")\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield base_url: Optional[str] = None#\nBase url to use, if None decides based on model name.\nfield countPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#\nPenalizes repeated tokens according to count.\nfield frequencyPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#\nPenalizes repeated tokens according to frequency.\nfield logitBias: Optional[Dict[str, float]] = None#\nAdjust the probability of specific tokens being generated.\nfield maxTokens: int = 256#\nThe maximum number of tokens to generate in the completion.\nfield minTokens: int = 0#\nThe minimum number of tokens to generate in the completion.\nfield model: str = 'j2-jumbo-instruct'#\nModel name to use.\nfield numResults: int = 1#\nHow many completions to generate for each prompt.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-1", "text": "How many completions to generate for each prompt.\nfield presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#\nPenalizes repeated tokens.\nfield temperature: float = 0.7#\nWhat sampling temperature to use.\nfield topP: float = 1.0#\nTotal probability mass of tokens to consider at each step.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-2", "text": "Predict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-3", "text": "Take in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-4", "text": "Try to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.AlephAlpha[source]#\nWrapper around Aleph Alpha large language models.\nTo use, you should have the aleph_alpha_client python package installed, and the\nenvironment variable ALEPH_ALPHA_API_KEY set with your API key, or pass\nit as a named parameter to the constructor.\nParameters are explained more in depth here:\nAleph-Alpha/aleph-alpha-client\nExample\nfrom langchain.llms import AlephAlpha\nalpeh_alpha = AlephAlpha(aleph_alpha_api_key=\"my-api-key\")\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield aleph_alpha_api_key: Optional[str] = None#\nAPI key for Aleph Alpha API.\nfield best_of: Optional[int] = None#\nreturns the one with the \u201cbest of\u201d results\n(highest log probability per token)\nfield completion_bias_exclusion_first_token_only: bool = False#\nOnly consider the first token for the completion_bias_exclusion.\nfield contextual_control_threshold: Optional[float] = None#\nIf set to None, attention control parameters only apply to those tokens that have\nexplicitly been set in the request.\nIf set to a non-None value, control parameters are also applied to similar tokens.\nfield control_log_additive: Optional[bool] = True#\nTrue: apply control by adding the log(control_factor) to attention scores.\nFalse: (attention_scores - - attention_scores.min(-1)) * control_factor\nfield echo: bool = False#\nEcho the prompt in the completion.\nfield frequency_penalty: float = 0.0#\nPenalizes repeated tokens according to frequency.\nfield log_probs: Optional[int] = None#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-5", "text": "field log_probs: Optional[int] = None#\nNumber of top log probabilities to be returned for each generated token.\nfield logit_bias: Optional[Dict[int, float]] = None#\nThe logit bias allows to influence the likelihood of generating tokens.\nfield maximum_tokens: int = 64#\nThe maximum number of tokens to be generated.\nfield minimum_tokens: Optional[int] = 0#\nGenerate at least this number of tokens.\nfield model: Optional[str] = 'luminous-base'#\nModel name to use.\nfield n: int = 1#\nHow many completions to generate for each prompt.\nfield penalty_bias: Optional[str] = None#\nPenalty bias for the completion.\nfield penalty_exceptions: Optional[List[str]] = None#\nList of strings that may be generated without penalty,\nregardless of other penalty settings\nfield penalty_exceptions_include_stop_sequences: Optional[bool] = None#\nShould stop_sequences be included in penalty_exceptions.\nfield presence_penalty: float = 0.0#\nPenalizes repeated tokens.\nfield raw_completion: bool = False#\nForce the raw completion of the model to be returned.\nfield repetition_penalties_include_completion: bool = True#\nFlag deciding whether presence penalty or frequency penalty\nare updated from the completion.\nfield repetition_penalties_include_prompt: Optional[bool] = False#\nFlag deciding whether presence penalty or frequency penalty are\nupdated from the prompt.\nfield stop_sequences: Optional[List[str]] = None#\nStop sequences to use.\nfield temperature: float = 0.0#\nA non-negative float that tunes the degree of randomness in generation.\nfield tokens: Optional[bool] = False#\nreturn tokens of completion.\nfield top_k: int = 0#\nNumber of most likely tokens to consider at each step.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-6", "text": "Number of most likely tokens to consider at each step.\nfield top_p: float = 0.0#\nTotal probability mass of tokens to consider at each step.\nfield use_multiplicative_presence_penalty: Optional[bool] = False#\nFlag deciding whether presence penalty is applied\nmultiplicatively (True) or additively (False).\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-7", "text": "Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-8", "text": "Get the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.Anthropic[source]#\nWrapper around Anthropic\u2019s large language models.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-9", "text": "Wrapper around Anthropic\u2019s large language models.\nTo use, you should have the anthropic python package installed, and the\nenvironment variable ANTHROPIC_API_KEY set with your API key, or pass\nit as a named parameter to the constructor.\nExample\nimport anthropic\nfrom langchain.llms import Anthropic\nmodel = Anthropic(model=\"\", anthropic_api_key=\"my-api-key\")\n# Simplest invocation, automatically wrapped with HUMAN_PROMPT\n# and AI_PROMPT.\nresponse = model(\"What are the biggest risks facing humanity?\")\n# Or if you want to use the chat mode, build a few-shot-prompt, or\n# put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT:\nraw_prompt = \"What are the biggest risks facing humanity?\"\nprompt = f\"{anthropic.HUMAN_PROMPT} {prompt}{anthropic.AI_PROMPT}\"\nresponse = model(prompt)\nValidators\nraise_deprecation \u00bb all fields\nraise_warning \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield default_request_timeout: Optional[Union[float, Tuple[float, float]]] = None#\nTimeout for requests to Anthropic Completion API. Default is 600 seconds.\nfield max_tokens_to_sample: int = 256#\nDenotes the number of tokens to predict per generation.\nfield model: str = 'claude-v1'#\nModel name to use.\nfield streaming: bool = False#\nWhether to stream the results.\nfield temperature: Optional[float] = None#\nA non-negative float that tunes the degree of randomness in generation.\nfield top_k: Optional[int] = None#\nNumber of most likely tokens to consider at each step.\nfield top_p: Optional[float] = None#\nTotal probability mass of tokens to consider at each step.\nfield verbose: bool [Optional]#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-10", "text": "field verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-11", "text": "Behaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int[source]#\nCalculate number of tokens.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-12", "text": "get_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nstream(prompt: str, stop: Optional[List[str]] = None) \u2192 Generator[source]#\nCall Anthropic completion_stream and return the resulting generator.\nBETA: this is a beta feature while we figure out the right abstraction.\nOnce that happens, this interface could change.\nParameters\nprompt \u2013 The prompt to pass into the model.\nstop \u2013 Optional list of stop words to use when generating.\nReturns\nA generator representing the stream of tokens from Anthropic.\nExample", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-13", "text": "Returns\nA generator representing the stream of tokens from Anthropic.\nExample\nprompt = \"Write a poem about a stream.\"\nprompt = f\"\\n\\nHuman: {prompt}\\n\\nAssistant:\"\ngenerator = anthropic.stream(prompt)\nfor token in generator:\n yield token\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.Anyscale[source]#\nWrapper around Anyscale Services.\nTo use, you should have the environment variable ANYSCALE_SERVICE_URL,\nANYSCALE_SERVICE_ROUTE and ANYSCALE_SERVICE_TOKEN set with your Anyscale\nService, or pass it as a named parameter to the constructor.\nExample\nfrom langchain.llms import Anyscale\nanyscale = Anyscale(anyscale_service_url=\"SERVICE_URL\",\n anyscale_service_route=\"SERVICE_ROUTE\",\n anyscale_service_token=\"SERVICE_TOKEN\")\n# Use Ray for distributed processing\nimport ray\nprompt_list=[]\n@ray.remote\ndef send_query(llm, prompt):\n resp = llm(prompt)\n return resp\nfutures = [send_query.remote(anyscale, prompt) for prompt in prompt_list]\nresults = ray.get(futures)\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield model_kwargs: Optional[dict] = None#\nKey word arguments to pass to the model. Reserved for future use\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-14", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-15", "text": "Parameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-16", "text": "Get the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.Aviary[source]#\nAllow you to use an Aviary.\nAviary is a backend for hosted models. You can\nfind out more about aviary at\nray-project/aviary\nHas no dependencies, since it connects to backend\ndirectly.\nTo get a list of the models supported on an", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-17", "text": "directly.\nTo get a list of the models supported on an\naviary, follow the instructions on the web site to\ninstall the aviary CLI and then use:\naviary models\nYou must at least specify the environment\nvariable or parameter AVIARY_URL.\nYou may optionally specify the environment variable\nor parameter AVIARY_TOKEN.\nExample\nfrom langchain.llms import Aviary\nlight = Aviary(aviary_url='AVIARY_URL',\n model='amazon/LightGPT')\nresult = light.predict('How do you make fried rice?')\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-18", "text": "Predict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-19", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-20", "text": "Save the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.AzureOpenAI[source]#\nWrapper around Azure-specific OpenAI large language models.\nTo use, you should have the openai python package installed, and the\nenvironment variable OPENAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import AzureOpenAI\nopenai = AzureOpenAI(model_name=\"text-davinci-003\")\nValidators\nbuild_extra \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_azure_settings \u00bb all fields\nvalidate_environment \u00bb all fields\nfield allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#\nSet of special tokens that are allowed\u3002\nfield batch_size: int = 20#\nBatch size to use when passing multiple documents to generate.\nfield best_of: int = 1#\nGenerates best_of completions server-side and returns the \u201cbest\u201d.\nfield deployment_name: str = ''#\nDeployment name to use.\nfield disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#\nSet of special tokens that are not allowed\u3002\nfield frequency_penalty: float = 0#\nPenalizes repeated tokens according to frequency.\nfield logit_bias: Optional[Dict[str, float]] [Optional]#\nAdjust the probability of specific tokens being generated.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-21", "text": "Adjust the probability of specific tokens being generated.\nfield max_retries: int = 6#\nMaximum number of retries to make when generating.\nfield max_tokens: int = 256#\nThe maximum number of tokens to generate in the completion.\n-1 returns as many tokens as possible given the prompt and\nthe models maximal context size.\nfield model_kwargs: Dict[str, Any] [Optional]#\nHolds any model parameters valid for create call not explicitly specified.\nfield model_name: str = 'text-davinci-003' (alias 'model')#\nModel name to use.\nfield n: int = 1#\nHow many completions to generate for each prompt.\nfield presence_penalty: float = 0#\nPenalizes repeated tokens.\nfield request_timeout: Optional[Union[float, Tuple[float, float]]] = None#\nTimeout for requests to OpenAI completion API. Default is 600 seconds.\nfield streaming: bool = False#\nWhether to stream the results or not.\nfield temperature: float = 0.7#\nWhat sampling temperature to use.\nfield top_p: float = 1#\nTotal probability mass of tokens to consider at each step.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-22", "text": "Run the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-23", "text": "deep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ncreate_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) \u2192 langchain.schema.LLMResult#\nCreate the LLMResult from the choices and prompts.\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) \u2192 List[List[str]]#\nGet the sub prompts for llm call.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token IDs using the tiktoken package.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-24", "text": "Get the token IDs using the tiktoken package.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nmax_tokens_for_prompt(prompt: str) \u2192 int#\nCalculate the maximum number of tokens possible to generate for a prompt.\nParameters\nprompt \u2013 The prompt to pass into the model.\nReturns\nThe maximum number of tokens to generate for a prompt.\nExample\nmax_tokens = openai.max_token_for_prompt(\"Tell me a joke.\")\nmodelname_to_contextsize(modelname: str) \u2192 int#\nCalculate the maximum number of tokens possible to generate for a model.\nParameters\nmodelname \u2013 The modelname we want to know the context size for.\nReturns\nThe maximum context size\nExample\nmax_tokens = openai.modelname_to_contextsize(\"text-davinci-003\")\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nprep_streaming_params(stop: Optional[List[str]] = None) \u2192 Dict[str, Any]#\nPrepare the params for streaming.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-25", "text": "Prepare the params for streaming.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nstream(prompt: str, stop: Optional[List[str]] = None) \u2192 Generator#\nCall OpenAI with streaming flag and return the resulting generator.\nBETA: this is a beta feature while we figure out the right abstraction.\nOnce that happens, this interface could change.\nParameters\nprompt \u2013 The prompts to pass into the model.\nstop \u2013 Optional list of stop words to use when generating.\nReturns\nA generator representing the stream of tokens from OpenAI.\nExample\ngenerator = openai.stream(\"Tell me a joke.\")\nfor token in generator:\n yield token\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.Banana[source]#\nWrapper around Banana large language models.\nTo use, you should have the banana-dev python package installed,\nand the environment variable BANANA_API_KEY set with your API key.\nAny parameters that are valid to be passed to the call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import Banana\nbanana = Banana(model_key=\"\")\nValidators\nbuild_extra \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield model_key: str = ''#\nmodel endpoint to use\nfield model_kwargs: Dict[str, Any] [Optional]#\nHolds any model parameters valid for create call not\nexplicitly specified.\nfield verbose: bool [Optional]#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-26", "text": "explicitly specified.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-27", "text": "Behaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-28", "text": "get_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.Baseten[source]#\nUse your Baseten models in Langchain\nTo use, you should have the baseten python package installed,\nand run baseten.login() with your Baseten API key.\nThe required model param can be either a model id or model", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-29", "text": "The required model param can be either a model id or model\nversion id. Using a model version ID will result in\nslightly faster invocation.\nAny other model parameters can also\nbe passed in with the format input={model_param: value, \u2026}\nThe Baseten model must accept a dictionary of input with the key\n\u201cprompt\u201d and return a dictionary with a key \u201cdata\u201d which maps\nto a list of response strings.\nExample\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-30", "text": "Predict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-31", "text": "Take in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-32", "text": "Try to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.Beam[source]#\nWrapper around Beam API for gpt2 large language model.\nTo use, you should have the beam-sdk python package installed,\nand the environment variable BEAM_CLIENT_ID set with your client id\nand BEAM_CLIENT_SECRET set with your client secret. Information on how\nto get these is available here: https://docs.beam.cloud/account/api-keys.\nThe wrapper can then be called as follows, where the name, cpu, memory, gpu,\npython version, and python packages can be updated accordingly. Once deployed,\nthe instance can be called.\nExample\nllm = Beam(model_name=\"gpt2\",\n name=\"langchain-gpt2\",\n cpu=8,\n memory=\"32Gi\",\n gpu=\"A10G\",\n python_version=\"python3.8\",\n python_packages=[\n \"diffusers[torch]>=0.10\",\n \"transformers\",\n \"torch\",\n \"pillow\",\n \"accelerate\",\n \"safetensors\",\n \"xformers\",],\n max_length=50)\nllm._deploy()\ncall_result = llm._call(input)\nValidators\nbuild_extra \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield model_kwargs: Dict[str, Any] [Optional]#\nHolds any model parameters valid for create call not\nexplicitly specified.\nfield url: str = ''#\nmodel endpoint to use\nfield verbose: bool [Optional]#\nWhether to print out response text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-33", "text": "field verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\napp_creation() \u2192 None[source]#\nCreates a Python file which will contain your Beam app definition.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-34", "text": "Behaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-35", "text": "get_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nrun_creation() \u2192 None[source]#\nCreates a Python file which will be deployed on beam.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.Bedrock[source]#\nLLM provider to invoke Bedrock models.\nTo authenticate, the AWS client uses the following methods to\nautomatically load credentials:", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-36", "text": "To authenticate, the AWS client uses the following methods to\nautomatically load credentials:\nhttps://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nIf a specific credential profile should be used, you must pass\nthe name of the profile from the ~/.aws/credentials file that is to be used.\nMake sure the credentials / roles used have the required policies to\naccess the Bedrock service.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield credentials_profile_name: Optional[str] = None#\nThe name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\nhas either access keys or role information specified.\nIf not specified, the default credential profile or, if on an EC2 instance,\ncredentials from IMDS will be used.\nSee: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nfield model_id: str [Required]#\nId of the model to call, e.g., amazon.titan-tg1-large, this is\nequivalent to the modelId property in the list-foundation-models api\nfield model_kwargs: Optional[Dict] = None#\nKey word arguments to pass to the model.\nfield region_name: Optional[str] = None#\nThe aws region e.g., us-west-2. Fallsback to AWS_DEFAULT_REGION env variable\nor region specified in ~/.aws/config in case it is not provided here.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-37", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-38", "text": "Parameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-39", "text": "Get the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.CTransformers[source]#\nWrapper around the C Transformers LLM interface.\nTo use, you should have the ctransformers python package installed.\nSee marella/ctransformers\nExample\nfrom langchain.llms import CTransformers", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-40", "text": "Example\nfrom langchain.llms import CTransformers\nllm = CTransformers(model=\"/path/to/ggml-gpt-2.bin\", model_type=\"gpt2\")\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield config: Optional[Dict[str, Any]] = None#\nThe config parameters.\nSee marella/ctransformers\nfield lib: Optional[str] = None#\nThe path to a shared library or one of avx2, avx, basic.\nfield model: str [Required]#\nThe path to a model file or directory or the name of a Hugging Face Hub\nmodel repo.\nfield model_file: Optional[str] = None#\nThe name of the model file in repo or directory.\nfield model_type: Optional[str] = None#\nThe model type.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-41", "text": "Take in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-42", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-43", "text": "Save the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.CerebriumAI[source]#\nWrapper around CerebriumAI large language models.\nTo use, you should have the cerebrium python package installed, and the\nenvironment variable CEREBRIUMAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import CerebriumAI\ncerebrium = CerebriumAI(endpoint_url=\"\")\nValidators\nbuild_extra \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield endpoint_url: str = ''#\nmodel endpoint to use\nfield model_kwargs: Dict[str, Any] [Optional]#\nHolds any model parameters valid for create call not\nexplicitly specified.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-44", "text": "Run the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-45", "text": "Returns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-46", "text": "predict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.Cohere[source]#\nWrapper around Cohere large language models.\nTo use, you should have the cohere python package installed, and the\nenvironment variable COHERE_API_KEY set with your API key, or pass\nit as a named parameter to the constructor.\nExample\nfrom langchain.llms import Cohere\ncohere = Cohere(model=\"gptd-instruct-tft\", cohere_api_key=\"my-api-key\")\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield frequency_penalty: float = 0.0#\nPenalizes repeated tokens according to frequency. Between 0 and 1.\nfield k: int = 0#\nNumber of most likely tokens to consider at each step.\nfield max_retries: int = 10#\nMaximum number of retries to make when generating.\nfield max_tokens: int = 256#\nDenotes the number of tokens to predict per generation.\nfield model: Optional[str] = None#\nModel name to use.\nfield p: int = 1#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-47", "text": "Model name to use.\nfield p: int = 1#\nTotal probability mass of tokens to consider at each step.\nfield presence_penalty: float = 0.0#\nPenalizes repeated tokens. Between 0 and 1.\nfield temperature: float = 0.75#\nA non-negative float that tunes the degree of randomness in generation.\nfield truncate: Optional[str] = None#\nSpecify how the client handles inputs longer than the maximum token\nlength: Truncate from START, END or NONE\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-48", "text": "Predict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-49", "text": "Take in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-50", "text": "Try to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.Databricks[source]#\nLLM wrapper around a Databricks serving endpoint or a cluster driver proxy app.\nIt supports two endpoint types:\nServing endpoint (recommended for both production and development).\nWe assume that an LLM was registered and deployed to a serving endpoint.\nTo wrap it as an LLM you must have \u201cCan Query\u201d permission to the endpoint.\nSet endpoint_name accordingly and do not set cluster_id and\ncluster_driver_port.\nThe expected model signature is:\ninputs:\n[{\"name\": \"prompt\", \"type\": \"string\"},\n {\"name\": \"stop\", \"type\": \"list[string]\"}]\noutputs: [{\"type\": \"string\"}]\nCluster driver proxy app (recommended for interactive development).\nOne can load an LLM on a Databricks interactive cluster and start a local HTTP\nserver on the driver node to serve the model at / using HTTP POST method\nwith JSON input/output.\nPlease use a port number between [3000, 8000] and let the server listen to\nthe driver IP address or simply 0.0.0.0 instead of localhost only.\nTo wrap it as an LLM you must have \u201cCan Attach To\u201d permission to the cluster.\nSet cluster_id and cluster_driver_port and do not set endpoint_name.\nThe expected server schema (using JSON schema) is:\ninputs:\n{\"type\": \"object\",\n \"properties\": {\n \"prompt\": {\"type\": \"string\"},\n \"stop\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}}},\n \"required\": [\"prompt\"]}`\noutputs: {\"type\": \"string\"}\nIf the endpoint model signature is different or you want to set extra params,", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-51", "text": "If the endpoint model signature is different or you want to set extra params,\nyou can use transform_input_fn and transform_output_fn to apply necessary\ntransformations before and after the query.\nValidators\nraise_deprecation \u00bb all fields\nset_cluster_driver_port \u00bb cluster_driver_port\nset_cluster_id \u00bb cluster_id\nset_model_kwargs \u00bb model_kwargs\nset_verbose \u00bb verbose\nfield api_token: str [Optional]#\nDatabricks personal access token.\nIf not provided, the default value is determined by\nthe DATABRICKS_TOKEN environment variable if present, or\nan automatically generated temporary token if running inside a Databricks\nnotebook attached to an interactive cluster in \u201csingle user\u201d or\n\u201cno isolation shared\u201d mode.\nfield cluster_driver_port: Optional[str] = None#\nThe port number used by the HTTP server running on the cluster driver node.\nThe server should listen on the driver IP address or simply 0.0.0.0 to connect.\nWe recommend the server using a port number between [3000, 8000].\nfield cluster_id: Optional[str] = None#\nID of the cluster if connecting to a cluster driver proxy app.\nIf neither endpoint_name nor cluster_id is not provided and the code runs\ninside a Databricks notebook attached to an interactive cluster in \u201csingle user\u201d\nor \u201cno isolation shared\u201d mode, the current cluster ID is used as default.\nYou must not set both endpoint_name and cluster_id.\nfield endpoint_name: Optional[str] = None#\nName of the model serving endpont.\nYou must specify the endpoint name to connect to a model serving endpoint.\nYou must not set both endpoint_name and cluster_id.\nfield host: str [Optional]#\nDatabricks workspace hostname.\nIf not provided, the default value is determined by\nthe DATABRICKS_HOST environment variable if present, or", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-52", "text": "the DATABRICKS_HOST environment variable if present, or\nthe hostname of the current Databricks workspace if running inside\na Databricks notebook attached to an interactive cluster in \u201csingle user\u201d\nor \u201cno isolation shared\u201d mode.\nfield model_kwargs: Optional[Dict[str, Any]] = None#\nExtra parameters to pass to the endpoint.\nfield transform_input_fn: Optional[Callable] = None#\nA function that transforms {prompt, stop, **kwargs} into a JSON-compatible\nrequest object that the endpoint accepts.\nFor example, you can apply a prompt template to the input prompt.\nfield transform_output_fn: Optional[Callable[[...], str]] = None#\nA function that transforms the output from the endpoint to the generated text.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-53", "text": "Take in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-54", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-55", "text": "Save the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.DeepInfra[source]#\nWrapper around DeepInfra deployed models.\nTo use, you should have the requests python package installed, and the\nenvironment variable DEEPINFRA_API_TOKEN set with your API token, or pass\nit as a named parameter to the constructor.\nOnly supports text-generation and text2text-generation for now.\nExample\nfrom langchain.llms import DeepInfra\ndi = DeepInfra(model_id=\"google/flan-t5-xl\",\n deepinfra_api_token=\"my-api-key\")\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-56", "text": "Run the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-57", "text": "Returns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-58", "text": "predict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.FakeListLLM[source]#\nFake LLM wrapper for testing purposes.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-59", "text": "Take in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-60", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-61", "text": "Save the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.ForefrontAI[source]#\nWrapper around ForefrontAI large language models.\nTo use, you should have the environment variable FOREFRONTAI_API_KEY\nset with your API key.\nExample\nfrom langchain.llms import ForefrontAI\nforefrontai = ForefrontAI(endpoint_url=\"\")\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield base_url: Optional[str] = None#\nBase url to use, if None decides based on model name.\nfield endpoint_url: str = ''#\nModel name to use.\nfield length: int = 256#\nThe maximum number of tokens to generate in the completion.\nfield repetition_penalty: int = 1#\nPenalizes repeated tokens according to frequency.\nfield temperature: float = 0.7#\nWhat sampling temperature to use.\nfield top_k: int = 40#\nThe number of highest probability vocabulary tokens to\nkeep for top-k-filtering.\nfield top_p: float = 1.0#\nTotal probability mass of tokens to consider at each step.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-62", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-63", "text": "Parameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-64", "text": "Get the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.GPT4All[source]#\nWrapper around GPT4All language models.\nTo use, you should have the gpt4all python package installed, the\npre-trained model file, and the model\u2019s config information.\nExample\nfrom langchain.llms import GPT4All", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-65", "text": "Example\nfrom langchain.llms import GPT4All\nmodel = GPT4All(model=\"./models/gpt4all-model.bin\", n_ctx=512, n_threads=8)\n# Simplest invocation\nresponse = model(\"Once upon a time, \")\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield allow_download: bool = False#\nIf model does not exist in ~/.cache/gpt4all/, download it.\nfield context_erase: float = 0.5#\nLeave (n_ctx * context_erase) tokens\nstarting from beginning if the context has run out.\nfield echo: Optional[bool] = False#\nWhether to echo the prompt.\nfield embedding: bool = False#\nUse embedding mode only.\nfield f16_kv: bool = False#\nUse half-precision for key/value cache.\nfield logits_all: bool = False#\nReturn logits for all tokens, not just the last token.\nfield model: str [Required]#\nPath to the pre-trained GPT4All model file.\nfield n_batch: int = 1#\nBatch size for prompt processing.\nfield n_ctx: int = 512#\nToken context window.\nfield n_parts: int = -1#\nNumber of parts to split the model into.\nIf -1, the number of parts is automatically determined.\nfield n_predict: Optional[int] = 256#\nThe maximum number of tokens to generate.\nfield n_threads: Optional[int] = 4#\nNumber of threads to use.\nfield repeat_last_n: Optional[int] = 64#\nLast n tokens to penalize\nfield repeat_penalty: Optional[float] = 1.3#\nThe penalty to apply to repeated tokens.\nfield seed: int = 0#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-66", "text": "The penalty to apply to repeated tokens.\nfield seed: int = 0#\nSeed. If -1, a random seed is used.\nfield stop: Optional[List[str]] = []#\nA list of strings to stop generation when encountered.\nfield streaming: bool = False#\nWhether to stream the results or not.\nfield temp: Optional[float] = 0.8#\nThe temperature to use for sampling.\nfield top_k: Optional[int] = 40#\nThe top-k value to use for sampling.\nfield top_p: Optional[float] = 0.95#\nThe top-p value to use for sampling.\nfield use_mlock: bool = False#\nForce system to keep model in RAM.\nfield verbose: bool [Optional]#\nWhether to print out response text.\nfield vocab_only: bool = False#\nOnly load the vocabulary, no weights.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-67", "text": "Take in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-68", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-69", "text": "Save the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.GooglePalm[source]#\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield max_output_tokens: Optional[int] = None#\nMaximum number of tokens to include in a candidate. Must be greater than zero.\nIf unset, will default to 64.\nfield model_name: str = 'models/text-bison-001'#\nModel name to use.\nfield n: int = 1#\nNumber of chat completions to generate for each prompt. Note that the API may\nnot return the full n completions if duplicates are generated.\nfield temperature: float = 0.7#\nRun inference with this temperature. Must by in the closed interval\n[0.0, 1.0].\nfield top_k: Optional[int] = None#\nDecode using top-k sampling: consider the set of top_k most probable tokens.\nMust be positive.\nfield top_p: Optional[float] = None#\nDecode using nucleus sampling: consider the smallest set of tokens whose\nprobability sum is at least top_p. Must be in the closed interval [0.0, 1.0].\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-70", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-71", "text": "Parameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-72", "text": "Get the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.GooseAI[source]#\nWrapper around OpenAI large language models.\nTo use, you should have the openai python package installed, and the\nenvironment variable GOOSEAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nExample", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-73", "text": "in, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import GooseAI\ngooseai = GooseAI(model_name=\"gpt-neo-20b\")\nValidators\nbuild_extra \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield frequency_penalty: float = 0#\nPenalizes repeated tokens according to frequency.\nfield logit_bias: Optional[Dict[str, float]] [Optional]#\nAdjust the probability of specific tokens being generated.\nfield max_tokens: int = 256#\nThe maximum number of tokens to generate in the completion.\n-1 returns as many tokens as possible given the prompt and\nthe models maximal context size.\nfield min_tokens: int = 1#\nThe minimum number of tokens to generate in the completion.\nfield model_kwargs: Dict[str, Any] [Optional]#\nHolds any model parameters valid for create call not explicitly specified.\nfield model_name: str = 'gpt-neo-20b'#\nModel name to use\nfield n: int = 1#\nHow many completions to generate for each prompt.\nfield presence_penalty: float = 0#\nPenalizes repeated tokens.\nfield temperature: float = 0.7#\nWhat sampling temperature to use\nfield top_p: float = 1#\nTotal probability mass of tokens to consider at each step.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-74", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-75", "text": "Parameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-76", "text": "Get the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.HuggingFaceEndpoint[source]#\nWrapper around HuggingFaceHub Inference Endpoints.\nTo use, you should have the huggingface_hub python package installed, and the\nenvironment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass\nit as a named parameter to the constructor.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-77", "text": "it as a named parameter to the constructor.\nOnly supports text-generation and text2text-generation for now.\nExample\nfrom langchain.llms import HuggingFaceEndpoint\nendpoint_url = (\n \"https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud\"\n)\nhf = HuggingFaceEndpoint(\n endpoint_url=endpoint_url,\n huggingfacehub_api_token=\"my-api-key\"\n)\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield endpoint_url: str = ''#\nEndpoint URL to use.\nfield model_kwargs: Optional[dict] = None#\nKey word arguments to pass to the model.\nfield task: Optional[str] = None#\nTask to call the model with.\nShould be a task that returns generated_text or summary_text.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-78", "text": "Take in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-79", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-80", "text": "Save the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.HuggingFaceHub[source]#\nWrapper around HuggingFaceHub models.\nTo use, you should have the huggingface_hub python package installed, and the\nenvironment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass\nit as a named parameter to the constructor.\nOnly supports text-generation, text2text-generation and summarization for now.\nExample\nfrom langchain.llms import HuggingFaceHub\nhf = HuggingFaceHub(repo_id=\"gpt2\", huggingfacehub_api_token=\"my-api-key\")\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield model_kwargs: Optional[dict] = None#\nKey word arguments to pass to the model.\nfield repo_id: str = 'gpt2'#\nModel name to use.\nfield task: Optional[str] = None#\nTask to call the model with.\nShould be a task that returns generated_text or summary_text.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-81", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-82", "text": "Parameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-83", "text": "Get the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.HuggingFacePipeline[source]#\nWrapper around HuggingFace Pipeline API.\nTo use, you should have the transformers python package installed.\nOnly supports text-generation, text2text-generation and summarization for now.\nExample using from_model_id:from langchain.llms import HuggingFacePipeline\nhf = HuggingFacePipeline.from_model_id(", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-84", "text": "hf = HuggingFacePipeline.from_model_id(\n model_id=\"gpt2\",\n task=\"text-generation\",\n pipeline_kwargs={\"max_new_tokens\": 10},\n)\nExample passing pipeline in directly:from langchain.llms import HuggingFacePipeline\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\nmodel_id = \"gpt2\"\ntokenizer = AutoTokenizer.from_pretrained(model_id)\nmodel = AutoModelForCausalLM.from_pretrained(model_id)\npipe = pipeline(\n \"text-generation\", model=model, tokenizer=tokenizer, max_new_tokens=10\n)\nhf = HuggingFacePipeline(pipeline=pipe)\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield model_id: str = 'gpt2'#\nModel name to use.\nfield model_kwargs: Optional[dict] = None#\nKey word arguments passed to the model.\nfield pipeline_kwargs: Optional[dict] = None#\nKey word arguments passed to the pipeline.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-85", "text": "Run the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-86", "text": "Returns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\nclassmethod from_model_id(model_id: str, task: str, device: int = - 1, model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None, **kwargs: Any) \u2192 langchain.llms.base.LLM[source]#\nConstruct the pipeline object from model_id and task.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-87", "text": "Get the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.HuggingFaceTextGenInference[source]#\nHuggingFace text generation inference API.\nThis class is a wrapper around the HuggingFace text generation inference API.\nIt is used to generate text from a given prompt.\nAttributes:\n- max_new_tokens: The maximum number of tokens to generate.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-88", "text": "Attributes:\n- max_new_tokens: The maximum number of tokens to generate.\n- top_k: The number of top-k tokens to consider when generating text.\n- top_p: The cumulative probability threshold for generating text.\n- typical_p: The typical probability threshold for generating text.\n- temperature: The temperature to use when generating text.\n- repetition_penalty: The repetition penalty to use when generating text.\n- stop_sequences: A list of stop sequences to use when generating text.\n- seed: The seed to use when generating text.\n- inference_server_url: The URL of the inference server to use.\n- timeout: The timeout value in seconds to use while connecting to inference server.\n- client: The client object used to communicate with the inference server.\nMethods:\n- _call: Generates text based on a given prompt and stop sequences.\n- _llm_type: Returns the type of LLM.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-89", "text": "Run the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-90", "text": "Returns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-91", "text": "predict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.HumanInputLLM[source]#\nA LLM wrapper which returns user input as the response.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-92", "text": "Run the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-93", "text": "Returns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-94", "text": "predict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.LlamaCpp[source]#\nWrapper around the llama.cpp model.\nTo use, you should have the llama-cpp-python library installed, and provide the\npath to the Llama model as a named parameter to the constructor.\nCheck out: abetlen/llama-cpp-python\nExample\nfrom langchain.llms import LlamaCppEmbeddings\nllm = LlamaCppEmbeddings(model_path=\"/path/to/llama/model\")\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield echo: Optional[bool] = False#\nWhether to echo the prompt.\nfield f16_kv: bool = True#\nUse half-precision for key/value cache.\nfield last_n_tokens_size: Optional[int] = 64#\nThe number of tokens to look back when applying the repeat_penalty.\nfield logits_all: bool = False#\nReturn logits for all tokens, not just the last token.\nfield logprobs: Optional[int] = None#\nThe number of logprobs to return. If None, no logprobs are returned.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-95", "text": "The number of logprobs to return. If None, no logprobs are returned.\nfield lora_base: Optional[str] = None#\nThe path to the Llama LoRA base model.\nfield lora_path: Optional[str] = None#\nThe path to the Llama LoRA. If None, no LoRa is loaded.\nfield max_tokens: Optional[int] = 256#\nThe maximum number of tokens to generate.\nfield model_path: str [Required]#\nThe path to the Llama model file.\nfield n_batch: Optional[int] = 8#\nNumber of tokens to process in parallel.\nShould be a number between 1 and n_ctx.\nfield n_ctx: int = 512#\nToken context window.\nfield n_gpu_layers: Optional[int] = None#\nNumber of layers to be loaded into gpu memory. Default None.\nfield n_parts: int = -1#\nNumber of parts to split the model into.\nIf -1, the number of parts is automatically determined.\nfield n_threads: Optional[int] = None#\nNumber of threads to use.\nIf None, the number of threads is automatically determined.\nfield repeat_penalty: Optional[float] = 1.1#\nThe penalty to apply to repeated tokens.\nfield seed: int = -1#\nSeed. If -1, a random seed is used.\nfield stop: Optional[List[str]] = []#\nA list of strings to stop generation when encountered.\nfield streaming: bool = True#\nWhether to stream the results, token by token.\nfield suffix: Optional[str] = None#\nA suffix to append to the generated text. If None, no suffix is appended.\nfield temperature: Optional[float] = 0.8#\nThe temperature to use for sampling.\nfield top_k: Optional[int] = 40#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-96", "text": "The temperature to use for sampling.\nfield top_k: Optional[int] = 40#\nThe top-k value to use for sampling.\nfield top_p: Optional[float] = 0.95#\nThe top-p value to use for sampling.\nfield use_mlock: bool = False#\nForce system to keep model in RAM.\nfield use_mmap: Optional[bool] = True#\nWhether to keep the model loaded in RAM\nfield verbose: bool [Optional]#\nWhether to print out response text.\nfield vocab_only: bool = False#\nOnly load the vocabulary, no weights.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-97", "text": "Predict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-98", "text": "Take in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int[source]#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-99", "text": ".. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nstream(prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[langchain.callbacks.manager.CallbackManagerForLLMRun] = None) \u2192 Generator[Dict, None, None][source]#\nYields results objects as they are generated in real time.\nBETA: this is a beta feature while we figure out the right abstraction.\nOnce that happens, this interface could change.\nIt also calls the callback manager\u2019s on_llm_new_token event with\nsimilar parameters to the OpenAI LLM class method of the same name.\nArgs:prompt: The prompts to pass into the model.\nstop: Optional list of stop words to use when generating.\nReturns:A generator representing the stream of tokens being generated.\nYields:A dictionary like objects containing a string token and metadata.\nSee llama-cpp-python docs and below for more.\nExample:from langchain.llms import LlamaCpp\nllm = LlamaCpp(\n model_path=\"/path/to/local/model.bin\",\n temperature = 0.5\n)\nfor chunk in llm.stream(\"Ask 'Hi, how are you?' like a pirate:'\",\n stop=[\"'\",\"\n\u201c]):result = chunk[\u201cchoices\u201d][0]\nprint(result[\u201ctext\u201d], end=\u2019\u2019, flush=True)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.Modal[source]#\nWrapper around Modal large language models.\nTo use, you should have the modal-client python package installed.\nAny parameters that are valid to be passed to the call can be passed\nin, even if not explicitly saved on this class.\nExample", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-100", "text": "in, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import Modal\nmodal = Modal(endpoint_url=\"\")\nValidators\nbuild_extra \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield endpoint_url: str = ''#\nmodel endpoint to use\nfield model_kwargs: Dict[str, Any] [Optional]#\nHolds any model parameters valid for create call not\nexplicitly specified.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-101", "text": "Predict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-102", "text": "Take in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-103", "text": "Try to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.MosaicML[source]#\nWrapper around MosaicML\u2019s LLM inference service.\nTo use, you should have the\nenvironment variable MOSAICML_API_TOKEN set with your API token, or pass\nit as a named parameter to the constructor.\nExample\nfrom langchain.llms import MosaicML\nendpoint_url = (\n \"https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict\"\n)\nmosaic_llm = MosaicML(\n endpoint_url=endpoint_url,\n mosaicml_api_token=\"my-api-key\"\n)\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict'#\nEndpoint URL to use.\nfield inject_instruction_format: bool = False#\nWhether to inject the instruction format into the prompt.\nfield model_kwargs: Optional[dict] = None#\nKey word arguments to pass to the model.\nfield retry_sleep: float = 1.0#\nHow long to try sleeping for if a rate limit is encountered\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-104", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-105", "text": "Parameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-106", "text": "Get the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.NLPCloud[source]#\nWrapper around NLPCloud large language models.\nTo use, you should have the nlpcloud python package installed, and the\nenvironment variable NLPCLOUD_API_KEY set with your API key.\nExample\nfrom langchain.llms import NLPCloud\nnlpcloud = NLPCloud(model=\"gpt-neox-20b\")", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-107", "text": "nlpcloud = NLPCloud(model=\"gpt-neox-20b\")\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield bad_words: List[str] = []#\nList of tokens not allowed to be generated.\nfield do_sample: bool = True#\nWhether to use sampling (True) or greedy decoding.\nfield early_stopping: bool = False#\nWhether to stop beam search at num_beams sentences.\nfield length_no_input: bool = True#\nWhether min_length and max_length should include the length of the input.\nfield length_penalty: float = 1.0#\nExponential penalty to the length.\nfield max_length: int = 256#\nThe maximum number of tokens to generate in the completion.\nfield min_length: int = 1#\nThe minimum number of tokens to generate in the completion.\nfield model_name: str = 'finetuned-gpt-neox-20b'#\nModel name to use.\nfield num_beams: int = 1#\nNumber of beams for beam search.\nfield num_return_sequences: int = 1#\nHow many completions to generate for each prompt.\nfield remove_end_sequence: bool = True#\nWhether or not to remove the end sequence token.\nfield remove_input: bool = True#\nRemove input text from API response\nfield repetition_penalty: float = 1.0#\nPenalizes repeated tokens. 1.0 means no penalty.\nfield temperature: float = 0.7#\nWhat sampling temperature to use.\nfield top_k: int = 50#\nThe number of highest probability tokens to keep for top-k filtering.\nfield top_p: int = 1#\nTotal probability mass of tokens to consider at each step.\nfield verbose: bool [Optional]#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-108", "text": "field verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-109", "text": "Behaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-110", "text": "get_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.OpenAI[source]#\nWrapper around OpenAI large language models.\nTo use, you should have the openai python package installed, and the\nenvironment variable OPENAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the openai.create call can be passed", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-111", "text": "Any parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import OpenAI\nopenai = OpenAI(model_name=\"text-davinci-003\")\nValidators\nbuild_extra \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#\nSet of special tokens that are allowed\u3002\nfield batch_size: int = 20#\nBatch size to use when passing multiple documents to generate.\nfield best_of: int = 1#\nGenerates best_of completions server-side and returns the \u201cbest\u201d.\nfield disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#\nSet of special tokens that are not allowed\u3002\nfield frequency_penalty: float = 0#\nPenalizes repeated tokens according to frequency.\nfield logit_bias: Optional[Dict[str, float]] [Optional]#\nAdjust the probability of specific tokens being generated.\nfield max_retries: int = 6#\nMaximum number of retries to make when generating.\nfield max_tokens: int = 256#\nThe maximum number of tokens to generate in the completion.\n-1 returns as many tokens as possible given the prompt and\nthe models maximal context size.\nfield model_kwargs: Dict[str, Any] [Optional]#\nHolds any model parameters valid for create call not explicitly specified.\nfield model_name: str = 'text-davinci-003' (alias 'model')#\nModel name to use.\nfield n: int = 1#\nHow many completions to generate for each prompt.\nfield presence_penalty: float = 0#\nPenalizes repeated tokens.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-112", "text": "field presence_penalty: float = 0#\nPenalizes repeated tokens.\nfield request_timeout: Optional[Union[float, Tuple[float, float]]] = None#\nTimeout for requests to OpenAI completion API. Default is 600 seconds.\nfield streaming: bool = False#\nWhether to stream the results or not.\nfield temperature: float = 0.7#\nWhat sampling temperature to use.\nfield top_p: float = 1#\nTotal probability mass of tokens to consider at each step.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-113", "text": "Predict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ncreate_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) \u2192 langchain.schema.LLMResult#\nCreate the LLMResult from the choices and prompts.\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-114", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) \u2192 List[List[str]]#\nGet the sub prompts for llm call.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token IDs using the tiktoken package.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nmax_tokens_for_prompt(prompt: str) \u2192 int#\nCalculate the maximum number of tokens possible to generate for a prompt.\nParameters\nprompt \u2013 The prompt to pass into the model.\nReturns", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-115", "text": "Parameters\nprompt \u2013 The prompt to pass into the model.\nReturns\nThe maximum number of tokens to generate for a prompt.\nExample\nmax_tokens = openai.max_token_for_prompt(\"Tell me a joke.\")\nmodelname_to_contextsize(modelname: str) \u2192 int#\nCalculate the maximum number of tokens possible to generate for a model.\nParameters\nmodelname \u2013 The modelname we want to know the context size for.\nReturns\nThe maximum context size\nExample\nmax_tokens = openai.modelname_to_contextsize(\"text-davinci-003\")\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nprep_streaming_params(stop: Optional[List[str]] = None) \u2192 Dict[str, Any]#\nPrepare the params for streaming.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nstream(prompt: str, stop: Optional[List[str]] = None) \u2192 Generator#\nCall OpenAI with streaming flag and return the resulting generator.\nBETA: this is a beta feature while we figure out the right abstraction.\nOnce that happens, this interface could change.\nParameters\nprompt \u2013 The prompts to pass into the model.\nstop \u2013 Optional list of stop words to use when generating.\nReturns\nA generator representing the stream of tokens from OpenAI.\nExample\ngenerator = openai.stream(\"Tell me a joke.\")\nfor token in generator:\n yield token", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-116", "text": "for token in generator:\n yield token\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.OpenAIChat[source]#\nWrapper around OpenAI Chat large language models.\nTo use, you should have the openai python package installed, and the\nenvironment variable OPENAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import OpenAIChat\nopenaichat = OpenAIChat(model_name=\"gpt-3.5-turbo\")\nValidators\nbuild_extra \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#\nSet of special tokens that are allowed\u3002\nfield disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#\nSet of special tokens that are not allowed\u3002\nfield max_retries: int = 6#\nMaximum number of retries to make when generating.\nfield model_kwargs: Dict[str, Any] [Optional]#\nHolds any model parameters valid for create call not explicitly specified.\nfield model_name: str = 'gpt-3.5-turbo'#\nModel name to use.\nfield prefix_messages: List [Optional]#\nSeries of messages for Chat input.\nfield streaming: bool = False#\nWhether to stream the results or not.\nfield verbose: bool [Optional]#\nWhether to print out response text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-117", "text": "field verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-118", "text": "Behaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int][source]#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-119", "text": "get_token_ids(text: str) \u2192 List[int][source]#\nGet the token IDs using the tiktoken package.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.OpenLM[source]#\nValidators\nbuild_extra \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#\nSet of special tokens that are allowed\u3002", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-120", "text": "Set of special tokens that are allowed\u3002\nfield batch_size: int = 20#\nBatch size to use when passing multiple documents to generate.\nfield best_of: int = 1#\nGenerates best_of completions server-side and returns the \u201cbest\u201d.\nfield disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#\nSet of special tokens that are not allowed\u3002\nfield frequency_penalty: float = 0#\nPenalizes repeated tokens according to frequency.\nfield logit_bias: Optional[Dict[str, float]] [Optional]#\nAdjust the probability of specific tokens being generated.\nfield max_retries: int = 6#\nMaximum number of retries to make when generating.\nfield max_tokens: int = 256#\nThe maximum number of tokens to generate in the completion.\n-1 returns as many tokens as possible given the prompt and\nthe models maximal context size.\nfield model_kwargs: Dict[str, Any] [Optional]#\nHolds any model parameters valid for create call not explicitly specified.\nfield model_name: str = 'text-davinci-003' (alias 'model')#\nModel name to use.\nfield n: int = 1#\nHow many completions to generate for each prompt.\nfield presence_penalty: float = 0#\nPenalizes repeated tokens.\nfield request_timeout: Optional[Union[float, Tuple[float, float]]] = None#\nTimeout for requests to OpenAI completion API. Default is 600 seconds.\nfield streaming: bool = False#\nWhether to stream the results or not.\nfield temperature: float = 0.7#\nWhat sampling temperature to use.\nfield top_p: float = 1#\nTotal probability mass of tokens to consider at each step.\nfield verbose: bool [Optional]#\nWhether to print out response text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-121", "text": "field verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-122", "text": "Behaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ncreate_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) \u2192 langchain.schema.LLMResult#\nCreate the LLMResult from the choices and prompts.\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-123", "text": "Get the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) \u2192 List[List[str]]#\nGet the sub prompts for llm call.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token IDs using the tiktoken package.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nmax_tokens_for_prompt(prompt: str) \u2192 int#\nCalculate the maximum number of tokens possible to generate for a prompt.\nParameters\nprompt \u2013 The prompt to pass into the model.\nReturns\nThe maximum number of tokens to generate for a prompt.\nExample\nmax_tokens = openai.max_token_for_prompt(\"Tell me a joke.\")\nmodelname_to_contextsize(modelname: str) \u2192 int#\nCalculate the maximum number of tokens possible to generate for a model.\nParameters\nmodelname \u2013 The modelname we want to know the context size for.\nReturns\nThe maximum context size\nExample\nmax_tokens = openai.modelname_to_contextsize(\"text-davinci-003\")", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-124", "text": "max_tokens = openai.modelname_to_contextsize(\"text-davinci-003\")\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nprep_streaming_params(stop: Optional[List[str]] = None) \u2192 Dict[str, Any]#\nPrepare the params for streaming.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nstream(prompt: str, stop: Optional[List[str]] = None) \u2192 Generator#\nCall OpenAI with streaming flag and return the resulting generator.\nBETA: this is a beta feature while we figure out the right abstraction.\nOnce that happens, this interface could change.\nParameters\nprompt \u2013 The prompts to pass into the model.\nstop \u2013 Optional list of stop words to use when generating.\nReturns\nA generator representing the stream of tokens from OpenAI.\nExample\ngenerator = openai.stream(\"Tell me a joke.\")\nfor token in generator:\n yield token\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.Petals[source]#\nWrapper around Petals Bloom models.\nTo use, you should have the petals python package installed, and the\nenvironment variable HUGGINGFACE_API_KEY set with your API key.\nAny parameters that are valid to be passed to the call can be passed", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-125", "text": "Any parameters that are valid to be passed to the call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import petals\npetals = Petals()\nValidators\nbuild_extra \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield client: Any = None#\nThe client to use for the API calls.\nfield do_sample: bool = True#\nWhether or not to use sampling; use greedy decoding otherwise.\nfield max_length: Optional[int] = None#\nThe maximum length of the sequence to be generated.\nfield max_new_tokens: int = 256#\nThe maximum number of new tokens to generate in the completion.\nfield model_kwargs: Dict[str, Any] [Optional]#\nHolds any model parameters valid for create call\nnot explicitly specified.\nfield model_name: str = 'bigscience/bloom-petals'#\nThe model to use.\nfield temperature: float = 0.7#\nWhat sampling temperature to use\nfield tokenizer: Any = None#\nThe tokenizer to use for the API calls.\nfield top_k: Optional[int] = None#\nThe number of highest probability vocabulary tokens\nto keep for top-k-filtering.\nfield top_p: float = 0.9#\nThe cumulative probability for top-p sampling.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-126", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-127", "text": "Parameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-128", "text": "Get the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.PipelineAI[source]#\nWrapper around PipelineAI large language models.\nTo use, you should have the pipeline-ai python package installed,\nand the environment variable PIPELINE_API_KEY set with your API key.\nAny parameters that are valid to be passed to the call can be passed\nin, even if not explicitly saved on this class.\nExample", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-129", "text": "in, even if not explicitly saved on this class.\nExample\nfrom langchain import PipelineAI\npipeline = PipelineAI(pipeline_key=\"\")\nValidators\nbuild_extra \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield pipeline_key: str = ''#\nThe id or tag of the target pipeline\nfield pipeline_kwargs: Dict[str, Any] [Optional]#\nHolds any pipeline parameters valid for create call not\nexplicitly specified.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-130", "text": "Predict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-131", "text": "Take in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-132", "text": "Try to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.PredictionGuard[source]#\nWrapper around Prediction Guard large language models.\nTo use, you should have the predictionguard python package installed, and the\nenvironment variable PREDICTIONGUARD_TOKEN set with your access token, or pass\nit as a named parameter to the constructor. To use Prediction Guard\u2019s API along\nwith OpenAI models, set the environment variable OPENAI_API_KEY with your\nOpenAI API key as well.\nExample\npgllm = PredictionGuard(model=\"MPT-7B-Instruct\",\n token=\"my-access-token\",\n output={\n \"type\": \"boolean\"\n })\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield max_tokens: int = 256#\nDenotes the number of tokens to predict per generation.\nfield model: Optional[str] = 'MPT-7B-Instruct'#\nModel name to use.\nfield output: Optional[Dict[str, Any]] = None#\nThe output type or structure for controlling the LLM output.\nfield temperature: float = 0.75#\nA non-negative float that tunes the degree of randomness in generation.\nfield token: Optional[str] = None#\nYour Prediction Guard access token.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-133", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-134", "text": "Parameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-135", "text": "Get the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.PromptLayerOpenAI[source]#\nWrapper around OpenAI large language models.\nTo use, you should have the openai and promptlayer python\npackage installed, and the environment variable OPENAI_API_KEY\nand PROMPTLAYER_API_KEY set with your openAI API key and\npromptlayer key respectively.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-136", "text": "promptlayer key respectively.\nAll parameters that can be passed to the OpenAI LLM can also\nbe passed here. The PromptLayerOpenAI LLM adds two optional\nParameters\npl_tags \u2013 List of strings to tag the request with.\nreturn_pl_id \u2013 If True, the PromptLayer request ID will be\nreturned in the generation_info field of the\nGeneration object.\nExample\nfrom langchain.llms import PromptLayerOpenAI\nopenai = PromptLayerOpenAI(model_name=\"text-davinci-003\")\nValidators\nbuild_extra \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-137", "text": "Predict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ncreate_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) \u2192 langchain.schema.LLMResult#\nCreate the LLMResult from the choices and prompts.\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-138", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) \u2192 List[List[str]]#\nGet the sub prompts for llm call.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token IDs using the tiktoken package.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nmax_tokens_for_prompt(prompt: str) \u2192 int#\nCalculate the maximum number of tokens possible to generate for a prompt.\nParameters\nprompt \u2013 The prompt to pass into the model.\nReturns", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-139", "text": "Parameters\nprompt \u2013 The prompt to pass into the model.\nReturns\nThe maximum number of tokens to generate for a prompt.\nExample\nmax_tokens = openai.max_token_for_prompt(\"Tell me a joke.\")\nmodelname_to_contextsize(modelname: str) \u2192 int#\nCalculate the maximum number of tokens possible to generate for a model.\nParameters\nmodelname \u2013 The modelname we want to know the context size for.\nReturns\nThe maximum context size\nExample\nmax_tokens = openai.modelname_to_contextsize(\"text-davinci-003\")\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nprep_streaming_params(stop: Optional[List[str]] = None) \u2192 Dict[str, Any]#\nPrepare the params for streaming.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nstream(prompt: str, stop: Optional[List[str]] = None) \u2192 Generator#\nCall OpenAI with streaming flag and return the resulting generator.\nBETA: this is a beta feature while we figure out the right abstraction.\nOnce that happens, this interface could change.\nParameters\nprompt \u2013 The prompts to pass into the model.\nstop \u2013 Optional list of stop words to use when generating.\nReturns\nA generator representing the stream of tokens from OpenAI.\nExample\ngenerator = openai.stream(\"Tell me a joke.\")\nfor token in generator:\n yield token", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-140", "text": "for token in generator:\n yield token\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.PromptLayerOpenAIChat[source]#\nWrapper around OpenAI large language models.\nTo use, you should have the openai and promptlayer python\npackage installed, and the environment variable OPENAI_API_KEY\nand PROMPTLAYER_API_KEY set with your openAI API key and\npromptlayer key respectively.\nAll parameters that can be passed to the OpenAIChat LLM can also\nbe passed here. The PromptLayerOpenAIChat adds two optional\nParameters\npl_tags \u2013 List of strings to tag the request with.\nreturn_pl_id \u2013 If True, the PromptLayer request ID will be\nreturned in the generation_info field of the\nGeneration object.\nExample\nfrom langchain.llms import PromptLayerOpenAIChat\nopenaichat = PromptLayerOpenAIChat(model_name=\"gpt-3.5-turbo\")\nValidators\nbuild_extra \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#\nSet of special tokens that are allowed\u3002\nfield disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#\nSet of special tokens that are not allowed\u3002\nfield max_retries: int = 6#\nMaximum number of retries to make when generating.\nfield model_kwargs: Dict[str, Any] [Optional]#\nHolds any model parameters valid for create call not explicitly specified.\nfield model_name: str = 'gpt-3.5-turbo'#\nModel name to use.\nfield prefix_messages: List [Optional]#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-141", "text": "Model name to use.\nfield prefix_messages: List [Optional]#\nSeries of messages for Chat input.\nfield streaming: bool = False#\nWhether to stream the results or not.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-142", "text": "Behaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-143", "text": "get_token_ids(text: str) \u2192 List[int]#\nGet the token IDs using the tiktoken package.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.RWKV[source]#\nWrapper around RWKV language models.\nTo use, you should have the rwkv python package installed, the\npre-trained model file, and the model\u2019s config information.\nExample\nfrom langchain.llms import RWKV", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-144", "text": "Example\nfrom langchain.llms import RWKV\nmodel = RWKV(model=\"./models/rwkv-3b-fp16.bin\", strategy=\"cpu fp32\")\n# Simplest invocation\nresponse = model(\"Once upon a time, \")\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield CHUNK_LEN: int = 256#\nBatch size for prompt processing.\nfield max_tokens_per_generation: int = 256#\nMaximum number of tokens to generate.\nfield model: str [Required]#\nPath to the pre-trained RWKV model file.\nfield penalty_alpha_frequency: float = 0.4#\nPositive values penalize new tokens based on their existing frequency\nin the text so far, decreasing the model\u2019s likelihood to repeat the same\nline verbatim..\nfield penalty_alpha_presence: float = 0.4#\nPositive values penalize new tokens based on whether they appear\nin the text so far, increasing the model\u2019s likelihood to talk about\nnew topics..\nfield rwkv_verbose: bool = True#\nPrint debug information.\nfield strategy: str = 'cpu fp32'#\nToken context window.\nfield temperature: float = 1.0#\nThe temperature to use for sampling.\nfield tokens_path: str [Required]#\nPath to the RWKV tokens file.\nfield top_p: float = 0.5#\nThe top-p value to use for sampling.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-145", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-146", "text": "Parameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-147", "text": "Get the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.Replicate[source]#\nWrapper around Replicate models.\nTo use, you should have the replicate python package installed,\nand the environment variable REPLICATE_API_TOKEN set with your API token.\nYou can find your token here: https://replicate.com/account\nThe model param is required, but any other model parameters can also", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-148", "text": "The model param is required, but any other model parameters can also\nbe passed in with the format input={model_param: value, \u2026}\nExample\nfrom langchain.llms import Replicate\nreplicate = Replicate(model=\"stability-ai/stable-diffusion: 27b93a2413e7f36cd83da926f365628 0b2931564ff050bf9575f1fdf9bcd7478\",\n input={\"image_dimensions\": \"512x512\"})\nValidators\nbuild_extra \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-149", "text": "Predict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-150", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-151", "text": "Save the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.SagemakerEndpoint[source]#\nWrapper around custom Sagemaker Inference Endpoints.\nTo use, you must supply the endpoint name from your deployed\nSagemaker model & the region where it is deployed.\nTo authenticate, the AWS client uses the following methods to\nautomatically load credentials:\nhttps://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nIf a specific credential profile should be used, you must pass\nthe name of the profile from the ~/.aws/credentials file that is to be used.\nMake sure the credentials / roles used have the required policies to\naccess the Sagemaker endpoint.\nSee: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield content_handler: langchain.llms.sagemaker_endpoint.LLMContentHandler [Required]#\nThe content handler class that provides an input and\noutput transform functions to handle formats between LLM\nand the endpoint.\nfield credentials_profile_name: Optional[str] = None#\nThe name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\nhas either access keys or role information specified.\nIf not specified, the default credential profile or, if on an EC2 instance,\ncredentials from IMDS will be used.\nSee: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-152", "text": "See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nfield endpoint_kwargs: Optional[Dict] = None#\nOptional attributes passed to the invoke_endpoint\nfunction. See `boto3`_. docs for more info.\n.. _boto3: \nfield endpoint_name: str = ''#\nThe name of the endpoint from the deployed Sagemaker model.\nMust be unique within an AWS Region.\nfield model_kwargs: Optional[Dict] = None#\nKey word arguments to pass to the model.\nfield region_name: str = ''#\nThe aws region where the Sagemaker model is deployed, eg. us-west-2.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-153", "text": "Take in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-154", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-155", "text": "Save the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.SelfHostedHuggingFaceLLM[source]#\nWrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.\nSupported hardware includes auto-launched instances on AWS, GCP, Azure,\nand Lambda, as well as servers specified\nby IP address and SSH credentials (such as on-prem, or another cloud\nlike Paperspace, Coreweave, etc.).\nTo use, you should have the runhouse python package installed.\nOnly supports text-generation, text2text-generation and summarization for now.\nExample using from_model_id:from langchain.llms import SelfHostedHuggingFaceLLM\nimport runhouse as rh\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\nhf = SelfHostedHuggingFaceLLM(\n model_id=\"google/flan-t5-large\", task=\"text2text-generation\",\n hardware=gpu\n)\nExample passing fn that generates a pipeline (bc the pipeline is not serializable):from langchain.llms import SelfHostedHuggingFaceLLM\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\nimport runhouse as rh\ndef get_pipeline():\n model_id = \"gpt2\"\n tokenizer = AutoTokenizer.from_pretrained(model_id)\n model = AutoModelForCausalLM.from_pretrained(model_id)\n pipe = pipeline(\n \"text-generation\", model=model, tokenizer=tokenizer\n )", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-156", "text": "\"text-generation\", model=model, tokenizer=tokenizer\n )\n return pipe\nhf = SelfHostedHuggingFaceLLM(\n model_load_fn=get_pipeline, model_id=\"gpt2\", hardware=gpu)\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield device: int = 0#\nDevice to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc.\nfield hardware: Any = None#\nRemote hardware to send the inference function to.\nfield inference_fn: Callable = #\nInference function to send to the remote hardware.\nfield load_fn_kwargs: Optional[dict] = None#\nKey word arguments to pass to the model load function.\nfield model_id: str = 'gpt2'#\nHugging Face model_id to load the model.\nfield model_kwargs: Optional[dict] = None#\nKey word arguments to pass to the model.\nfield model_load_fn: Callable = #\nFunction to load the model remotely on the server.\nfield model_reqs: List[str] = ['./', 'transformers', 'torch']#\nRequirements to install on hardware to inference the model.\nfield task: str = 'text-generation'#\nHugging Face task (\u201ctext-generation\u201d, \u201ctext2text-generation\u201d or\n\u201csummarization\u201d).\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-157", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-158", "text": "Parameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\nclassmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) \u2192 langchain.llms.base.LLM#\nInit the SelfHostedPipeline from a pipeline object or string.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-159", "text": "Get the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.SelfHostedPipeline[source]#\nRun model inference on self-hosted remote hardware.\nSupported hardware includes auto-launched instances on AWS, GCP, Azure,\nand Lambda, as well as servers specified\nby IP address and SSH credentials (such as on-prem, or another\ncloud like Paperspace, Coreweave, etc.).", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-160", "text": "cloud like Paperspace, Coreweave, etc.).\nTo use, you should have the runhouse python package installed.\nExample for custom pipeline and inference functions:from langchain.llms import SelfHostedPipeline\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\nimport runhouse as rh\ndef load_pipeline():\n tokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\n model = AutoModelForCausalLM.from_pretrained(\"gpt2\")\n return pipeline(\n \"text-generation\", model=model, tokenizer=tokenizer,\n max_new_tokens=10\n )\ndef inference_fn(pipeline, prompt, stop = None):\n return pipeline(prompt)[0][\"generated_text\"]\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\nllm = SelfHostedPipeline(\n model_load_fn=load_pipeline,\n hardware=gpu,\n model_reqs=model_reqs, inference_fn=inference_fn\n)\nExample for <2GB model (can be serialized and sent directly to the server):from langchain.llms import SelfHostedPipeline\nimport runhouse as rh\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\nmy_model = ...\nllm = SelfHostedPipeline.from_pipeline(\n pipeline=my_model,\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n)\nExample passing model path for larger models:from langchain.llms import SelfHostedPipeline\nimport runhouse as rh\nimport pickle\nfrom transformers import pipeline\ngenerator = pipeline(model=\"gpt2\")\nrh.blob(pickle.dumps(generator), path=\"models/pipeline.pkl\"\n ).save().to(gpu, path=\"models\")\nllm = SelfHostedPipeline.from_pipeline(", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-161", "text": "llm = SelfHostedPipeline.from_pipeline(\n pipeline=\"models/pipeline.pkl\",\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n)\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nfield hardware: Any = None#\nRemote hardware to send the inference function to.\nfield inference_fn: Callable = #\nInference function to send to the remote hardware.\nfield load_fn_kwargs: Optional[dict] = None#\nKey word arguments to pass to the model load function.\nfield model_load_fn: Callable [Required]#\nFunction to load the model remotely on the server.\nfield model_reqs: List[str] = ['./', 'torch']#\nRequirements to install on hardware to inference the model.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-162", "text": "Take in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\nclassmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) \u2192 langchain.llms.base.LLM[source]#\nInit the SelfHostedPipeline from a pipeline object or string.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-163", "text": "Init the SelfHostedPipeline from a pipeline object or string.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-164", "text": "Predict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.StochasticAI[source]#\nWrapper around StochasticAI large language models.\nTo use, you should have the environment variable STOCHASTICAI_API_KEY\nset with your API key.\nExample\nfrom langchain.llms import StochasticAI\nstochasticai = StochasticAI(api_url=\"\")\nValidators\nbuild_extra \u00bb all fields\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield api_url: str = ''#\nModel name to use.\nfield model_kwargs: Dict[str, Any] [Optional]#\nHolds any model parameters valid for create call not\nexplicitly specified.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-165", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-166", "text": "Parameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-167", "text": "Get the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.VertexAI[source]#\nWrapper around Google Vertex AI large language models.\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield credentials: Any = None#\nThe default custom credentials (google.auth.credentials.Credentials) to use\nfield location: str = 'us-central1'#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-168", "text": "field location: str = 'us-central1'#\nThe default location to use when making API calls.\nfield max_output_tokens: int = 128#\nToken limit determines the maximum amount of text output from one prompt.\nfield project: Optional[str] = None#\nThe default GCP project to use when making Vertex API calls.\nfield stop: Optional[List[str]] = None#\nOptional list of stop words to use when generating.\nfield temperature: float = 0.0#\nSampling temperature, it controls the degree of randomness in token selection.\nfield top_k: int = 40#\nHow the model selects tokens for output, the next token is selected from\nfield top_p: float = 0.95#\nTokens are selected from most probable to least until the sum of their\nfield tuned_model_name: Optional[str] = None#\nThe name of a tuned model, if it\u2019s provided, model_name is ignored.\nfield verbose: bool [Optional]#\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-169", "text": "Run the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-170", "text": "Returns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-171", "text": "predict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\npydantic model langchain.llms.Writer[source]#\nWrapper around Writer large language models.\nTo use, you should have the environment variable WRITER_API_KEY and\nWRITER_ORG_ID set with your API key and organization ID respectively.\nExample\nfrom langchain import Writer\nwriter = Writer(model_id=\"palmyra-base\")\nValidators\nraise_deprecation \u00bb all fields\nset_verbose \u00bb verbose\nvalidate_environment \u00bb all fields\nfield base_url: Optional[str] = None#\nBase url to use, if None decides based on model name.\nfield best_of: Optional[int] = None#\nGenerates this many completions server-side and returns the \u201cbest\u201d.\nfield logprobs: bool = False#\nWhether to return log probabilities.\nfield max_tokens: Optional[int] = None#\nMaximum number of tokens to generate.\nfield min_tokens: Optional[int] = None#\nMinimum number of tokens to generate.\nfield model_id: str = 'palmyra-instruct'#\nModel name to use.\nfield n: Optional[int] = None#\nHow many completions to generate.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-172", "text": "field n: Optional[int] = None#\nHow many completions to generate.\nfield presence_penalty: Optional[float] = None#\nPenalizes repeated tokens regardless of frequency.\nfield repetition_penalty: Optional[float] = None#\nPenalizes repeated tokens according to frequency.\nfield stop: Optional[List[str]] = None#\nSequences when completion generation will stop.\nfield temperature: Optional[float] = None#\nWhat sampling temperature to use.\nfield top_p: Optional[float] = None#\nTotal probability mass of tokens to consider at each step.\nfield verbose: bool [Optional]#\nWhether to print out response text.\nfield writer_api_key: Optional[str] = None#\nWriter API key.\nfield writer_org_id: Optional[str] = None#\nWriter organization ID.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 str#\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-173", "text": "Predict text from text.\nasync apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nclassmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) \u2192 Model#\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\ncopy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) \u2192 Model#\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude \u2013 fields to include in new model\nexclude \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep \u2013 set to True to make a deep copy of the model\nReturns\nnew model instance\ndict(**kwargs: Any) \u2192 Dict#\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nRun the LLM on the given prompt and input.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-174", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) \u2192 langchain.schema.LLMResult#\nTake in a list of prompt values and return an LLMResult.\nget_num_tokens(text: str) \u2192 int#\nGet the number of tokens present in the text.\nget_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) \u2192 int#\nGet the number of tokens in the message.\nget_token_ids(text: str) \u2192 List[int]#\nGet the token present in the text.\njson(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) \u2192 unicode#\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\npredict(text: str, *, stop: Optional[Sequence[str]] = None) \u2192 str#\nPredict text from text.\npredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) \u2192 langchain.schema.BaseMessage#\nPredict message from messages.\nsave(file_path: Union[pathlib.Path, str]) \u2192 None#\nSave the LLM.\nParameters", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "2aa15396c9dd-175", "text": "Save the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns: Any) \u2192 None#\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nprevious\nWriter\nnext\nChat Models\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/reference/modules/llms.html"}
+{"id": "ec216bc31b80-0", "text": ".rst\n.pdf\nMemory\nMemory#\nNote\nConceptual Guide\nBy default, Chains and Agents are stateless,\nmeaning that they treat each incoming query independently (as are the underlying LLMs and chat models).\nIn some applications (chatbots being a GREAT example) it is highly important\nto remember previous interactions, both at a short term but also at a long term level.\nThe Memory does exactly that.\nLangChain provides memory components in two forms.\nFirst, LangChain provides helper utilities for managing and manipulating previous chat messages.\nThese are designed to be modular and useful regardless of how they are used.\nSecondly, LangChain provides easy ways to incorporate these utilities into chains.\nGetting Started: An overview of different types of memory.\nHow-To Guides: A collection of how-to guides. These highlight different types of memory, as well as how to use memory in chains.\nprevious\nStructured Output Parser\nnext\nGetting Started\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/memory.html"}
+{"id": "5a86e412f95c-0", "text": ".rst\n.pdf\nIndexes\n Contents \nIndex Types\nIndexes#\nNote\nConceptual Guide\nIndexes refer to ways to structure documents so that LLMs can best interact with them.\nThe most common way that indexes are used in chains is in a \u201cretrieval\u201d step.\nThis step refers to taking a user\u2019s query and returning the most relevant documents.\nWe draw this distinction because (1) an index can be used for other things besides retrieval, and\n(2) retrieval can use other logic besides an index to find relevant documents.\nWe therefore have a concept of a Retriever interface - this is the interface that most chains work with.\nMost of the time when we talk about indexes and retrieval we are talking about indexing and retrieving\nunstructured data (like text documents).\nFor interacting with structured data (SQL tables, etc) or APIs, please see the corresponding use case\nsections for links to relevant functionality.\nGetting Started: An overview of the indexes.\nIndex Types#\nDocument Loaders: How to load documents from a variety of sources.\nText Splitters: An overview and different types of the Text Splitters.\nVectorStores: An overview and different types of the Vector Stores.\nRetrievers: An overview and different types of the Retrievers.\nprevious\nZep Memory\nnext\nGetting Started\n Contents\n \nIndex Types\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes.html"}
+{"id": "78f873671699-0", "text": ".rst\n.pdf\nModels\n Contents \nModel Types\nModels#\nNote\nConceptual Guide\nThis section of the documentation deals with different types of models that are used in LangChain.\nOn this page we will go over the model types at a high level,\nbut we have individual pages for each model type.\nThe pages contain more detailed \u201chow-to\u201d guides for working with that model,\nas well as a list of different model providers.\nGetting Started: An overview of the models.\nModel Types#\nLLMs: Large Language Models (LLMs) take a text string as input and return a text string as output.\nChat Models: Chat Models are usually backed by a language model, but their APIs are more structured.\nSpecifically, these models take a list of Chat Messages as input, and return a Chat Message.\nText Embedding Models: Text embedding models take text as input and return a list of floats.\nprevious\nTutorials\nnext\nGetting Started\n Contents\n \nModel Types\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/models.html"}
+{"id": "5112dde6f45a-0", "text": ".rst\n.pdf\nPrompts\nPrompts#\nNote\nConceptual Guide\nThe new way of programming models is through prompts.\nA prompt refers to the input to the model.\nThis input is often constructed from multiple components.\nA PromptTemplate is responsible for the construction of this input.\nLangChain provides several classes and functions to make constructing and working with prompts easy.\nGetting Started: An overview of the prompts.\nLLM Prompt Templates: How to use PromptTemplates to prompt Language Models.\nChat Prompt Templates: How to use PromptTemplates to prompt Chat Models.\nExample Selectors: Often times it is useful to include examples in prompts.\nThese examples can be dynamically selected. This section goes over example selection.\nOutput Parsers: Language models (and Chat Models) output text.\nBut many times you may want to get more structured information. This is where output parsers come in.\nOutput Parsers:\ninstruct the model how output should be formatted,\nparse output into the desired formatting (including retrying if necessary).\nprevious\nTensorflow Hub\nnext\nGetting Started\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts.html"}
+{"id": "001996dca2a3-0", "text": ".rst\n.pdf\nChains\nChains#\nNote\nConceptual Guide\nUsing an LLM in isolation is fine for some simple applications,\nbut more complex applications require chaining LLMs - either with each other or with other experts.\nLangChain provides a standard interface for Chains, as well as several common implementations of chains.\nGetting Started: An overview of chains.\nHow-To Guides: How-to guides about various types of chains.\nReference: API reference documentation for all Chain classes.\nprevious\nZep\nnext\nGetting Started\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/chains.html"}
+{"id": "11c286fff64d-0", "text": ".rst\n.pdf\nAgents\n Contents \nAction Agents\nPlan-and-Execute Agents\nAgents#\nNote\nConceptual Guide\nSome applications require not just a predetermined chain of calls to LLMs/other tools,\nbut potentially an unknown chain that depends on the user\u2019s input.\nIn these types of chains, there is an agent which has access to a suite of tools.\nDepending on the user input, the agent can then decide which, if any, of these tools to call.\nAt the moment, there are two main types of agents:\nAction Agents: these agents decide the actions to take and execute that actions one action at a time.\nPlan-and-Execute Agents: these agents first decide a plan of actions to take, and then execute those actions one at a time.\nWhen should you use each one? Action Agents are more conventional, and good for small tasks.\nFor more complex or long running tasks, the initial planning step helps to maintain long term objectives and focus.\nHowever, that comes at the expense of generally more calls and higher latency.\nThese two agents are also not mutually exclusive - in fact, it is often best to have an Action Agent be in charge\nof the execution for the Plan and Execute agent.\nAction Agents#\nHigh level pseudocode of the Action Agents:\nThe user input is received\nThe agent decides which tool - if any - to use, and what the tool input should be\nThat tool is then called with the tool input, and an observation is recorded (the output of this calling)\nThat history of tool, tool input, and observation is passed back into the agent, and it decides the next step\nThis is repeated until the agent decides it no longer needs to use a tool, and then it responds directly to the user.\nThe different abstractions involved in agents are:", "source": "https://python.langchain.com/en/latest/modules/agents.html"}
+{"id": "11c286fff64d-1", "text": "The different abstractions involved in agents are:\nAgent: this is where the logic of the application lives. Agents expose an interface that takes in user input\nalong with a list of previous steps the agent has taken, and returns either an AgentAction or AgentFinish\nAgentAction corresponds to the tool to use and the input to that tool\nAgentFinish means the agent is done, and has information around what to return to the user\nTools: these are the actions an agent can take. What tools you give an agent highly depend on what you want the agent to do\nToolkits: these are groups of tools designed for a specific use case. For example, in order for an agent to\ninteract with a SQL database in the best way it may need access to one tool to execute queries and another tool to inspect tables.\nAgent Executor: this wraps an agent and a list of tools. This is responsible for the loop of running the agent\niteratively until the stopping criteria is met.\nGetting Started: An overview of agents. It covers how to use all things related to agents in an end-to-end manner.\nAgent Construction:\nAlthough an agent can be constructed in many way, the typical way to construct an agent is with:\nPromptTemplate: this is responsible for taking the user input and previous steps and constructing a prompt\nto send to the language model\nLanguage Model: this takes the prompt constructed by the PromptTemplate and returns some output\nOutput Parser: this takes the output of the Language Model and parses it into an AgentAction or AgentFinish object.\nAdditional Documentation:\nTools: Different types of tools LangChain supports natively. We also cover how to add your own tools.\nAgents: Different types of agents LangChain supports natively. We also cover how to\nmodify and create your own agents.\nToolkits: Various toolkits that LangChain supports out of the box, and how to\ncreate an agent from them.", "source": "https://python.langchain.com/en/latest/modules/agents.html"}
+{"id": "11c286fff64d-2", "text": "create an agent from them.\nAgent Executor: The Agent Executor class, which is responsible for calling\nthe agent and tools in a loop. We go over different ways to customize this, and options you can use for more control.\nPlan-and-Execute Agents#\nHigh level pseudocode of the Plan-and-Execute Agents:\nThe user input is received\nThe planner lists out the steps to take\nThe executor goes through the list of steps, executing them\nThe most typical implementation is to have the planner be a language model, and the executor be an action agent.\nPlan-and-Execute Agents\nprevious\nChains\nnext\nGetting Started\n Contents\n \nAction Agents\nPlan-and-Execute Agents\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents.html"}
+{"id": "ac5572e2a572-0", "text": ".ipynb\n.pdf\nCallbacks\n Contents \nCallbacks\nHow to use callbacks\nWhen do you want to use each of these?\nUsing an existing handler\nCreating a custom handler\nAsync Callbacks\nUsing multiple handlers, passing in handlers\nTracing and Token Counting\nTracing\nToken Counting\nCallbacks#\nLangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.\nYou can subscribe to these events by using the callbacks argument available throughout the API. This argument is list of handler objects, which are expected to implement one or more of the methods described below in more detail. There are two main callbacks mechanisms:\nConstructor callbacks will be used for all calls made on that object, and will be scoped to that object only, i.e. if you pass a handler to the LLMChain constructor, it will not be used by the model attached to that chain.\nRequest callbacks will be used for that specific request only, and all sub-requests that it contains (eg. a call to an LLMChain triggers a call to a Model, which uses the same handler passed through). These are explicitly passed through.\nAdvanced: When you create a custom chain you can easily set it up to use the same callback system as all the built-in chains.\n_call, _generate, _run, and equivalent async methods on Chains / LLMs / Chat Models / Agents / Tools now receive a 2nd argument called run_manager which is bound to that run, and contains the logging methods that can be used by that object (i.e. on_llm_new_token). This is useful when constructing a custom chain. See this guide for more information on how to create custom chains and use callbacks inside them.", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "ac5572e2a572-1", "text": "CallbackHandlers are objects that implement the CallbackHandler interface, which has a method for each event that can be subscribed to. The CallbackManager will call the appropriate method on each handler when the event is triggered.\nclass BaseCallbackHandler:\n \"\"\"Base callback handler that can be used to handle callbacks from langchain.\"\"\"\n def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> Any:\n \"\"\"Run when LLM starts running.\"\"\"\n def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:\n \"\"\"Run on new LLM token. Only available when streaming is enabled.\"\"\"\n def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:\n \"\"\"Run when LLM ends running.\"\"\"\n def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> Any:\n \"\"\"Run when LLM errors.\"\"\"\n def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> Any:\n \"\"\"Run when chain starts running.\"\"\"\n def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> Any:\n \"\"\"Run when chain ends running.\"\"\"\n def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> Any:\n \"\"\"Run when chain errors.\"\"\"\n def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> Any:\n \"\"\"Run when tool starts running.\"\"\"\n def on_tool_end(self, output: str, **kwargs: Any) -> Any:", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "ac5572e2a572-2", "text": "def on_tool_end(self, output: str, **kwargs: Any) -> Any:\n \"\"\"Run when tool ends running.\"\"\"\n def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> Any:\n \"\"\"Run when tool errors.\"\"\"\n def on_text(self, text: str, **kwargs: Any) -> Any:\n \"\"\"Run on arbitrary text.\"\"\"\n def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run on agent action.\"\"\"\n def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:\n \"\"\"Run on agent end.\"\"\"\nHow to use callbacks#\nThe callbacks argument is available on most objects throughout the API (Chains, Models, Tools, Agents, etc.) in two different places:\nConstructor callbacks: defined in the constructor, eg. LLMChain(callbacks=[handler]), which will be used for all calls made on that object, and will be scoped to that object only, eg. if you pass a handler to the LLMChain constructor, it will not be used by the Model attached to that chain.\nRequest callbacks: defined in the call()/run()/apply() methods used for issuing a request, eg. chain.call(inputs, callbacks=[handler]), which will be used for that specific request only, and all sub-requests that it contains (eg. a call to an LLMChain triggers a call to a Model, which uses the same handler passed in the call() method).", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "ac5572e2a572-3", "text": "The verbose argument is available on most objects throughout the API (Chains, Models, Tools, Agents, etc.) as a constructor argument, eg. LLMChain(verbose=True), and it is equivalent to passing a ConsoleCallbackHandler to the callbacks argument of that object and all child objects. This is useful for debugging, as it will log all events to the console.\nWhen do you want to use each of these?#\nConstructor callbacks are most useful for use cases such as logging, monitoring, etc., which are not specific to a single request, but rather to the entire chain. For example, if you want to log all the requests made to an LLMChain, you would pass a handler to the constructor.\nRequest callbacks are most useful for use cases such as streaming, where you want to stream the output of a single request to a specific websocket connection, or other similar use cases. For example, if you want to stream the output of a single request to a websocket, you would pass a handler to the call() method\nUsing an existing handler#\nLangChain provides a few built-in handlers that you can use to get started. These are available in the langchain/callbacks module. The most basic handler is the StdOutCallbackHandler, which simply logs all events to stdout. In the future we will add more default handlers to the library.\nNote when the verbose flag on the object is set to true, the StdOutCallbackHandler will be invoked even without being explicitly passed in.\nfrom langchain.callbacks import StdOutCallbackHandler\nfrom langchain.chains import LLMChain\nfrom langchain.llms import OpenAI\nfrom langchain.prompts import PromptTemplate\nhandler = StdOutCallbackHandler()\nllm = OpenAI()\nprompt = PromptTemplate.from_template(\"1 + {number} = \")\n# First, let's explicitly set the StdOutCallbackHandler in `callbacks`", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "ac5572e2a572-4", "text": "# First, let's explicitly set the StdOutCallbackHandler in `callbacks`\nchain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])\nchain.run(number=2)\n# Then, let's use the `verbose` flag to achieve the same result\nchain = LLMChain(llm=llm, prompt=prompt, verbose=True)\nchain.run(number=2)\n# Finally, let's use the request `callbacks` to achieve the same result\nchain = LLMChain(llm=llm, prompt=prompt)\nchain.run(number=2, callbacks=[handler])\n> Entering new LLMChain chain...\nPrompt after formatting:\n1 + 2 = \n> Finished chain.\n> Entering new LLMChain chain...\nPrompt after formatting:\n1 + 2 = \n> Finished chain.\n> Entering new LLMChain chain...\nPrompt after formatting:\n1 + 2 = \n> Finished chain.\n'\\n\\n3'\nCreating a custom handler#\nYou can create a custom handler to set on the object as well. In the example below, we\u2019ll implement streaming with a custom handler.\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.schema import HumanMessage\nclass MyCustomHandler(BaseCallbackHandler):\n def on_llm_new_token(self, token: str, **kwargs) -> None:\n print(f\"My custom handler, token: {token}\")\n# To enable streaming, we pass in `streaming=True` to the ChatModel constructor\n# Additionally, we pass in a list with our custom handler\nchat = ChatOpenAI(max_tokens=25, streaming=True, callbacks=[MyCustomHandler()])\nchat([HumanMessage(content=\"Tell me a joke\")])\nMy custom handler, token:", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "ac5572e2a572-5", "text": "My custom handler, token: \nMy custom handler, token: Why\nMy custom handler, token: did\nMy custom handler, token: the\nMy custom handler, token: tomato\nMy custom handler, token: turn\nMy custom handler, token: red\nMy custom handler, token: ?\nMy custom handler, token: Because\nMy custom handler, token: it\nMy custom handler, token: saw\nMy custom handler, token: the\nMy custom handler, token: salad\nMy custom handler, token: dressing\nMy custom handler, token: !\nMy custom handler, token: \nAIMessage(content='Why did the tomato turn red? Because it saw the salad dressing!', additional_kwargs={})\nAsync Callbacks#\nIf you are planning to use the async API, it is recommended to use AsyncCallbackHandler to avoid blocking the runloop.\nAdvanced if you use a sync CallbackHandler while using an async method to run your llm/chain/tool/agent, it will still work. However, under the hood, it will be called with run_in_executor which can cause issues if your CallbackHandler is not thread-safe.\nimport asyncio\nfrom typing import Any, Dict, List\nfrom langchain.schema import LLMResult\nfrom langchain.callbacks.base import AsyncCallbackHandler\nclass MyCustomSyncHandler(BaseCallbackHandler):\n def on_llm_new_token(self, token: str, **kwargs) -> None:\n print(f\"Sync handler being called in a `thread_pool_executor`: token: {token}\")\nclass MyCustomAsyncHandler(AsyncCallbackHandler):\n \"\"\"Async callback handler that can be used to handle callbacks from langchain.\"\"\"\n async def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "ac5572e2a572-6", "text": ") -> None:\n \"\"\"Run when chain starts running.\"\"\"\n print(\"zzzz....\")\n await asyncio.sleep(0.3)\n class_name = serialized[\"name\"]\n print(\"Hi! I just woke up. Your llm is starting\")\n async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\"\"\"\n print(\"zzzz....\")\n await asyncio.sleep(0.3)\n print(\"Hi! I just woke up. Your llm is ending\")\n# To enable streaming, we pass in `streaming=True` to the ChatModel constructor\n# Additionally, we pass in a list with our custom handler\nchat = ChatOpenAI(max_tokens=25, streaming=True, callbacks=[MyCustomSyncHandler(), MyCustomAsyncHandler()])\nawait chat.agenerate([[HumanMessage(content=\"Tell me a joke\")]])\nzzzz....\nHi! I just woke up. Your llm is starting\nSync handler being called in a `thread_pool_executor`: token: \nSync handler being called in a `thread_pool_executor`: token: Why\nSync handler being called in a `thread_pool_executor`: token: don\nSync handler being called in a `thread_pool_executor`: token: 't\nSync handler being called in a `thread_pool_executor`: token: scientists\nSync handler being called in a `thread_pool_executor`: token: trust\nSync handler being called in a `thread_pool_executor`: token: atoms\nSync handler being called in a `thread_pool_executor`: token: ?\nSync handler being called in a `thread_pool_executor`: token: Because\nSync handler being called in a `thread_pool_executor`: token: they\nSync handler being called in a `thread_pool_executor`: token: make", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "ac5572e2a572-7", "text": "Sync handler being called in a `thread_pool_executor`: token: make\nSync handler being called in a `thread_pool_executor`: token: up\nSync handler being called in a `thread_pool_executor`: token: everything\nSync handler being called in a `thread_pool_executor`: token: !\nSync handler being called in a `thread_pool_executor`: token: \nzzzz....\nHi! I just woke up. Your llm is ending\nLLMResult(generations=[[ChatGeneration(text=\"Why don't scientists trust atoms?\\n\\nBecause they make up everything!\", generation_info=None, message=AIMessage(content=\"Why don't scientists trust atoms?\\n\\nBecause they make up everything!\", additional_kwargs={}))]], llm_output={'token_usage': {}, 'model_name': 'gpt-3.5-turbo'})\nUsing multiple handlers, passing in handlers#\nIn the previous examples, we passed in callback handlers upon creation of an object by using callbacks=. In this case, the callbacks will be scoped to that particular object.\nHowever, in many cases, it is advantageous to pass in handlers instead when running the object. When we pass through CallbackHandlers using the callbacks keyword arg when executing an run, those callbacks will be issued by all nested objects involved in the execution. For example, when a handler is passed through to an Agent, it will be used for all callbacks related to the agent and all the objects involved in the agent\u2019s execution, in this case, the Tools, LLMChain, and LLM.\nThis prevents us from having to manually attach the handlers to each individual nested object.\nfrom typing import Dict, Union, Any, List\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.schema import AgentAction\nfrom langchain.agents import AgentType, initialize_agent, load_tools\nfrom langchain.callbacks import tracing_enabled\nfrom langchain.llms import OpenAI", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "ac5572e2a572-8", "text": "from langchain.callbacks import tracing_enabled\nfrom langchain.llms import OpenAI\n# First, define custom callback handler implementations\nclass MyCustomHandlerOne(BaseCallbackHandler):\n def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> Any:\n print(f\"on_llm_start {serialized['name']}\")\n def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:\n print(f\"on_new_token {token}\")\n def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> Any:\n \"\"\"Run when LLM errors.\"\"\"\n def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> Any:\n print(f\"on_chain_start {serialized['name']}\")\n def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> Any:\n print(f\"on_tool_start {serialized['name']}\")\n def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n print(f\"on_agent_action {action}\")\nclass MyCustomHandlerTwo(BaseCallbackHandler):\n def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> Any:\n print(f\"on_llm_start (I'm the second handler!!) {serialized['name']}\")\n# Instantiate the handlers\nhandler1 = MyCustomHandlerOne()\nhandler2 = MyCustomHandlerTwo()\n# Setup the agent. Only the `llm` will issue callbacks for handler2", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "ac5572e2a572-9", "text": "# Setup the agent. Only the `llm` will issue callbacks for handler2\nllm = OpenAI(temperature=0, streaming=True, callbacks=[handler2])\ntools = load_tools([\"llm-math\"], llm=llm)\nagent = initialize_agent(\n tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION\n)\n# Callbacks for handler1 will be issued by every object involved in the \n# Agent execution (llm, llmchain, tool, agent executor)\nagent.run(\"What is 2 raised to the 0.235 power?\", callbacks=[handler1])\non_chain_start AgentExecutor\non_chain_start LLMChain\non_llm_start OpenAI\non_llm_start (I'm the second handler!!) OpenAI\non_new_token I\non_new_token need\non_new_token to\non_new_token use\non_new_token a\non_new_token calculator\non_new_token to\non_new_token solve\non_new_token this\non_new_token .\non_new_token \nAction\non_new_token :\non_new_token Calculator\non_new_token \nAction\non_new_token Input\non_new_token :\non_new_token 2\non_new_token ^\non_new_token 0\non_new_token .\non_new_token 235\non_new_token \non_agent_action AgentAction(tool='Calculator', tool_input='2^0.235', log=' I need to use a calculator to solve this.\\nAction: Calculator\\nAction Input: 2^0.235')\non_tool_start Calculator\non_chain_start LLMMathChain\non_chain_start LLMChain\non_llm_start OpenAI\non_llm_start (I'm the second handler!!) OpenAI\non_new_token \non_new_token ```text", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "ac5572e2a572-10", "text": "on_new_token \non_new_token ```text\non_new_token \non_new_token 2\non_new_token **\non_new_token 0\non_new_token .\non_new_token 235\non_new_token \non_new_token ```\non_new_token ...\non_new_token num\non_new_token expr\non_new_token .\non_new_token evaluate\non_new_token (\"\non_new_token 2\non_new_token **\non_new_token 0\non_new_token .\non_new_token 235\non_new_token \")\non_new_token ...\non_new_token \non_new_token \non_chain_start LLMChain\non_llm_start OpenAI\non_llm_start (I'm the second handler!!) OpenAI\non_new_token I\non_new_token now\non_new_token know\non_new_token the\non_new_token final\non_new_token answer\non_new_token .\non_new_token \nFinal\non_new_token Answer\non_new_token :\non_new_token 1\non_new_token .\non_new_token 17\non_new_token 690\non_new_token 67\non_new_token 372\non_new_token 187\non_new_token 674\non_new_token \n'1.1769067372187674'\nTracing and Token Counting#\nTracing and token counting are two capabilities we provide which are built on our callbacks mechanism.\nTracing#\nThere are two recommended ways to trace your LangChains:\nSetting the LANGCHAIN_TRACING environment variable to \"true\".\nUsing a context manager with tracing_enabled() to trace a particular block of code.\nNote if the environment variable is set, all code will be traced, regardless of whether or not it\u2019s within the context manager.\nimport os", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "ac5572e2a572-11", "text": "import os\nfrom langchain.agents import AgentType, initialize_agent, load_tools\nfrom langchain.callbacks import tracing_enabled\nfrom langchain.llms import OpenAI\n# To run the code, make sure to set OPENAI_API_KEY and SERPAPI_API_KEY\nllm = OpenAI(temperature=0)\ntools = load_tools([\"llm-math\", \"serpapi\"], llm=llm)\nagent = initialize_agent(\n tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n)\nquestions = [\n \"Who won the US Open men's final in 2019? What is his age raised to the 0.334 power?\",\n \"Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?\",\n \"Who won the most recent formula 1 grand prix? What is their age raised to the 0.23 power?\",\n \"Who won the US Open women's final in 2019? What is her age raised to the 0.34 power?\",\n \"Who is Beyonce's husband? What is his age raised to the 0.19 power?\",\n]\nos.environ[\"LANGCHAIN_TRACING\"] = \"true\"\n# Both of the agent runs will be traced because the environment variable is set\nagent.run(questions[0])\nwith tracing_enabled() as session:\n assert session\n agent.run(questions[1])\n> Entering new AgentExecutor chain...\n I need to find out who won the US Open men's final in 2019 and then calculate his age raised to the 0.334 power.\nAction: Search\nAction Input: \"US Open men's final 2019 winner\"", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "ac5572e2a572-12", "text": "Action: Search\nAction Input: \"US Open men's final 2019 winner\"\nObservation: Rafael Nadal defeated Daniil Medvedev in the final, 7\u20135, 6\u20133, 5\u20137, 4\u20136, 6\u20134 to win the men's singles tennis title at the 2019 US Open. It was his fourth US ...\nThought: I need to find out the age of the winner\nAction: Search\nAction Input: \"Rafael Nadal age\"\nObservation: 36 years\nThought: I need to calculate the age raised to the 0.334 power\nAction: Calculator\nAction Input: 36^0.334\nObservation: Answer: 3.3098250249682484\nThought: I now know the final answer\nFinal Answer: Rafael Nadal, aged 36, won the US Open men's final in 2019 and his age raised to the 0.334 power is 3.3098250249682484.\n> Finished chain.\n> Entering new AgentExecutor chain...\n I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\nAction: Search\nAction Input: \"Olivia Wilde boyfriend\"\nObservation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.\nThought: I need to find out Harry Styles' age.\nAction: Search\nAction Input: \"Harry Styles age\"\nObservation: 29 years\nThought: I need to calculate 29 raised to the 0.23 power.\nAction: Calculator", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "ac5572e2a572-13", "text": "Action: Calculator\nAction Input: 29^0.23\nObservation: Answer: 2.169459462491557\nThought: I now know the final answer.\nFinal Answer: Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557.\n> Finished chain.\n# Now, we unset the environment variable and use a context manager.\nif \"LANGCHAIN_TRACING\" in os.environ:\n del os.environ[\"LANGCHAIN_TRACING\"]\n# here, we are writing traces to \"my_test_session\"\nwith tracing_enabled(\"my_test_session\") as session:\n assert session\n agent.run(questions[0]) # this should be traced\nagent.run(questions[1]) # this should not be traced\n> Entering new AgentExecutor chain...\n I need to find out who won the US Open men's final in 2019 and then calculate his age raised to the 0.334 power.\nAction: Search\nAction Input: \"US Open men's final 2019 winner\"\nObservation: Rafael Nadal defeated Daniil Medvedev in the final, 7\u20135, 6\u20133, 5\u20137, 4\u20136, 6\u20134 to win the men's singles tennis title at the 2019 US Open. It was his fourth US ...\nThought: I need to find out the age of the winner\nAction: Search\nAction Input: \"Rafael Nadal age\"\nObservation: 36 years\nThought: I need to calculate the age raised to the 0.334 power\nAction: Calculator\nAction Input: 36^0.334\nObservation: Answer: 3.3098250249682484\nThought: I now know the final answer", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "ac5572e2a572-14", "text": "Thought: I now know the final answer\nFinal Answer: Rafael Nadal, aged 36, won the US Open men's final in 2019 and his age raised to the 0.334 power is 3.3098250249682484.\n> Finished chain.\n> Entering new AgentExecutor chain...\n I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\nAction: Search\nAction Input: \"Olivia Wilde boyfriend\"\nObservation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.\nThought: I need to find out Harry Styles' age.\nAction: Search\nAction Input: \"Harry Styles age\"\nObservation: 29 years\nThought: I need to calculate 29 raised to the 0.23 power.\nAction: Calculator\nAction Input: 29^0.23\nObservation: Answer: 2.169459462491557\nThought: I now know the final answer.\nFinal Answer: Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557.\n> Finished chain.\n\"Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557.\"\n# The context manager is concurrency safe:\nif \"LANGCHAIN_TRACING\" in os.environ:\n del os.environ[\"LANGCHAIN_TRACING\"]\n# start a background task\ntask = asyncio.create_task(agent.arun(questions[0])) # this should not be traced", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "ac5572e2a572-15", "text": "with tracing_enabled() as session:\n assert session\n tasks = [agent.arun(q) for q in questions[1:3]] # these should be traced\n await asyncio.gather(*tasks)\nawait task\n> Entering new AgentExecutor chain...\n> Entering new AgentExecutor chain...\n> Entering new AgentExecutor chain...\n I need to find out who won the grand prix and then calculate their age raised to the 0.23 power.\nAction: Search\nAction Input: \"Formula 1 Grand Prix Winner\" I need to find out who won the US Open men's final in 2019 and then calculate his age raised to the 0.334 power.\nAction: Search\nAction Input: \"US Open men's final 2019 winner\"Rafael Nadal defeated Daniil Medvedev in the final, 7\u20135, 6\u20133, 5\u20137, 4\u20136, 6\u20134 to win the men's singles tennis title at the 2019 US Open. It was his fourth US ... I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\nAction: Search\nAction Input: \"Olivia Wilde boyfriend\"Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.Lewis Hamilton has won 103 Grands Prix during his career. He won 21 races with McLaren and has won 82 with Mercedes. Lewis Hamilton holds the record for the ... I need to find out the age of the winner\nAction: Search", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "ac5572e2a572-16", "text": "Action: Search\nAction Input: \"Rafael Nadal age\"36 years I need to find out Harry Styles' age.\nAction: Search\nAction Input: \"Harry Styles age\" I need to find out Lewis Hamilton's age\nAction: Search\nAction Input: \"Lewis Hamilton Age\"29 years I need to calculate the age raised to the 0.334 power\nAction: Calculator\nAction Input: 36^0.334 I need to calculate 29 raised to the 0.23 power.\nAction: Calculator\nAction Input: 29^0.23Answer: 3.3098250249682484Answer: 2.16945946249155738 years\n> Finished chain.\n> Finished chain.\n I now need to calculate 38 raised to the 0.23 power\nAction: Calculator\nAction Input: 38^0.23Answer: 2.3086081644669734\n> Finished chain.\n\"Rafael Nadal, aged 36, won the US Open men's final in 2019 and his age raised to the 0.334 power is 3.3098250249682484.\"\nToken Counting#\nLangChain offers a context manager that allows you to count tokens.\nfrom langchain.callbacks import get_openai_callback\nllm = OpenAI(temperature=0)\nwith get_openai_callback() as cb:\n llm(\"What is the square root of 4?\")\ntotal_tokens = cb.total_tokens\nassert total_tokens > 0\nwith get_openai_callback() as cb:\n llm(\"What is the square root of 4?\")\n llm(\"What is the square root of 4?\")\nassert cb.total_tokens == total_tokens * 2\n# You can kick off concurrent runs from within the context manager\nwith get_openai_callback() as cb:", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "ac5572e2a572-17", "text": "with get_openai_callback() as cb:\n await asyncio.gather(\n *[llm.agenerate([\"What is the square root of 4?\"]) for _ in range(3)]\n )\nassert cb.total_tokens == total_tokens * 3\n# The context manager is concurrency safe\ntask = asyncio.create_task(llm.agenerate([\"What is the square root of 4?\"]))\nwith get_openai_callback() as cb:\n await llm.agenerate([\"What is the square root of 4?\"])\nawait task\nassert cb.total_tokens == total_tokens\nprevious\nPlan and Execute\nnext\nAutonomous Agents\n Contents\n \nCallbacks\nHow to use callbacks\nWhen do you want to use each of these?\nUsing an existing handler\nCreating a custom handler\nAsync Callbacks\nUsing multiple handlers, passing in handlers\nTracing and Token Counting\nTracing\nToken Counting\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/callbacks/getting_started.html"}
+{"id": "7366c01fc4a6-0", "text": ".ipynb\n.pdf\nGetting Started\n Contents \nOne Line Index Creation\nWalkthrough\nGetting Started#\nLangChain primarily focuses on constructing indexes with the goal of using them as a Retriever. In order to best understand what this means, it\u2019s worth highlighting what the base Retriever interface is. The BaseRetriever class in LangChain is as follows:\nfrom abc import ABC, abstractmethod\nfrom typing import List\nfrom langchain.schema import Document\nclass BaseRetriever(ABC):\n @abstractmethod\n def get_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"Get texts relevant for a query.\n Args:\n query: string to find relevant texts for\n Returns:\n List of relevant documents\n \"\"\"\nIt\u2019s that simple! The get_relevant_documents method can be implemented however you see fit.\nOf course, we also help construct what we think useful Retrievers are. The main type of Retriever that we focus on is a Vectorstore retriever. We will focus on that for the rest of this guide.\nIn order to understand what a vectorstore retriever is, it\u2019s important to understand what a Vectorstore is. So let\u2019s look at that.\nBy default, LangChain uses Chroma as the vectorstore to index and search embeddings. To walk through this tutorial, we\u2019ll first need to install chromadb.\npip install chromadb\nThis example showcases question answering over documents.\nWe have chosen this as the example for getting started because it nicely combines a lot of different elements (Text splitters, embeddings, vectorstores) and then also shows how to use them in a chain.\nQuestion answering over documents consists of four steps:\nCreate an index\nCreate a Retriever from that index\nCreate a question answering chain\nAsk questions!", "source": "https://python.langchain.com/en/latest/modules/indexes/getting_started.html"}
+{"id": "7366c01fc4a6-1", "text": "Create a Retriever from that index\nCreate a question answering chain\nAsk questions!\nEach of the steps has multiple sub steps and potential configurations. In this notebook we will primarily focus on (1). We will start by showing the one-liner for doing so, but then break down what is actually going on.\nFirst, let\u2019s import some common classes we\u2019ll use no matter what.\nfrom langchain.chains import RetrievalQA\nfrom langchain.llms import OpenAI\nNext in the generic setup, let\u2019s specify the document loader we want to use. You can download the state_of_the_union.txt file here\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../state_of_the_union.txt', encoding='utf8')\nOne Line Index Creation#\nTo get started as quickly as possible, we can use the VectorstoreIndexCreator.\nfrom langchain.indexes import VectorstoreIndexCreator\nindex = VectorstoreIndexCreator().from_loaders([loader])\nRunning Chroma using direct local API.\nUsing DuckDB in-memory for database. Data will be transient.\nNow that the index is created, we can use it to ask questions of the data! Note that under the hood this is actually doing a few steps as well, which we will cover later in this guide.\nquery = \"What did the president say about Ketanji Brown Jackson\"\nindex.query(query)\n\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\"\nquery = \"What did the president say about Ketanji Brown Jackson\"\nindex.query_with_sources(query)", "source": "https://python.langchain.com/en/latest/modules/indexes/getting_started.html"}
+{"id": "7366c01fc4a6-2", "text": "index.query_with_sources(query)\n{'question': 'What did the president say about Ketanji Brown Jackson',\n 'answer': \" The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, one of the nation's top legal minds, to continue Justice Breyer's legacy of excellence, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\\n\",\n 'sources': '../state_of_the_union.txt'}\nWhat is returned from the VectorstoreIndexCreator is VectorStoreIndexWrapper, which provides these nice query and query_with_sources functionality. If we just wanted to access the vectorstore directly, we can also do that.\nindex.vectorstore\n\nIf we then want to access the VectorstoreRetriever, we can do that with:\nindex.vectorstore.as_retriever()\nVectorStoreRetriever(vectorstore=, search_kwargs={})\nWalkthrough#\nOkay, so what\u2019s actually going on? How is this index getting created?\nA lot of the magic is being hid in this VectorstoreIndexCreator. What is this doing?\nThere are three main steps going on after the documents are loaded:\nSplitting documents into chunks\nCreating embeddings for each document\nStoring documents and embeddings in a vectorstore\nLet\u2019s walk through this in code\ndocuments = loader.load()\nNext, we will split the documents into chunks.\nfrom langchain.text_splitter import CharacterTextSplitter\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ntexts = text_splitter.split_documents(documents)\nWe will then select which embeddings we want to use.", "source": "https://python.langchain.com/en/latest/modules/indexes/getting_started.html"}
+{"id": "7366c01fc4a6-3", "text": "We will then select which embeddings we want to use.\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nWe now create the vectorstore to use as the index.\nfrom langchain.vectorstores import Chroma\ndb = Chroma.from_documents(texts, embeddings)\nRunning Chroma using direct local API.\nUsing DuckDB in-memory for database. Data will be transient.\nSo that\u2019s creating the index. Then, we expose this index in a retriever interface.\nretriever = db.as_retriever()\nThen, as before, we create a chain and use it to answer questions!\nqa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type=\"stuff\", retriever=retriever)\nquery = \"What did the president say about Ketanji Brown Jackson\"\nqa.run(query)\n\" The President said that Judge Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He said she is a consensus builder and has received a broad range of support from organizations such as the Fraternal Order of Police and former judges appointed by Democrats and Republicans.\"\nVectorstoreIndexCreator is just a wrapper around all this logic. It is configurable in the text splitter it uses, the embeddings it uses, and the vectorstore it uses. For example, you can configure it as below:\nindex_creator = VectorstoreIndexCreator(\n vectorstore_cls=Chroma, \n embedding=OpenAIEmbeddings(),\n text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n)", "source": "https://python.langchain.com/en/latest/modules/indexes/getting_started.html"}
+{"id": "7366c01fc4a6-4", "text": ")\nHopefully this highlights what is going on under the hood of VectorstoreIndexCreator. While we think it\u2019s important to have a simple way to create indexes, we also think it\u2019s important to understand what\u2019s going on under the hood.\nprevious\nIndexes\nnext\nDocument Loaders\n Contents\n \nOne Line Index Creation\nWalkthrough\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/getting_started.html"}
+{"id": "e3fe30b2ef8c-0", "text": ".rst\n.pdf\nDocument Loaders\n Contents \nTransform loaders\nPublic dataset or service loaders\nProprietary dataset or service loaders\nDocument Loaders#\nNote\nConceptual Guide\nCombining language models with your own text data is a powerful way to differentiate them.\nThe first step in doing this is to load the data into \u201cDocuments\u201d - a fancy way of say some pieces of text.\nThe document loader is aimed at making this easy.\nThe following document loaders are provided:\nTransform loaders#\nThese transform loaders transform data from a specific format into the Document format.\nFor example, there are transformers for CSV and SQL.\nMostly, these loaders input data from files but sometime from URLs.\nA primary driver of a lot of these transformers is the Unstructured python package.\nThis package transforms many types of files - text, powerpoint, images, html, pdf, etc - into text data.\nFor detailed instructions on how to get set up with Unstructured, see installation guidelines here.\nAirtable\nOpenAIWhisperParser\nCoNLL-U\nCopy Paste\nCSV\nEmail\nEPub\nEverNote\nMicrosoft Excel\nFacebook Chat\nFile Directory\nHTML\nImages\nJupyter Notebook\nJSON\nMarkdown\nMicrosoft PowerPoint\nMicrosoft Word\nOpen Document Format (ODT)\nPandas DataFrame\nPDF\nSitemap\nSubtitle\nTelegram\nTOML\nUnstructured File\nURL\nSelenium URL Loader\nPlaywright URL Loader\nWebBaseLoader\nWeather\nWhatsApp Chat\nPublic dataset or service loaders#\nThese datasets and sources are created for public domain and we use queries to search there\nand download necessary documents.\nFor example, Hacker News service.\nWe don\u2019t need any access permissions to these datasets and services.\nArxiv\nAZLyrics\nBiliBili\nCollege Confidential\nGutenberg\nHacker News\nHuggingFace dataset\niFixit", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders.html"}
+{"id": "e3fe30b2ef8c-1", "text": "College Confidential\nGutenberg\nHacker News\nHuggingFace dataset\niFixit\nIMSDb\nMediaWikiDump\nWikipedia\nYouTube transcripts\nProprietary dataset or service loaders#\nThese datasets and services are not from the public domain.\nThese loaders mostly transform data from specific formats of applications or cloud services,\nfor example Google Drive.\nWe need access tokens and sometime other parameters to get access to these datasets and services.\nAirbyte JSON\nApify Dataset\nAWS S3 Directory\nAWS S3 File\nAzure Blob Storage Container\nAzure Blob Storage File\nBlackboard\nBlockchain\nChatGPT Data\nConfluence\nExamples\nDiffbot\nDocugami\nDuckDB\nFauna\nFigma\nGitBook\nGit\nGoogle BigQuery\nGoogle Cloud Storage Directory\nGoogle Cloud Storage File\nGoogle Drive\nImage captions\nIugu\nJoplin\nMicrosoft OneDrive\nModern Treasury\nNotion DB 2/2\nNotion DB 1/2\nObsidian\nPsychic\nPySpark DataFrame Loader\nReadTheDocs Documentation\nReddit\nRoam\nSlack\nSnowflake\nSpreedly\nStripe\n2Markdown\nTwitter\nprevious\nGetting Started\nnext\nAirtable\n Contents\n \nTransform loaders\nPublic dataset or service loaders\nProprietary dataset or service loaders\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders.html"}
+{"id": "804d1d42f459-0", "text": ".rst\n.pdf\nVectorstores\nVectorstores#\nNote\nConceptual Guide\nVectorstores are one of the most important components of building indexes.\nFor an introduction to vectorstores and generic functionality see:\nGetting Started\nWe also have documentation for all the types of vectorstores that are supported.\nPlease see below for that list.\nAnalyticDB\nAnnoy\nAtlas\nAwaDB\nChroma\nClickHouse Vector Search\nDeep Lake\nDocArrayHnswSearch\nDocArrayInMemorySearch\nElasticSearch\nElasticVectorSearch class\nElasticKnnSearch Class\nFAISS\nLanceDB\nMatchingEngine\nMilvus\nCommented out until further notice\nMyScale\nOpenSearch\nPGVector\nPinecone\nQdrant\nRedis\nSingleStoreDB vector search\nSKLearnVectorStore\nSupabase (Postgres)\nTair\nTigris\nTypesense\nVectara\nWeaviate\nPersistance\nRetriever options\nZilliz\nprevious\ntiktoken (OpenAI) tokenizer\nnext\nGetting Started\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores.html"}
+{"id": "764d876df394-0", "text": ".rst\n.pdf\nRetrievers\nRetrievers#\nNote\nConceptual Guide\nThe retriever interface is a generic interface that makes it easy to combine documents with\nlanguage models. This interface exposes a get_relevant_documents method which takes in a query\n(a string) and returns a list of documents.\nPlease see below for a list of all the retrievers supported.\nArxiv\nAWS Kendra\nAzure Cognitive Search\nChatGPT Plugin\nSelf-querying with Chroma\nCohere Reranker\nContextual Compression\nStringing compressors and document transformers together\nDataberry\nElasticSearch BM25\nkNN\nLOTR (Merger Retriever)\nMetal\nPinecone Hybrid Search\nPubMed Retriever\nSelf-querying with Qdrant\nSelf-querying\nSVM\nTF-IDF\nTime Weighted VectorStore\nVectorStore\nVespa\nWeaviate Hybrid Search\nSelf-querying with Weaviate\nWikipedia\nZep\nprevious\nZilliz\nnext\nArxiv\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers.html"}
+{"id": "493568990047-0", "text": ".rst\n.pdf\nText Splitters\nText Splitters#\nNote\nConceptual Guide\nWhen you want to deal with long pieces of text, it is necessary to split up that text into chunks.\nAs simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What \u201csemantically related\u201d means could depend on the type of text.\nThis notebook showcases several ways to do that.\nAt a high level, text splitters work as following:\nSplit the text up into small, semantically meaningful chunks (often sentences).\nStart combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).\nOnce you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).\nThat means there are two different axes along which you can customize your text splitter:\nHow the text is split\nHow the chunk size is measured\nFor an introduction to the default text splitter and generic functionality see:\nGetting Started\nUsage examples for the text splitters:\nCharacter\nCode (including HTML, Markdown, Latex, Python, etc)\nNLTK\nRecursive Character\nspaCy\ntiktoken (OpenAI)\nMost LLMs are constrained by the number of tokens that you can pass in, which is not the same as the number of characters.\nIn order to get a more accurate estimate, we can use tokenizers to count the number of tokens in the text.\nWe use this number inside the ..TextSplitter classes.\nThis implemented as the from_ methods of the ..TextSplitter classes:\nHugging Face tokenizer\ntiktoken (OpenAI) tokenizer\nprevious\nTwitter\nnext\nGetting Started\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters.html"}
+{"id": "493568990047-1", "text": "Getting Started\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters.html"}
+{"id": "50cf76aa0c35-0", "text": ".ipynb\n.pdf\nSitemap\n Contents \nFiltering sitemap URLs\nAdd custom scraping rules\nLocal Sitemap\nSitemap#\nExtends from the WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrape and load all pages in the sitemap, returning each page as a Document.\nThe scraping is done concurrently. There are reasonable limits to concurrent requests, defaulting to 2 per second. If you aren\u2019t concerned about being a good citizen, or you control the scrapped server, or don\u2019t care about load. Note, while this will speed up the scraping process, but it may cause the server to block you. Be careful!\n!pip install nest_asyncio\nRequirement already satisfied: nest_asyncio in /Users/tasp/Code/projects/langchain/.venv/lib/python3.10/site-packages (1.5.6)\n[notice] A new release of pip available: 22.3.1 -> 23.0.1\n[notice] To update, run: pip install --upgrade pip\n# fixes a bug with asyncio and jupyter\nimport nest_asyncio\nnest_asyncio.apply()\nfrom langchain.document_loaders.sitemap import SitemapLoader\nsitemap_loader = SitemapLoader(web_path=\"https://langchain.readthedocs.io/sitemap.xml\")\ndocs = sitemap_loader.load()\nYou can change the requests_per_second parameter to increase the max concurrent requests. and use requests_kwargs to pass kwargs when send requests.\nsitemap_loader.requests_per_second = 2\n# Optional: avoid `[SSL: CERTIFICATE_VERIFY_FAILED]` issue\nsitemap_loader.requests_kwargs = {\"verify\": False}\ndocs[0]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-1", "text": "Document(page_content='\\n\\n\\n\\n\\n\\nWelcome to LangChain \u2014 \ud83e\udd9c\ud83d\udd17 LangChain 0.0.123\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nSkip to main content\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nCtrl+K\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\ud83e\udd9c\ud83d\udd17 LangChain 0.0.123\\n\\n\\n\\nGetting Started\\n\\nQuickstart Guide\\n\\nModules\\n\\nPrompt Templates\\nGetting Started\\nKey Concepts\\nHow-To Guides\\nCreate a custom prompt template\\nCreate a custom example selector\\nProvide few shot examples to a prompt\\nPrompt Serialization\\nExample Selectors\\nOutput Parsers\\n\\n\\nReference\\nPromptTemplates\\nExample Selector\\n\\n\\n\\n\\nLLMs\\nGetting Started\\nKey Concepts\\nHow-To Guides\\nGeneric Functionality\\nCustom LLM\\nFake LLM\\nLLM Caching\\nLLM Serialization\\nToken Usage Tracking\\n\\n\\nIntegrations\\nAI21\\nAleph Alpha\\nAnthropic\\nAzure OpenAI LLM Example\\nBanana\\nCerebriumAI LLM Example\\nCohere\\nDeepInfra LLM Example\\nForefrontAI LLM Example\\nGooseAI LLM Example\\nHugging Face Hub\\nManifest\\nModal\\nOpenAI\\nPetals LLM Example\\nPromptLayer OpenAI\\nSageMakerEndpoint\\nSelf-Hosted Models via Runhouse\\nStochasticAI\\nWriter\\n\\n\\nAsync API for LLM\\nStreaming with LLMs\\n\\n\\nReference\\n\\n\\nDocument Loaders\\nKey Concepts\\nHow To Guides\\nCoNLL-U\\nAirbyte JSON\\nAZLyrics\\nBlackboard\\nCollege Confidential\\nCopy Paste\\nCSV Loader\\nDirectory", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-2", "text": "JSON\\nAZLyrics\\nBlackboard\\nCollege Confidential\\nCopy Paste\\nCSV Loader\\nDirectory Loader\\nEmail\\nEverNote\\nFacebook Chat\\nFigma\\nGCS Directory\\nGCS File Storage\\nGitBook\\nGoogle Drive\\nGutenberg\\nHacker News\\nHTML\\niFixit\\nImages\\nIMSDb\\nMarkdown\\nNotebook\\nNotion\\nObsidian\\nPDF\\nPowerPoint\\nReadTheDocs Documentation\\nRoam\\ns3 Directory\\ns3 File\\nSubtitle Files\\nTelegram\\nUnstructured File Loader\\nURL\\nWeb Base\\nWord Documents\\nYouTube\\n\\n\\n\\n\\nUtils\\nKey Concepts\\nGeneric Utilities\\nBash\\nBing Search\\nGoogle Search\\nGoogle Serper API\\nIFTTT WebHooks\\nPython REPL\\nRequests\\nSearxNG Search API\\nSerpAPI\\nWolfram Alpha\\nZapier Natural Language Actions API\\n\\n\\nReference\\nPython REPL\\nSerpAPI\\nSearxNG Search\\nDocstore\\nText Splitter\\nEmbeddings\\nVectorStores\\n\\n\\n\\n\\nIndexes\\nGetting Started\\nKey Concepts\\nHow To Guides\\nEmbeddings\\nHypothetical Document Embeddings\\nText Splitter\\nVectorStores\\nAtlasDB\\nChroma\\nDeep Lake\\nElasticSearch\\nFAISS\\nMilvus\\nOpenSearch\\nPGVector\\nPinecone\\nQdrant\\nRedis\\nWeaviate\\nChatGPT Plugin Retriever\\nVectorStore Retriever\\nAnalyze Document\\nChat Index\\nGraph QA\\nQuestion Answering with Sources\\nQuestion Answering\\nSummarization\\nRetrieval Question/Answering\\nRetrieval Question Answering with Sources\\nVector DB Text Generation\\n\\n\\n\\n\\nChains\\nGetting Started\\nHow-To Guides\\nGeneric Chains\\nLoading from LangChainHub\\nLLM Chain\\nSequential Chains\\nSerialization\\nTransformation Chain\\n\\n\\nUtility Chains\\nAPI Chains\\nSelf-Critique Chain with Constitutional", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-3", "text": "Chain\\n\\n\\nUtility Chains\\nAPI Chains\\nSelf-Critique Chain with Constitutional AI\\nBashChain\\nLLMCheckerChain\\nLLM Math\\nLLMRequestsChain\\nLLMSummarizationCheckerChain\\nModeration\\nPAL\\nSQLite example\\n\\n\\nAsync API for Chain\\n\\n\\nKey Concepts\\nReference\\n\\n\\nAgents\\nGetting Started\\nKey Concepts\\nHow-To Guides\\nAgents and Vectorstores\\nAsync API for Agent\\nConversation Agent (for Chat Models)\\nChatGPT Plugins\\nCustom Agent\\nDefining Custom Tools\\nHuman as a tool\\nIntermediate Steps\\nLoading from LangChainHub\\nMax Iterations\\nMulti Input Tools\\nSearch Tools\\nSerialization\\nAdding SharedMemory to an Agent and its Tools\\nCSV Agent\\nJSON Agent\\nOpenAPI Agent\\nPandas Dataframe Agent\\nPython Agent\\nSQL Database Agent\\nVectorstore Agent\\nMRKL\\nMRKL Chat\\nReAct\\nSelf Ask With Search\\n\\n\\nReference\\n\\n\\nMemory\\nGetting Started\\nKey Concepts\\nHow-To Guides\\nConversationBufferMemory\\nConversationBufferWindowMemory\\nEntity Memory\\nConversation Knowledge Graph Memory\\nConversationSummaryMemory\\nConversationSummaryBufferMemory\\nConversationTokenBufferMemory\\nAdding Memory To an LLMChain\\nAdding Memory to a Multi-Input Chain\\nAdding Memory to an Agent\\nChatGPT Clone\\nConversation Agent\\nConversational Memory Customization\\nCustom Memory\\nMultiple Memory\\n\\n\\n\\n\\nChat\\nGetting Started\\nKey Concepts\\nHow-To Guides\\nAgent\\nChat Vector DB\\nFew Shot Examples\\nMemory\\nPromptLayer ChatOpenAI\\nStreaming\\nRetrieval Question/Answering\\nRetrieval Question Answering with Sources\\n\\n\\n\\n\\n\\nUse Cases\\n\\nAgents\\nChatbots\\nGenerate Examples\\nData Augmented Generation\\nQuestion Answering\\nSummarization\\nQuerying Tabular Data\\nExtraction\\nEvaluation\\nAgent Benchmarking: Search + Calculator\\nAgent VectorDB Question Answering Benchmarking\\nBenchmarking", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-4", "text": "Benchmarking: Search + Calculator\\nAgent VectorDB Question Answering Benchmarking\\nBenchmarking Template\\nData Augmented Question Answering\\nUsing Hugging Face Datasets\\nLLM Math\\nQuestion Answering Benchmarking: Paul Graham Essay\\nQuestion Answering Benchmarking: State of the Union Address\\nQA Generation\\nQuestion Answering\\nSQL Question Answering Benchmarking: Chinook\\n\\n\\nModel Comparison\\n\\nReference\\n\\nInstallation\\nIntegrations\\nAPI References\\nPrompts\\nPromptTemplates\\nExample Selector\\n\\n\\nUtilities\\nPython REPL\\nSerpAPI\\nSearxNG Search\\nDocstore\\nText Splitter\\nEmbeddings\\nVectorStores\\n\\n\\nChains\\nAgents\\n\\n\\n\\nEcosystem\\n\\nLangChain Ecosystem\\nAI21 Labs\\nAtlasDB\\nBanana\\nCerebriumAI\\nChroma\\nCohere\\nDeepInfra\\nDeep Lake\\nForefrontAI\\nGoogle Search Wrapper\\nGoogle Serper Wrapper\\nGooseAI\\nGraphsignal\\nHazy Research\\nHelicone\\nHugging Face\\nMilvus\\nModal\\nNLPCloud\\nOpenAI\\nOpenSearch\\nPetals\\nPGVector\\nPinecone\\nPromptLayer\\nQdrant\\nRunhouse\\nSearxNG Search API\\nSerpAPI\\nStochasticAI\\nUnstructured\\nWeights & Biases\\nWeaviate\\nWolfram Alpha Wrapper\\nWriter\\n\\n\\n\\nAdditional Resources\\n\\nLangChainHub\\nGlossary\\nLangChain Gallery\\nDeployments\\nTracing\\nDiscord\\nProduction Support\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n.rst\\n\\n\\n\\n\\n\\n\\n\\n.pdf\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nWelcome to LangChain\\n\\n\\n\\n\\n Contents", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-5", "text": "to LangChain\\n\\n\\n\\n\\n Contents \\n\\n\\n\\nGetting Started\\nModules\\nUse Cases\\nReference Docs\\nLangChain Ecosystem\\nAdditional Resources\\n\\n\\n\\n\\n\\n\\n\\n\\nWelcome to LangChain#\\nLarge language models (LLMs) are emerging as a transformative technology, enabling\\ndevelopers to build applications that they previously could not.\\nBut using these LLMs in isolation is often not enough to\\ncreate a truly powerful app - the real power comes when you are able to\\ncombine them with other sources of computation or knowledge.\\nThis library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include:\\n\u2753 Question Answering over specific documents\\n\\nDocumentation\\nEnd-to-end Example: Question Answering over Notion Database\\n\\n\ud83d\udcac Chatbots\\n\\nDocumentation\\nEnd-to-end Example: Chat-LangChain\\n\\n\ud83e\udd16 Agents\\n\\nDocumentation\\nEnd-to-end Example: GPT+WolframAlpha\\n\\n\\nGetting Started#\\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\\n\\nGetting Started Documentation\\n\\n\\n\\n\\n\\nModules#\\nThere are several main modules that LangChain provides support for.\\nFor each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.\\nThese modules are, in increasing order of complexity:\\n\\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\\nLLMs: This includes a generic interface for all LLMs, and common utilities for working with LLMs.\\nDocument Loaders: This includes a standard interface for loading documents, as well as specific integrations to all types of text data sources.\\nUtils: Language models are often more powerful when interacting with other sources of knowledge or computation. This can include Python REPLs, embeddings, search engines, and more.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-6", "text": "of knowledge or computation. This can include Python REPLs, embeddings, search engines, and more. LangChain provides a large collection of common utils to use in your application.\\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\\nChat: Chat models are a variation on Language Models that expose a different API - rather than working with raw text, they work with messages. LangChain provides a standard interface for working with them and doing all the same things as above.\\n\\n\\n\\n\\n\\nUse Cases#\\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\\n\\nAgents: Agents are systems that use a language model to interact with other tools. These can be used to do more grounded question/answering, interact with APIs, or even take actions.\\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\\nData Augmented Generation: Data Augmented Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-7", "text": "Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the generation step. Examples of this include summarization of long pieces of text and question/answering over specific data sources.\\nQuestion Answering: Answering questions over specific documents, only utilizing the information in those documents to construct an answer. A type of Data Augmented Generation.\\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\\nGenerate similar examples: Generating similar examples to a given input. This is a common use case for many applications, and LangChain provides some prompts/chains for assisting in this.\\nCompare models: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\\n\\n\\n\\n\\n\\nReference Docs#\\nAll of LangChain\u2019s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\\n\\nReference Documentation\\n\\n\\n\\n\\n\\nLangChain Ecosystem#\\nGuides for how other companies/products can be used with LangChain\\n\\nLangChain Ecosystem\\n\\n\\n\\n\\n\\nAdditional Resources#\\nAdditional collection of resources we think may be useful as you develop your application!\\n\\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\\nGlossary: A glossary of all", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-8", "text": "and explore other prompts, chains, and agents.\\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\\nDiscord: Join us on our Discord to discuss all things LangChain!\\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\\nProduction Support: As you move your LangChains into production, we\u2019d love to offer more comprehensive support. Please fill out this form and we\u2019ll set up a dedicated support Slack channel.\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nnext\\nQuickstart Guide\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n Contents\\n \\n\\n\\nGetting Started\\nModules\\nUse Cases\\nReference Docs\\nLangChain Ecosystem\\nAdditional Resources\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nBy Harrison Chase\\n\\n\\n\\n\\n \\n \u00a9 Copyright 2023, Harrison Chase.\\n \\n\\n\\n\\n\\n Last updated on Mar 24, 2023.\\n \\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n', lookup_str='', metadata={'source': 'https://python.langchain.com/en/stable/', 'loc': 'https://python.langchain.com/en/stable/', 'lastmod': '2023-03-24T19:30:54.647430+00:00', 'changefreq': 'weekly', 'priority': '1'}, lookup_index=0)", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-9", "text": "Filtering sitemap URLs#\nSitemaps can be massive files, with thousands of URLs. Often you don\u2019t need every single one of them. You can filter the URLs by passing a list of strings or regex patterns to the url_filter parameter. Only URLs that match one of the patterns will be loaded.\nloader = SitemapLoader(\n \"https://langchain.readthedocs.io/sitemap.xml\",\n filter_urls=[\"https://python.langchain.com/en/latest/\"]\n)\ndocuments = loader.load()\ndocuments[0]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-10", "text": "Document(page_content='\\n\\n\\n\\n\\n\\nWelcome to LangChain \u2014 \ud83e\udd9c\ud83d\udd17 LangChain 0.0.123\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nSkip to main content\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nCtrl+K\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\ud83e\udd9c\ud83d\udd17 LangChain 0.0.123\\n\\n\\n\\nGetting Started\\n\\nQuickstart Guide\\n\\nModules\\n\\nModels\\nLLMs\\nGetting Started\\nGeneric Functionality\\nHow to use the async API for LLMs\\nHow to write a custom LLM wrapper\\nHow (and why) to use the fake LLM\\nHow to cache LLM calls\\nHow to serialize LLM classes\\nHow to stream LLM responses\\nHow to track token usage\\n\\n\\nIntegrations\\nAI21\\nAleph Alpha\\nAnthropic\\nAzure OpenAI LLM Example\\nBanana\\nCerebriumAI LLM Example\\nCohere\\nDeepInfra LLM Example\\nForefrontAI LLM Example\\nGooseAI LLM Example\\nHugging Face Hub\\nManifest\\nModal\\nOpenAI\\nPetals LLM Example\\nPromptLayer OpenAI\\nSageMakerEndpoint\\nSelf-Hosted Models via Runhouse\\nStochasticAI\\nWriter\\n\\n\\nReference\\n\\n\\nChat Models\\nGetting Started\\nHow-To Guides\\nHow to use few shot examples\\nHow to stream responses\\n\\n\\nIntegrations\\nAzure\\nOpenAI\\nPromptLayer ChatOpenAI\\n\\n\\n\\n\\nText Embedding Models\\nAzureOpenAI\\nCohere\\nFake Embeddings\\nHugging Face Hub\\nInstructEmbeddings\\nOpenAI\\nSageMaker Endpoint Embeddings\\nSelf", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-11", "text": "Face Hub\\nInstructEmbeddings\\nOpenAI\\nSageMaker Endpoint Embeddings\\nSelf Hosted Embeddings\\nTensorflowHub\\n\\n\\n\\n\\nPrompts\\nPrompt Templates\\nGetting Started\\nHow-To Guides\\nHow to create a custom prompt template\\nHow to create a prompt template that uses few shot examples\\nHow to work with partial Prompt Templates\\nHow to serialize prompts\\n\\n\\nReference\\nPromptTemplates\\nExample Selector\\n\\n\\n\\n\\nChat Prompt Template\\nExample Selectors\\nHow to create a custom example selector\\nLengthBased ExampleSelector\\nMaximal Marginal Relevance ExampleSelector\\nNGram Overlap ExampleSelector\\nSimilarity ExampleSelector\\n\\n\\nOutput Parsers\\nOutput Parsers\\nCommaSeparatedListOutputParser\\nOutputFixingParser\\nPydanticOutputParser\\nRetryOutputParser\\nStructured Output Parser\\n\\n\\n\\n\\nIndexes\\nGetting Started\\nDocument Loaders\\nCoNLL-U\\nAirbyte JSON\\nAZLyrics\\nBlackboard\\nCollege Confidential\\nCopy Paste\\nCSV Loader\\nDirectory Loader\\nEmail\\nEverNote\\nFacebook Chat\\nFigma\\nGCS Directory\\nGCS File Storage\\nGitBook\\nGoogle Drive\\nGutenberg\\nHacker News\\nHTML\\niFixit\\nImages\\nIMSDb\\nMarkdown\\nNotebook\\nNotion\\nObsidian\\nPDF\\nPowerPoint\\nReadTheDocs Documentation\\nRoam\\ns3 Directory\\ns3 File\\nSubtitle Files\\nTelegram\\nUnstructured File Loader\\nURL\\nWeb Base\\nWord Documents\\nYouTube\\n\\n\\nText Splitters\\nGetting Started\\nCharacter Text Splitter\\nHuggingFace Length Function\\nLatex Text Splitter\\nMarkdown Text Splitter\\nNLTK Text Splitter\\nPython Code Text Splitter\\nRecursiveCharacterTextSplitter\\nSpacy Text Splitter\\ntiktoken (OpenAI) Length Function\\nTiktokenText Splitter\\n\\n\\nVectorstores\\nGetting Started\\nAtlasDB\\nChroma\\nDeep", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-12", "text": "Splitter\\n\\n\\nVectorstores\\nGetting Started\\nAtlasDB\\nChroma\\nDeep Lake\\nElasticSearch\\nFAISS\\nMilvus\\nOpenSearch\\nPGVector\\nPinecone\\nQdrant\\nRedis\\nWeaviate\\n\\n\\nRetrievers\\nChatGPT Plugin Retriever\\nVectorStore Retriever\\n\\n\\n\\n\\nMemory\\nGetting Started\\nHow-To Guides\\nConversationBufferMemory\\nConversationBufferWindowMemory\\nEntity Memory\\nConversation Knowledge Graph Memory\\nConversationSummaryMemory\\nConversationSummaryBufferMemory\\nConversationTokenBufferMemory\\nHow to add Memory to an LLMChain\\nHow to add memory to a Multi-Input Chain\\nHow to add Memory to an Agent\\nHow to customize conversational memory\\nHow to create a custom Memory class\\nHow to use multiple memroy classes in the same chain\\n\\n\\n\\n\\nChains\\nGetting Started\\nHow-To Guides\\nAsync API for Chain\\nLoading from LangChainHub\\nLLM Chain\\nSequential Chains\\nSerialization\\nTransformation Chain\\nAnalyze Document\\nChat Index\\nGraph QA\\nHypothetical Document Embeddings\\nQuestion Answering with Sources\\nQuestion Answering\\nSummarization\\nRetrieval Question/Answering\\nRetrieval Question Answering with Sources\\nVector DB Text Generation\\nAPI Chains\\nSelf-Critique Chain with Constitutional AI\\nBashChain\\nLLMCheckerChain\\nLLM Math\\nLLMRequestsChain\\nLLMSummarizationCheckerChain\\nModeration\\nPAL\\nSQLite example\\n\\n\\nReference\\n\\n\\nAgents\\nGetting Started\\nTools\\nGetting Started\\nDefining Custom Tools\\nMulti Input Tools\\nBash\\nBing Search\\nChatGPT Plugins\\nGoogle Search\\nGoogle Serper API\\nHuman as a tool\\nIFTTT WebHooks\\nPython REPL\\nRequests\\nSearch Tools\\nSearxNG Search API\\nSerpAPI\\nWolfram Alpha\\nZapier Natural Language Actions", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-13", "text": "Search API\\nSerpAPI\\nWolfram Alpha\\nZapier Natural Language Actions API\\n\\n\\nAgents\\nAgent Types\\nCustom Agent\\nConversation Agent (for Chat Models)\\nConversation Agent\\nMRKL\\nMRKL Chat\\nReAct\\nSelf Ask With Search\\n\\n\\nToolkits\\nCSV Agent\\nJSON Agent\\nOpenAPI Agent\\nPandas Dataframe Agent\\nPython Agent\\nSQL Database Agent\\nVectorstore Agent\\n\\n\\nAgent Executors\\nHow to combine agents and vectorstores\\nHow to use the async API for Agents\\nHow to create ChatGPT Clone\\nHow to access intermediate steps\\nHow to cap the max number of iterations\\nHow to add SharedMemory to an Agent and its Tools\\n\\n\\n\\n\\n\\nUse Cases\\n\\nPersonal Assistants\\nQuestion Answering over Docs\\nChatbots\\nQuerying Tabular Data\\nInteracting with APIs\\nSummarization\\nExtraction\\nEvaluation\\nAgent Benchmarking: Search + Calculator\\nAgent VectorDB Question Answering Benchmarking\\nBenchmarking Template\\nData Augmented Question Answering\\nUsing Hugging Face Datasets\\nLLM Math\\nQuestion Answering Benchmarking: Paul Graham Essay\\nQuestion Answering Benchmarking: State of the Union Address\\nQA Generation\\nQuestion Answering\\nSQL Question Answering Benchmarking: Chinook\\n\\n\\n\\nReference\\n\\nInstallation\\nIntegrations\\nAPI References\\nPrompts\\nPromptTemplates\\nExample Selector\\n\\n\\nUtilities\\nPython REPL\\nSerpAPI\\nSearxNG Search\\nDocstore\\nText Splitter\\nEmbeddings\\nVectorStores\\n\\n\\nChains\\nAgents\\n\\n\\n\\nEcosystem\\n\\nLangChain Ecosystem\\nAI21 Labs\\nAtlasDB\\nBanana\\nCerebriumAI\\nChroma\\nCohere\\nDeepInfra\\nDeep Lake\\nForefrontAI\\nGoogle Search Wrapper\\nGoogle Serper Wrapper\\nGooseAI\\nGraphsignal\\nHazy Research\\nHelicone\\nHugging", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-14", "text": "Serper Wrapper\\nGooseAI\\nGraphsignal\\nHazy Research\\nHelicone\\nHugging Face\\nMilvus\\nModal\\nNLPCloud\\nOpenAI\\nOpenSearch\\nPetals\\nPGVector\\nPinecone\\nPromptLayer\\nQdrant\\nRunhouse\\nSearxNG Search API\\nSerpAPI\\nStochasticAI\\nUnstructured\\nWeights & Biases\\nWeaviate\\nWolfram Alpha Wrapper\\nWriter\\n\\n\\n\\nAdditional Resources\\n\\nLangChainHub\\nGlossary\\nLangChain Gallery\\nDeployments\\nTracing\\nDiscord\\nProduction Support\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n.rst\\n\\n\\n\\n\\n\\n\\n\\n.pdf\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nWelcome to LangChain\\n\\n\\n\\n\\n Contents \\n\\n\\n\\nGetting Started\\nModules\\nUse Cases\\nReference Docs\\nLangChain Ecosystem\\nAdditional Resources\\n\\n\\n\\n\\n\\n\\n\\n\\nWelcome to LangChain#\\nLangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:\\n\\nBe data-aware: connect a language model to other sources of data\\nBe agentic: allow a language model to interact with its environment\\n\\nThe LangChain framework is designed with the above principles in mind.\\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\\n\\nGetting Started#\\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\\n\\nGetting Started Documentation\\n\\n\\n\\n\\n\\nModules#\\nThere", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-15", "text": "an Language Model application.\\n\\nGetting Started Documentation\\n\\n\\n\\n\\n\\nModules#\\nThere are several main modules that LangChain provides support for.\\nFor each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.\\nThese modules are, in increasing order of complexity:\\n\\nModels: The various model types and model integrations LangChain supports.\\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\\n\\n\\n\\n\\n\\nUse Cases#\\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\\n\\nPersonal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\\nQuestion Answering: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\\nChatbots: Since language models are good at producing text, that makes them", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-16", "text": "construct an answer.\\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\\nInteracting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.\\nExtraction: Extract structured information from text.\\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\\n\\n\\n\\n\\n\\nReference Docs#\\nAll of LangChain\u2019s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\\n\\nReference Documentation\\n\\n\\n\\n\\n\\nLangChain Ecosystem#\\nGuides for how other companies/products can be used with LangChain\\n\\nLangChain Ecosystem\\n\\n\\n\\n\\n\\nAdditional Resources#\\nAdditional collection of resources we think may be useful as you develop your application!\\n\\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\\nTracing: A guide on using tracing in LangChain", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-17", "text": "template repositories for deploying LangChain apps.\\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\\nDiscord: Join us on our Discord to discuss all things LangChain!\\nProduction Support: As you move your LangChains into production, we\u2019d love to offer more comprehensive support. Please fill out this form and we\u2019ll set up a dedicated support Slack channel.\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nnext\\nQuickstart Guide\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n Contents\\n \\n\\n\\nGetting Started\\nModules\\nUse Cases\\nReference Docs\\nLangChain Ecosystem\\nAdditional Resources\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nBy Harrison Chase\\n\\n\\n\\n\\n \\n \u00a9 Copyright 2023, Harrison Chase.\\n \\n\\n\\n\\n\\n Last updated on Mar 27, 2023.\\n \\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n', lookup_str='', metadata={'source': 'https://python.langchain.com/en/latest/', 'loc': 'https://python.langchain.com/en/latest/', 'lastmod': '2023-03-27T22:50:49.790324+00:00', 'changefreq': 'daily', 'priority': '0.9'}, lookup_index=0)", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-18", "text": "Add custom scraping rules#\nThe SitemapLoader uses beautifulsoup4 for the scraping process, and it scrapes every element on the page by default. The SitemapLoader constructor accepts a custom scraping function. This feature can be helpful to tailor the scraping process to your specific needs; for example, you might want to avoid scraping headers or navigation elements.\nThe following example shows how to develop and use a custom function to avoid navigation and header elements.\nImport the beautifulsoup4 library and define the custom function.\npip install beautifulsoup4\nfrom bs4 import BeautifulSoup\ndef remove_nav_and_header_elements(content: BeautifulSoup) -> str:\n # Find all 'nav' and 'header' elements in the BeautifulSoup object\n nav_elements = content.find_all('nav')\n header_elements = content.find_all('header')\n # Remove each 'nav' and 'header' element from the BeautifulSoup object\n for element in nav_elements + header_elements:\n element.decompose()\n return str(content.get_text())\nAdd your custom function to the SitemapLoader object.\nloader = SitemapLoader(\n \"https://langchain.readthedocs.io/sitemap.xml\",\n filter_urls=[\"https://python.langchain.com/en/latest/\"],\n parsing_function=remove_nav_and_header_elements\n)\nLocal Sitemap#\nThe sitemap loader can also be used to load local files.\nsitemap_loader = SitemapLoader(web_path=\"example_data/sitemap.xml\", is_local=True)\ndocs = sitemap_loader.load()\nFetching pages: 100%|####################################################################################################################################| 3/3 [00:00<00:00, 3.91it/s]\nprevious\nPDF\nnext\nSubtitle\n Contents\n \nFiltering sitemap URLs\nAdd custom scraping rules\nLocal Sitemap\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "50cf76aa0c35-19", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html"}
+{"id": "ee2622e0672d-0", "text": ".ipynb\n.pdf\nImages\n Contents \nUsing Unstructured\nRetain Elements\nImages#\nThis covers how to load images such as JPG or PNG into a document format that we can use downstream.\nUsing Unstructured#\n#!pip install pdfminer\nfrom langchain.document_loaders.image import UnstructuredImageLoader\nloader = UnstructuredImageLoader(\"layout-parser-paper-fast.jpg\")\ndata = loader.load()\ndata[0]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image.html"}
+{"id": "ee2622e0672d-1", "text": "Document(page_content=\"LayoutParser: A Unified Toolkit for Deep\\nLearning Based Document Image Analysis\\n\\n\\n\u2018Zxjiang Shen' (F3}, Ruochen Zhang\u201d, Melissa Dell*, Benjamin Charles Germain\\nLeet, Jacob Carlson, and Weining LiF\\n\\n\\nsugehen\\n\\nshangthrows, et\\n\\n\u201cAbstract. Recent advanocs in document image analysis (DIA) have been\\n\u2018pimarliy driven bythe application of neural networks dell roar\\n{uteomer could be aly deployed in production and extended fo farther\\n[nvetigtion. However, various factory ke lcely organize codebanee\\nsnd sophisticated modal cnigurations compat the ey ree of\\n\u2018erin! innovation by wide sence, Though there have been sng\\n\u2018Hors to improve reuablty and simplify deep lees (DL) mode\\n\u2018aon, sone of them ae optimized for challenge inthe demain of DIA,\\nThis roprscte a major gap in the extng fol, sw DIA i eal to\\nscademic research acon wie range of dpi in the social ssencee\\n[rary for streamlining the sage of DL in DIA research and appicn\\n\u2018tons The core LayoutFaraer brary comes with a sch of simple and\\nIntative interfaee or applying and eutomiing DI. odel fr Inyo de\\npltfom for sharing both protrined modes an fal document dist\\n{ation pipeline We demonutate that LayootPareer shea fr both\\nlightweight and lrgeseledgtieation pipelines in eal-word uae ces\\nThe leary pblely smal at Btspe://layost-pareergsthab So\\n\\n\\n\\n\u2018Keywords: Document Image Analysis\u00bb Deep Learning Layout Analysis\\n\u2018Character Renguition - Open Serres dary \u00ab", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image.html"}
+{"id": "ee2622e0672d-2", "text": "Image Analysis\u00bb Deep Learning Layout Analysis\\n\u2018Character Renguition - Open Serres dary \u00ab Tol\\n\\n\\nIntroduction\\n\\n\\n\u2018Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of\\ndoctiment image analysis (DIA) tea including document image clasiffeation [I]\\n\", lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg'}, lookup_index=0)", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image.html"}
+{"id": "ee2622e0672d-3", "text": "Retain Elements#\nUnder the hood, Unstructured creates different \u201celements\u201d for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode=\"elements\".\nloader = UnstructuredImageLoader(\"layout-parser-paper-fast.jpg\", mode=\"elements\")\ndata = loader.load()\ndata[0]\nDocument(page_content='LayoutParser: A Unified Toolkit for Deep\\nLearning Based Document Image Analysis\\n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg', 'filename': 'layout-parser-paper-fast.jpg', 'page_number': 1, 'category': 'Title'}, lookup_index=0)\nprevious\nHTML\nnext\nJupyter Notebook\n Contents\n \nUsing Unstructured\nRetain Elements\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image.html"}
+{"id": "a54533024d16-0", "text": ".ipynb\n.pdf\nModern Treasury\nModern Treasury#\nModern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money.\nConnect to banks and payment systems\nTrack transactions and balances in real-time\nAutomate payment operations for scale\nThis notebook covers how to load data from the Modern Treasury REST API into a format that can be ingested into LangChain, along with example usage for vectorization.\nimport os\nfrom langchain.document_loaders import ModernTreasuryLoader\nfrom langchain.indexes import VectorstoreIndexCreator\nThe Modern Treasury API requires an organization ID and API key, which can be found in the Modern Treasury dashboard within developer settings.\nThis document loader also requires a resource option which defines what data you want to load.\nFollowing resources are available:\npayment_orders Documentation\nexpected_payments Documentation\nreturns Documentation\nincoming_payment_details Documentation\ncounterparties Documentation\ninternal_accounts Documentation\nexternal_accounts Documentation\ntransactions Documentation\nledgers Documentation\nledger_accounts Documentation\nledger_transactions Documentation\nevents Documentation\ninvoices Documentation\nmodern_treasury_loader = ModernTreasuryLoader(\"payment_orders\")\n# Create a vectorstore retriver from the loader\n# see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details\nindex = VectorstoreIndexCreator().from_loaders([modern_treasury_loader])\nmodern_treasury_doc_retriever = index.vectorstore.as_retriever()\nprevious\nMicrosoft OneDrive\nnext\nNotion DB 2/2\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/modern_treasury.html"}
+{"id": "7fa49adfa264-0", "text": ".ipynb\n.pdf\nOpen Document Format (ODT)\nOpen Document Format (ODT)#\nThe Open Document Format for Office Applications (ODF), also known as OpenDocument, is an open file format for word processing documents, spreadsheets, presentations and graphics and using ZIP-compressed XML files. It was developed with the aim of providing an open, XML-based file format specification for office applications.\nThe standard is developed and maintained by a technical committee in the Organization for the Advancement of Structured Information Standards (OASIS) consortium. It was based on the Sun Microsystems specification for OpenOffice.org XML, the default format for OpenOffice.org and LibreOffice. It was originally developed for StarOffice \u201cto provide an open standard for office documents.\u201d\nThe UnstructuredODTLoader is used to load Open Office ODT files.\nfrom langchain.document_loaders import UnstructuredODTLoader\nloader = UnstructuredODTLoader(\"example_data/fake.odt\", mode=\"elements\")\ndocs = loader.load()\ndocs[0]\nDocument(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.odt', 'filename': 'example_data/fake.odt', 'category': 'Title'})\nprevious\nMicrosoft Word\nnext\nPandas DataFrame\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/odt.html"}
+{"id": "0946553b0e47-0", "text": ".ipynb\n.pdf\nTOML\nTOML#\nTOML is a file format for configuration files. It is intended to be easy to read and write, and is designed to map unambiguously to a dictionary. Its specification is open-source. TOML is implemented in many programming languages. The name TOML is an acronym for \u201cTom\u2019s Obvious, Minimal Language\u201d referring to its creator, Tom Preston-Werner.\nIf you need to load Toml files, use the TomlLoader.\nfrom langchain.document_loaders import TomlLoader\nloader = TomlLoader('example_data/fake_rule.toml')\nrule = loader.load()\nrule\n[Document(page_content='{\"internal\": {\"creation_date\": \"2023-05-01\", \"updated_date\": \"2022-05-01\", \"release\": [\"release_type\"], \"min_endpoint_version\": \"some_semantic_version\", \"os_list\": [\"operating_system_list\"]}, \"rule\": {\"uuid\": \"some_uuid\", \"name\": \"Fake Rule Name\", \"description\": \"Fake description of rule\", \"query\": \"process where process.name : \\\\\"somequery\\\\\"\\\\n\", \"threat\": [{\"framework\": \"MITRE ATT&CK\", \"tactic\": {\"name\": \"Execution\", \"id\": \"TA0002\", \"reference\": \"https://attack.mitre.org/tactics/TA0002/\"}}]}}', metadata={'source': 'example_data/fake_rule.toml'})]\nprevious\nTelegram\nnext\nUnstructured File\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/toml.html"}
+{"id": "199db7255963-0", "text": ".ipynb\n.pdf\nCoNLL-U\nCoNLL-U#\nCoNLL-U is revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:\nWord lines containing the annotation of a word/token in 10 fields separated by single tab characters; see below.\nBlank lines marking sentence boundaries.\nComment lines starting with hash (#).\nThis is an example of how to load a file in CoNLL-U format. The whole file is treated as one document. The example data (conllu.conllu) is based on one of the standard UD/CoNLL-U examples.\nfrom langchain.document_loaders import CoNLLULoader\nloader = CoNLLULoader(\"example_data/conllu.conllu\")\ndocument = loader.load()\ndocument\n[Document(page_content='They buy and sell books.', metadata={'source': 'example_data/conllu.conllu'})]\nprevious\nOpenAIWhisperParser\nnext\nCopy Paste\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/conll-u.html"}
+{"id": "28cdf3f7d35c-0", "text": ".ipynb\n.pdf\nEPub\n Contents \nRetain Elements\nEPub#\nEPUB is an e-book file format that uses the \u201c.epub\u201d file extension. The term is short for electronic publication and is sometimes styled ePub. EPUB is supported by many e-readers, and compatible software is available for most smartphones, tablets, and computers.\nThis covers how to load .epub documents into the Document format that we can use downstream. You\u2019ll need to install the pandocs package for this loader to work.\n#!pip install pandocs\nfrom langchain.document_loaders import UnstructuredEPubLoader\nloader = UnstructuredEPubLoader(\"winter-sports.epub\")\ndata = loader.load()\nRetain Elements#\nUnder the hood, Unstructured creates different \u201celements\u201d for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode=\"elements\".\nloader = UnstructuredEPubLoader(\"winter-sports.epub\", mode=\"elements\")\ndata = loader.load()\ndata[0]\nDocument(page_content='The Project Gutenberg eBook of Winter Sports in\\nSwitzerland, by E. F. Benson', lookup_str='', metadata={'source': 'winter-sports.epub', 'page_number': 1, 'category': 'Title'}, lookup_index=0)\nprevious\nEmail\nnext\nEverNote\n Contents\n \nRetain Elements\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/epub.html"}
+{"id": "359148380e25-0", "text": ".ipynb\n.pdf\nNotion DB 2/2\n Contents \nRequirements\nSetup\n1. Create a Notion Table Database\n2. Create a Notion Integration\n3. Connect the Integration to the Database\n4. Get the Database ID\nUsage\nNotion DB 2/2#\nNotion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.\nNotionDBLoader is a Python class for loading content from a Notion database. It retrieves pages from the database, reads their content, and returns a list of Document objects.\nRequirements#\nA Notion Database\nNotion Integration Token\nSetup#\n1. Create a Notion Table Database#\nCreate a new table database in Notion. You can add any column to the database and they will be treated as metadata. For example you can add the following columns:\nTitle: set Title as the default property.\nCategories: A Multi-select property to store categories associated with the page.\nKeywords: A Multi-select property to store keywords associated with the page.\nAdd your content to the body of each page in the database. The NotionDBLoader will extract the content and metadata from these pages.\n2. Create a Notion Integration#\nTo create a Notion Integration, follow these steps:\nVisit the Notion Developers page and log in with your Notion account.\nClick on the \u201c+ New integration\u201d button.\nGive your integration a name and choose the workspace where your database is located.\nSelect the require capabilities, this extension only need the Read content capability\nClick the \u201cSubmit\u201d button to create the integration.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/notiondb.html"}
+{"id": "359148380e25-1", "text": "Click the \u201cSubmit\u201d button to create the integration.\nOnce the integration is created, you\u2019ll be provided with an Integration Token (API key). Copy this token and keep it safe, as you\u2019ll need it to use the NotionDBLoader.\n3. Connect the Integration to the Database#\nTo connect your integration to the database, follow these steps:\nOpen your database in Notion.\nClick on the three-dot menu icon in the top right corner of the database view.\nClick on the \u201c+ New integration\u201d button.\nFind your integration, you may need to start typing its name in the search box.\nClick on the \u201cConnect\u201d button to connect the integration to the database.\n4. Get the Database ID#\nTo get the database ID, follow these steps:\nOpen your database in Notion.\nClick on the three-dot menu icon in the top right corner of the database view.\nSelect \u201cCopy link\u201d from the menu to copy the database URL to your clipboard.\nThe database ID is the long string of alphanumeric characters found in the URL. It typically looks like this: https://www.notion.so/username/8935f9d140a04f95a872520c4f123456?v=\u2026. In this example, the database ID is 8935f9d140a04f95a872520c4f123456.\nWith the database properly set up and the integration token and database ID in hand, you can now use the NotionDBLoader code to load content and metadata from your Notion database.\nUsage#\nNotionDBLoader is part of the langchain package\u2019s document loaders. You can use it as follows:\nfrom getpass import getpass\nNOTION_TOKEN = getpass()\nDATABASE_ID = getpass()\n\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\n\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\nfrom langchain.document_loaders import NotionDBLoader", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/notiondb.html"}
+{"id": "359148380e25-2", "text": "\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\nfrom langchain.document_loaders import NotionDBLoader\nloader = NotionDBLoader(\n integration_token=NOTION_TOKEN, \n database_id=DATABASE_ID,\n request_timeout_sec=30 # optional, defaults to 10\n)\ndocs = loader.load()\nprint(docs)\nprevious\nModern Treasury\nnext\nNotion DB 1/2\n Contents\n \nRequirements\nSetup\n1. Create a Notion Table Database\n2. Create a Notion Integration\n3. Connect the Integration to the Database\n4. Get the Database ID\nUsage\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/notiondb.html"}
+{"id": "1800ecb21a83-0", "text": ".ipynb\n.pdf\nDiffbot\nDiffbot#\nUnlike traditional web scraping tools, Diffbot doesn\u2019t require any rules to read the content on a page.\nIt starts with computer vision, which classifies a page into one of 20 possible types. Content is then interpreted by a machine learning model trained to identify the key attributes on a page based on its type.\nThe result is a website transformed into clean structured data (like JSON or CSV), ready for your application.\nThis covers how to extract HTML documents from a list of URLs using the Diffbot extract API, into a document format that we can use downstream.\nurls = [\n \"https://python.langchain.com/en/latest/index.html\",\n]\nThe Diffbot Extract API Requires an API token. Once you have it, you can extract the data.\nRead instructions how to get the Diffbot API Token.\nimport os\nfrom langchain.document_loaders import DiffbotLoader\nloader = DiffbotLoader(urls=urls, api_token=os.environ.get(\"DIFFBOT_API_TOKEN\"))\nWith the .load() method, you can see the documents loaded\nloader.load()", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html"}
+{"id": "1800ecb21a83-1", "text": "[Document(page_content='LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:\\nBe data-aware: connect a language model to other sources of data\\nBe agentic: allow a language model to interact with its environment\\nThe LangChain framework is designed with the above principles in mind.\\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\\nGetting Started\\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\\nGetting Started Documentation\\nModules\\nThere are several main modules that LangChain provides support for. For each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides. These modules are, in increasing order of complexity:\\nModels: The various model types and model integrations LangChain supports.\\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from,", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html"}
+{"id": "1800ecb21a83-2", "text": "until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\\nUse Cases\\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\\nPersonal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\\nQuestion Answering: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\\nInteracting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.\\nExtraction: Extract structured information from text.\\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\\nReference Docs\\nAll of LangChain\u2019s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\\nReference Documentation\\nLangChain Ecosystem\\nGuides for how other companies/products can be used with LangChain\\nLangChain Ecosystem\\nAdditional Resources\\nAdditional collection of resources we think may be useful as you develop your application!\\nLangChainHub: The LangChainHub is", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html"}
+{"id": "1800ecb21a83-3", "text": "think may be useful as you develop your application!\\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\\nDiscord: Join us on our Discord to discuss all things LangChain!\\nProduction Support: As you move your LangChains into production, we\u2019d love to offer more comprehensive support. Please fill out this form and we\u2019ll set up a dedicated support Slack channel.', metadata={'source': 'https://python.langchain.com/en/latest/index.html'})]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html"}
+{"id": "1800ecb21a83-4", "text": "previous\nConfluence\nnext\nDocugami\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html"}
+{"id": "1c43ba46f583-0", "text": ".ipynb\n.pdf\nGoogle Cloud Storage File\nGoogle Cloud Storage File#\nGoogle Cloud Storage is a managed service for storing unstructured data.\nThis covers how to load document objects from an Google Cloud Storage (GCS) file object (blob).\n# !pip install google-cloud-storage\nfrom langchain.document_loaders import GCSFileLoader\nloader = GCSFileLoader(project_name=\"aist\", bucket=\"testing-hwc\", blob=\"fake.docx\")\nloader.load()\n/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a \"quota exceeded\" or \"API not enabled\" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/\n warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)\n[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmp3srlf8n8/fake.docx'}, lookup_index=0)]\nprevious\nGoogle Cloud Storage Directory\nnext\nGoogle Drive\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_cloud_storage_file.html"}
+{"id": "eeb2c26eff80-0", "text": ".ipynb\n.pdf\nGit\n Contents \nLoad existing repository from disk\nClone repository from url\nFiltering files to load\nGit#\nGit is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers collaboratively developing source code during software development.\nThis notebook shows how to load text files from Git repository.\nLoad existing repository from disk#\n!pip install GitPython\nfrom git import Repo\nrepo = Repo.clone_from(\n \"https://github.com/hwchase17/langchain\", to_path=\"./example_data/test_repo1\"\n)\nbranch = repo.head.reference\nfrom langchain.document_loaders import GitLoader\nloader = GitLoader(repo_path=\"./example_data/test_repo1/\", branch=branch)\ndata = loader.load()\nlen(data)\nprint(data[0])\npage_content='.venv\\n.github\\n.git\\n.mypy_cache\\n.pytest_cache\\nDockerfile' metadata={'file_path': '.dockerignore', 'file_name': '.dockerignore', 'file_type': ''}\nClone repository from url#\nfrom langchain.document_loaders import GitLoader\nloader = GitLoader(\n clone_url=\"https://github.com/hwchase17/langchain\",\n repo_path=\"./example_data/test_repo2/\",\n branch=\"master\",\n)\ndata = loader.load()\nlen(data)\n1074\nFiltering files to load#\nfrom langchain.document_loaders import GitLoader\n# eg. loading only python files\nloader = GitLoader(repo_path=\"./example_data/test_repo1/\", file_filter=lambda file_path: file_path.endswith(\".py\"))\nprevious\nGitBook\nnext\nGoogle BigQuery\n Contents\n \nLoad existing repository from disk\nClone repository from url\nFiltering files to load\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/git.html"}
+{"id": "eeb2c26eff80-1", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/git.html"}
+{"id": "e223e38ced13-0", "text": ".ipynb\n.pdf\nImage captions\n Contents \nPrepare a list of image urls from Wikimedia\nCreate the loader\nCreate the index\nQuery\nImage captions#\nBy default, the loader utilizes the pre-trained Salesforce BLIP image captioning model.\nThis notebook shows how to use the ImageCaptionLoader to generate a query-able index of image captions\n#!pip install transformers\nfrom langchain.document_loaders import ImageCaptionLoader\nPrepare a list of image urls from Wikimedia#\nlist_image_urls = [\n 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Hyla_japonica_sep01.jpg/260px-Hyla_japonica_sep01.jpg',\n 'https://upload.wikimedia.org/wikipedia/commons/thumb/7/71/Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg/270px-Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg',\n 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg/251px-Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg',\n 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Passion_fruits_-_whole_and_halved.jpg/270px-Passion_fruits_-_whole_and_halved.jpg',", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image_captions.html"}
+{"id": "e223e38ced13-1", "text": "'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Messier83_-_Heic1403a.jpg/277px-Messier83_-_Heic1403a.jpg',\n 'https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg/288px-2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg',\n 'https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg/224px-Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg',\n]\nCreate the loader#\nloader = ImageCaptionLoader(path_images=list_image_urls)\nlist_docs = loader.load()\nlist_docs\n/Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/transformers/generation/utils.py:1313: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.\n warnings.warn(", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image_captions.html"}
+{"id": "e223e38ced13-2", "text": "warnings.warn(\n[Document(page_content='an image of a frog on a flower [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Hyla_japonica_sep01.jpg/260px-Hyla_japonica_sep01.jpg'}),\n Document(page_content='an image of a shark swimming in the ocean [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/7/71/Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg/270px-Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg'}),\n Document(page_content='an image of a painting of a battle scene [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg/251px-Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg'}),\n Document(page_content='an image of a passion fruit and a half cut passion [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Passion_fruits_-_whole_and_halved.jpg/270px-Passion_fruits_-_whole_and_halved.jpg'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image_captions.html"}
+{"id": "e223e38ced13-3", "text": "Document(page_content='an image of the spiral galaxy [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Messier83_-_Heic1403a.jpg/277px-Messier83_-_Heic1403a.jpg'}),\n Document(page_content='an image of a man on skis in the snow [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg/288px-2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg'}),\n Document(page_content='an image of a flower in the dark [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg/224px-Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg'})]\nfrom PIL import Image\nimport requests\nImage.open(requests.get(list_image_urls[0], stream=True).raw).convert('RGB')\nCreate the index#\nfrom langchain.indexes import VectorstoreIndexCreator\nindex = VectorstoreIndexCreator().from_loaders([loader])", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image_captions.html"}
+{"id": "e223e38ced13-4", "text": "index = VectorstoreIndexCreator().from_loaders([loader])\n/Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n from .autonotebook import tqdm as notebook_tqdm\n/Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/transformers/generation/utils.py:1313: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.\n warnings.warn(\nUsing embedded DuckDB without persistence: data will be transient\nQuery#\nquery = \"What's the painting about?\"\nindex.query(query)\n' The painting is about a battle scene.'\nquery = \"What kind of images are there?\"\nindex.query(query)\n' There are images of a spiral galaxy, a painting of a battle scene, a flower in the dark, and a frog on a flower.'\nprevious\nGoogle Drive\nnext\nIugu\n Contents\n \nPrepare a list of image urls from Wikimedia\nCreate the loader\nCreate the index\nQuery\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image_captions.html"}
+{"id": "d01b56fc1b88-0", "text": ".ipynb\n.pdf\nWeather\nWeather#\nOpenWeatherMap is an open source weather service provider\nThis loader fetches the weather data from the OpenWeatherMap\u2019s OneCall API, using the pyowm Python package. You must initialize the loader with your OpenWeatherMap API token and the names of the cities you want the weather data for.\nfrom langchain.document_loaders import WeatherDataLoader\n#!pip install pyowm\n# Set API key either by passing it in to constructor directly\n# or by setting the environment variable \"OPENWEATHERMAP_API_KEY\".\nfrom getpass import getpass\nOPENWEATHERMAP_API_KEY = getpass()\nloader = WeatherDataLoader.from_params(['chennai','vellore'], openweathermap_api_key=OPENWEATHERMAP_API_KEY) \ndocuments = loader.load()\ndocuments\nprevious\nWebBaseLoader\nnext\nWhatsApp Chat\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/weather.html"}
+{"id": "0136005c27d0-0", "text": ".ipynb\n.pdf\nGitBook\n Contents \nLoad from single GitBook page\nLoad from all paths in a given GitBook\nGitBook#\nGitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.\nThis notebook shows how to pull page data from any GitBook.\nfrom langchain.document_loaders import GitbookLoader\nLoad from single GitBook page#\nloader = GitbookLoader(\"https://docs.gitbook.com\")\npage_data = loader.load()\npage_data\n[Document(page_content='Introduction to GitBook\\nGitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.\\nWe want to help \\nteams to work more efficiently\\n by creating a simple yet powerful platform for them to \\nshare their knowledge\\n.\\nOur mission is to make a \\nuser-friendly\\n and \\ncollaborative\\n product for everyone to create, edit and share knowledge through documentation.\\nPublish your documentation in 5 easy steps\\nImport\\n\\nMove your existing content to GitBook with ease.\\nGit Sync\\n\\nBenefit from our bi-directional synchronisation with GitHub and GitLab.\\nOrganise your content\\n\\nCreate pages and spaces and organize them into collections\\nCollaborate\\n\\nInvite other users and collaborate asynchronously with ease.\\nPublish your docs\\n\\nShare your documentation with selected users or with everyone.\\nNext\\n - Getting started\\nOverview\\nLast modified \\n3mo ago', lookup_str='', metadata={'source': 'https://docs.gitbook.com', 'title': 'Introduction to GitBook'}, lookup_index=0)]\nLoad from all paths in a given GitBook#\nFor this to work, the GitbookLoader needs to be initialized with the root path (https://docs.gitbook.com in this example) and have load_all_paths set to True.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html"}
+{"id": "0136005c27d0-1", "text": "loader = GitbookLoader(\"https://docs.gitbook.com\", load_all_paths=True)\nall_pages_data = loader.load()\nFetching text from https://docs.gitbook.com/\nFetching text from https://docs.gitbook.com/getting-started/overview\nFetching text from https://docs.gitbook.com/getting-started/import\nFetching text from https://docs.gitbook.com/getting-started/git-sync\nFetching text from https://docs.gitbook.com/getting-started/content-structure\nFetching text from https://docs.gitbook.com/getting-started/collaboration\nFetching text from https://docs.gitbook.com/getting-started/publishing\nFetching text from https://docs.gitbook.com/tour/quick-find\nFetching text from https://docs.gitbook.com/tour/editor\nFetching text from https://docs.gitbook.com/tour/customization\nFetching text from https://docs.gitbook.com/tour/member-management\nFetching text from https://docs.gitbook.com/tour/pdf-export\nFetching text from https://docs.gitbook.com/tour/activity-history\nFetching text from https://docs.gitbook.com/tour/insights\nFetching text from https://docs.gitbook.com/tour/notifications\nFetching text from https://docs.gitbook.com/tour/internationalization\nFetching text from https://docs.gitbook.com/tour/keyboard-shortcuts\nFetching text from https://docs.gitbook.com/tour/seo\nFetching text from https://docs.gitbook.com/advanced-guides/custom-domain\nFetching text from https://docs.gitbook.com/advanced-guides/advanced-sharing-and-security\nFetching text from https://docs.gitbook.com/advanced-guides/integrations\nFetching text from https://docs.gitbook.com/billing-and-admin/account-settings\nFetching text from https://docs.gitbook.com/billing-and-admin/plans\nFetching text from https://docs.gitbook.com/troubleshooting/faqs", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html"}
+{"id": "0136005c27d0-2", "text": "Fetching text from https://docs.gitbook.com/troubleshooting/faqs\nFetching text from https://docs.gitbook.com/troubleshooting/hard-refresh\nFetching text from https://docs.gitbook.com/troubleshooting/report-bugs\nFetching text from https://docs.gitbook.com/troubleshooting/connectivity-issues\nFetching text from https://docs.gitbook.com/troubleshooting/support\nprint(f\"fetched {len(all_pages_data)} documents.\")\n# show second document\nall_pages_data[2]\nfetched 28 documents.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html"}
+{"id": "0136005c27d0-3", "text": "Document(page_content=\"Import\\nFind out how to easily migrate your existing documentation and which formats are supported.\\nThe import function allows you to migrate and unify existing documentation in GitBook. You can choose to import single or multiple pages although limits apply. \\nPermissions\\nAll members with editor permission or above can use the import feature.\\nSupported formats\\nGitBook supports imports from websites or files that are:\\nMarkdown (.md or .markdown)\\nHTML (.html)\\nMicrosoft Word (.docx).\\nWe also support import from:\\nConfluence\\nNotion\\nGitHub Wiki\\nQuip\\nDropbox Paper\\nGoogle Docs\\nYou can also upload a ZIP\\n \\ncontaining HTML or Markdown files when \\nimporting multiple pages.\\nNote: this feature is in beta.\\nFeel free to suggest import sources we don't support yet and \\nlet us know\\n if you have any issues.\\nImport panel\\nWhen you create a new space, you'll have the option to import content straight away:\\nThe new page menu\\nImport a page or subpage by selecting \\nImport Page\\n from the New Page menu, or \\nImport Subpage\\n in the page action menu, found in the table of contents:\\nImport from the page action menu\\nWhen you choose your input source, instructions will explain how to proceed.\\nAlthough GitBook supports importing content from different kinds of sources, the end result might be different from your source due to differences in product features and document format.\\nLimits\\nGitBook currently has the following limits for imported content:\\nThe maximum number of pages that can be uploaded in a single import is \\n20.\\nThe maximum number of files (images etc.) that can be uploaded in a single import is \\n20.\\nGetting started - \\nPrevious\\nOverview\\nNext\\n - Getting started\\nGit Sync\\nLast modified \\n4mo ago\", lookup_str='', metadata={'source':", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html"}
+{"id": "0136005c27d0-4", "text": "started\\nGit Sync\\nLast modified \\n4mo ago\", lookup_str='', metadata={'source': 'https://docs.gitbook.com/getting-started/import', 'title': 'Import'}, lookup_index=0)", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html"}
+{"id": "0136005c27d0-5", "text": "previous\nFigma\nnext\nGit\n Contents\n \nLoad from single GitBook page\nLoad from all paths in a given GitBook\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html"}
+{"id": "b4a6bf503031-0", "text": ".ipynb\n.pdf\nFauna\n Contents \nQuery data example\nQuery with Pagination\nFauna#\nFauna is a Document Database.\nQuery Fauna documents\n#!pip install fauna\nQuery data example#\nfrom langchain.document_loaders.fauna import FaunaLoader\nsecret = \"\"\nquery = \"Item.all()\" # Fauna query. Assumes that the collection is called \"Item\"\nfield = \"text\" # The field that contains the page content. Assumes that the field is called \"text\"\nloader = FaunaLoader(query, field, secret)\ndocs = loader.lazy_load()\nfor value in docs:\n print(value)\nQuery with Pagination#\nYou get a after value if there are more data. You can get values after the curcor by passing in the after string in query.\nTo learn more following this link\nquery = \"\"\"\nItem.paginate(\"hs+DzoPOg ... aY1hOohozrV7A\")\nItem.all()\n\"\"\"\nloader = FaunaLoader(query, field, secret)\nprevious\nDuckDB\nnext\nFigma\n Contents\n \nQuery data example\nQuery with Pagination\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/fauna.html"}
+{"id": "ca0757c8b2da-0", "text": ".ipynb\n.pdf\nApify Dataset\n Contents \nPrerequisites\nAn example with question answering\nApify Dataset#\nApify Dataset is a scaleable append-only storage with sequential access built for storing structured web scraping results, such as a list of products or Google SERPs, and then export them to various formats like JSON, CSV, or Excel. Datasets are mainly used to save results of Apify Actors\u2014serverless cloud programs for varius web scraping, crawling, and data extraction use cases.\nThis notebook shows how to load Apify datasets to LangChain.\nPrerequisites#\nYou need to have an existing dataset on the Apify platform. If you don\u2019t have one, please first check out this notebook on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs.\n#!pip install apify-client\nFirst, import ApifyDatasetLoader into your source code:\nfrom langchain.document_loaders import ApifyDatasetLoader\nfrom langchain.document_loaders.base import Document\nThen provide a function that maps Apify dataset record fields to LangChain Document format.\nFor example, if your dataset items are structured like this:\n{\n \"url\": \"https://apify.com\",\n \"text\": \"Apify is the best web scraping and automation platform.\"\n}\nThe mapping function in the code below will convert them to LangChain Document format, so that you can use them further with any LLM model (e.g. for question answering).\nloader = ApifyDatasetLoader(\n dataset_id=\"your-dataset-id\",\n dataset_mapping_function=lambda dataset_item: Document(\n page_content=dataset_item[\"text\"], metadata={\"source\": dataset_item[\"url\"]}\n ),\n)\ndata = loader.load()\nAn example with question answering#\nIn this example, we use data from a dataset to answer a question.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/apify_dataset.html"}
+{"id": "ca0757c8b2da-1", "text": "In this example, we use data from a dataset to answer a question.\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders import ApifyDatasetLoader\nfrom langchain.indexes import VectorstoreIndexCreator\nloader = ApifyDatasetLoader(\n dataset_id=\"your-dataset-id\",\n dataset_mapping_function=lambda item: Document(\n page_content=item[\"text\"] or \"\", metadata={\"source\": item[\"url\"]}\n ),\n)\nindex = VectorstoreIndexCreator().from_loaders([loader])\nquery = \"What is Apify?\"\nresult = index.query_with_sources(query)\nprint(result[\"answer\"])\nprint(result[\"sources\"])\n Apify is a platform for developing, running, and sharing serverless cloud programs. It enables users to create web scraping and automation tools and publish them on the Apify platform.\nhttps://docs.apify.com/platform/actors, https://docs.apify.com/platform/actors/running/actors-in-store, https://docs.apify.com/platform/security, https://docs.apify.com/platform/actors/examples\nprevious\nAirbyte JSON\nnext\nAWS S3 Directory\n Contents\n \nPrerequisites\nAn example with question answering\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/apify_dataset.html"}
+{"id": "26981f610e66-0", "text": ".ipynb\n.pdf\nStripe\nStripe#\nStripe is an Irish-American financial services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.\nThis notebook covers how to load data from the Stripe REST API into a format that can be ingested into LangChain, along with example usage for vectorization.\nimport os\nfrom langchain.document_loaders import StripeLoader\nfrom langchain.indexes import VectorstoreIndexCreator\nThe Stripe API requires an access token, which can be found inside of the Stripe dashboard.\nThis document loader also requires a resource option which defines what data you want to load.\nFollowing resources are available:\nbalance_transations Documentation\ncharges Documentation\ncustomers Documentation\nevents Documentation\nrefunds Documentation\ndisputes Documentation\nstripe_loader = StripeLoader(\"charges\")\n# Create a vectorstore retriver from the loader\n# see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details\nindex = VectorstoreIndexCreator().from_loaders([stripe_loader])\nstripe_doc_retriever = index.vectorstore.as_retriever()\nprevious\nSpreedly\nnext\n2Markdown\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/stripe.html"}
+{"id": "b8d166bf1553-0", "text": ".ipynb\n.pdf\nPandas DataFrame\nPandas DataFrame#\nThis notebook goes over how to load data from a pandas DataFrame.\n#!pip install pandas\nimport pandas as pd\ndf = pd.read_csv('example_data/mlb_teams_2012.csv')\ndf.head()\nTeam\n\"Payroll (millions)\"\n\"Wins\"\n0\nNationals\n81.34\n98\n1\nReds\n82.20\n97\n2\nYankees\n197.96\n95\n3\nGiants\n117.62\n94\n4\nBraves\n83.31\n94\nfrom langchain.document_loaders import DataFrameLoader\nloader = DataFrameLoader(df, page_content_column=\"Team\")\nloader.load()\n[Document(page_content='Nationals', metadata={' \"Payroll (millions)\"': 81.34, ' \"Wins\"': 98}),\n Document(page_content='Reds', metadata={' \"Payroll (millions)\"': 82.2, ' \"Wins\"': 97}),\n Document(page_content='Yankees', metadata={' \"Payroll (millions)\"': 197.96, ' \"Wins\"': 95}),\n Document(page_content='Giants', metadata={' \"Payroll (millions)\"': 117.62, ' \"Wins\"': 94}),\n Document(page_content='Braves', metadata={' \"Payroll (millions)\"': 83.31, ' \"Wins\"': 94}),\n Document(page_content='Athletics', metadata={' \"Payroll (millions)\"': 55.37, ' \"Wins\"': 94}),\n Document(page_content='Rangers', metadata={' \"Payroll (millions)\"': 120.51, ' \"Wins\"': 93}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pandas_dataframe.html"}
+{"id": "b8d166bf1553-1", "text": "Document(page_content='Orioles', metadata={' \"Payroll (millions)\"': 81.43, ' \"Wins\"': 93}),\n Document(page_content='Rays', metadata={' \"Payroll (millions)\"': 64.17, ' \"Wins\"': 90}),\n Document(page_content='Angels', metadata={' \"Payroll (millions)\"': 154.49, ' \"Wins\"': 89}),\n Document(page_content='Tigers', metadata={' \"Payroll (millions)\"': 132.3, ' \"Wins\"': 88}),\n Document(page_content='Cardinals', metadata={' \"Payroll (millions)\"': 110.3, ' \"Wins\"': 88}),\n Document(page_content='Dodgers', metadata={' \"Payroll (millions)\"': 95.14, ' \"Wins\"': 86}),\n Document(page_content='White Sox', metadata={' \"Payroll (millions)\"': 96.92, ' \"Wins\"': 85}),\n Document(page_content='Brewers', metadata={' \"Payroll (millions)\"': 97.65, ' \"Wins\"': 83}),\n Document(page_content='Phillies', metadata={' \"Payroll (millions)\"': 174.54, ' \"Wins\"': 81}),\n Document(page_content='Diamondbacks', metadata={' \"Payroll (millions)\"': 74.28, ' \"Wins\"': 81}),\n Document(page_content='Pirates', metadata={' \"Payroll (millions)\"': 63.43, ' \"Wins\"': 79}),\n Document(page_content='Padres', metadata={' \"Payroll (millions)\"': 55.24, ' \"Wins\"': 76}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pandas_dataframe.html"}
+{"id": "b8d166bf1553-2", "text": "Document(page_content='Mariners', metadata={' \"Payroll (millions)\"': 81.97, ' \"Wins\"': 75}),\n Document(page_content='Mets', metadata={' \"Payroll (millions)\"': 93.35, ' \"Wins\"': 74}),\n Document(page_content='Blue Jays', metadata={' \"Payroll (millions)\"': 75.48, ' \"Wins\"': 73}),\n Document(page_content='Royals', metadata={' \"Payroll (millions)\"': 60.91, ' \"Wins\"': 72}),\n Document(page_content='Marlins', metadata={' \"Payroll (millions)\"': 118.07, ' \"Wins\"': 69}),\n Document(page_content='Red Sox', metadata={' \"Payroll (millions)\"': 173.18, ' \"Wins\"': 69}),\n Document(page_content='Indians', metadata={' \"Payroll (millions)\"': 78.43, ' \"Wins\"': 68}),\n Document(page_content='Twins', metadata={' \"Payroll (millions)\"': 94.08, ' \"Wins\"': 66}),\n Document(page_content='Rockies', metadata={' \"Payroll (millions)\"': 78.06, ' \"Wins\"': 64}),\n Document(page_content='Cubs', metadata={' \"Payroll (millions)\"': 88.19, ' \"Wins\"': 61}),\n Document(page_content='Astros', metadata={' \"Payroll (millions)\"': 60.65, ' \"Wins\"': 55})]\nprevious\nOpen Document Format (ODT)\nnext\nPDF\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pandas_dataframe.html"}
+{"id": "f0202d2c0106-0", "text": ".ipynb\n.pdf\nSubtitle\nSubtitle#\nThe SubRip file format is described on the Matroska multimedia container format website as \u201cperhaps the most basic of all subtitle formats.\u201d SubRip (SubRip Text) files are named with the extension .srt, and contain formatted lines of plain text in groups separated by a blank line. Subtitles are numbered sequentially, starting at 1. The timecode format used is hours:minutes:seconds,milliseconds with time units fixed to two zero-padded digits and fractions fixed to three zero-padded digits (00:00:00,000). The fractional separator used is the comma, since the program was written in France.\nHow to load data from subtitle (.srt) files\nPlease, download the example .srt file from here.\n!pip install pysrt\nfrom langchain.document_loaders import SRTLoader\nloader = SRTLoader(\"example_data/Star_Wars_The_Clone_Wars_S06E07_Crisis_at_the_Heart.srt\")\ndocs = loader.load()\ndocs[0].page_content[:100]\n'Corruption discovered\\nat the core of the Banking Clan! Reunited, Rush Clovis\\nand Senator A'\nprevious\nSitemap\nnext\nTelegram\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/subtitle.html"}
+{"id": "fbbfb116561e-0", "text": ".ipynb\n.pdf\nAirbyte JSON\nAirbyte JSON#\nAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.\nThis covers how to load any source from Airbyte into a local JSON file that can be read in as a document\nPrereqs:\nHave docker desktop installed\nSteps:\nClone Airbyte from GitHub - git clone https://github.com/airbytehq/airbyte.git\nSwitch into Airbyte directory - cd airbyte\nStart Airbyte - docker compose up\nIn your browser, just visit\u00a0http://localhost:8000. You will be asked for a username and password. By default, that\u2019s username\u00a0airbyte\u00a0and password\u00a0password.\nSetup any source you wish.\nSet destination as Local JSON, with specified destination path - lets say /json_data. Set up manual sync.\nRun the connection.\nTo see what files are create, you can navigate to: file:///tmp/airbyte_local\nFind your data and copy path. That path should be saved in the file variable below. It should start with /tmp/airbyte_local\nfrom langchain.document_loaders import AirbyteJSONLoader\n!ls /tmp/airbyte_local/json_data/\n_airbyte_raw_pokemon.jsonl\nloader = AirbyteJSONLoader('/tmp/airbyte_local/json_data/_airbyte_raw_pokemon.jsonl')\ndata = loader.load()\nprint(data[0].page_content[:500])\nabilities: \nability: \nname: blaze\nurl: https://pokeapi.co/api/v2/ability/66/\nis_hidden: False\nslot: 1\nability: \nname: solar-power\nurl: https://pokeapi.co/api/v2/ability/94/\nis_hidden: True\nslot: 3", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/airbyte_json.html"}
+{"id": "fbbfb116561e-1", "text": "is_hidden: True\nslot: 3\nbase_experience: 267\nforms: \nname: charizard\nurl: https://pokeapi.co/api/v2/pokemon-form/6/\ngame_indices: \ngame_index: 180\nversion: \nname: red\nurl: https://pokeapi.co/api/v2/version/1/\ngame_index: 180\nversion: \nname: blue\nurl: https://pokeapi.co/api/v2/version/2/\ngame_index: 180\nversion: \nn\nprevious\nYouTube transcripts\nnext\nApify Dataset\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/airbyte_json.html"}
+{"id": "a9dc2b45b5e7-0", "text": ".ipynb\n.pdf\nBlackboard\nBlackboard#\nBlackboard Learn (previously the Blackboard Learning Management System) is a web-based virtual learning environment and learning management system developed by Blackboard Inc. The software features course management, customizable open architecture, and scalable design that allows integration with student information systems and authentication protocols. It may be installed on local servers, hosted by Blackboard ASP Solutions, or provided as Software as a Service hosted on Amazon Web Services. Its main purposes are stated to include the addition of online elements to courses traditionally delivered face-to-face and development of completely online courses with few or no face-to-face meetings\nThis covers how to load data from a Blackboard Learn instance.\nThis loader is not compatible with all Blackboard courses. It is only\ncompatible with courses that use the new Blackboard interface.\nTo use this loader, you must have the BbRouter cookie. You can get this\ncookie by logging into the course and then copying the value of the\nBbRouter cookie from the browser\u2019s developer tools.\nfrom langchain.document_loaders import BlackboardLoader\nloader = BlackboardLoader(\n blackboard_course_url=\"https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1\",\n bbrouter=\"expires:12345...\",\n load_all_recursively=True,\n)\ndocuments = loader.load()\nprevious\nAzure Blob Storage File\nnext\nBlockchain\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/blackboard.html"}
+{"id": "ff7feea85cdd-0", "text": ".ipynb\n.pdf\nURL\n Contents \nURL\nSelenium URL Loader\nSetup\nPlaywright URL Loader\nSetup\nURL#\nThis covers how to load HTML documents from a list of URLs into a document format that we can use downstream.\n from langchain.document_loaders import UnstructuredURLLoader\nurls = [\n \"https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023\",\n \"https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-9-2023\"\n]\nloader = UnstructuredURLLoader(urls=urls)\ndata = loader.load()\nSelenium URL Loader#\nThis covers how to load HTML documents from a list of URLs using the SeleniumURLLoader.\nUsing selenium allows us to load pages that require JavaScript to render.\nSetup#\nTo use the SeleniumURLLoader, you will need to install selenium and unstructured.\nfrom langchain.document_loaders import SeleniumURLLoader\nurls = [\n \"https://www.youtube.com/watch?v=dQw4w9WgXcQ\",\n \"https://goo.gl/maps/NDSHwePEyaHMFGwh8\"\n]\nloader = SeleniumURLLoader(urls=urls)\ndata = loader.load()\nPlaywright URL Loader#\nThis covers how to load HTML documents from a list of URLs using the PlaywrightURLLoader.\nAs in the Selenium case, Playwright allows us to load pages that need JavaScript to render.\nSetup#\nTo use the PlaywrightURLLoader, you will need to install playwright and unstructured. Additionally, you will need to install the Playwright Chromium browser:\n# Install playwright\n!pip install \"playwright\"\n!pip install \"unstructured\"\n!playwright install", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/url.html"}
+{"id": "ff7feea85cdd-1", "text": "!pip install \"unstructured\"\n!playwright install\nfrom langchain.document_loaders import PlaywrightURLLoader\nurls = [\n \"https://www.youtube.com/watch?v=dQw4w9WgXcQ\",\n \"https://goo.gl/maps/NDSHwePEyaHMFGwh8\"\n]\nloader = PlaywrightURLLoader(urls=urls, remove_selectors=[\"header\", \"footer\"])\ndata = loader.load()\nprevious\nUnstructured File\nnext\nWebBaseLoader\n Contents\n \nURL\nSelenium URL Loader\nSetup\nPlaywright URL Loader\nSetup\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/url.html"}
+{"id": "81a91c62b252-0", "text": ".ipynb\n.pdf\nGutenberg\nGutenberg#\nProject Gutenberg is an online library of free eBooks.\nThis notebook covers how to load links to Gutenberg e-books into a document format that we can use downstream.\nfrom langchain.document_loaders import GutenbergLoader\nloader = GutenbergLoader('https://www.gutenberg.org/cache/epub/69972/pg69972.txt')\ndata = loader.load()\ndata[0].page_content[:300]\n'The Project Gutenberg eBook of The changed brides, by Emma Dorothy\\r\\n\\n\\nEliza Nevitte Southworth\\r\\n\\n\\n\\r\\n\\n\\nThis eBook is for the use of anyone anywhere in the United States and\\r\\n\\n\\nmost other parts of the world at no cost and with almost no restrictions\\r\\n\\n\\nwhatsoever. You may copy it, give it away or re-u'\ndata[0].metadata\n{'source': 'https://www.gutenberg.org/cache/epub/69972/pg69972.txt'}\nprevious\nCollege Confidential\nnext\nHacker News\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gutenberg.html"}
+{"id": "f0795d15fb2c-0", "text": ".ipynb\n.pdf\nPDF\n Contents \nUsing PyPDF\nUsing MathPix\nUsing Unstructured\nRetain Elements\nFetching remote PDFs using Unstructured\nUsing PyPDFium2\nUsing PDFMiner\nUsing PDFMiner to generate HTML text\nUsing PyMuPDF\nPyPDF Directory\nUsing pdfplumber\nPDF#\nPortable Document Format (PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems.\nThis covers how to load PDF documents into the Document format that we use downstream.\nUsing PyPDF#\nLoad PDF using pypdf into array of documents, where each document contains the page content and metadata with page number.\n!pip install pypdf\nfrom langchain.document_loaders import PyPDFLoader\nloader = PyPDFLoader(\"example_data/layout-parser-paper.pdf\")\npages = loader.load_and_split()\npages[0]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-1", "text": "Document(page_content='LayoutParser : A Uni\\x0ced Toolkit for Deep\\nLearning Based Document Image Analysis\\nZejiang Shen1( \\x00), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\\nLee4, Jacob Carlson3, and Weining Li5\\n1Allen Institute for AI\\nshannons@allenai.org\\n2Brown University\\nruochen zhang@brown.edu\\n3Harvard University\\nfmelissadell,jacob carlson g@fas.harvard.edu\\n4University of Washington\\nbcgl@cs.washington.edu\\n5University of Waterloo\\nw422li@uwaterloo.ca\\nAbstract. Recent advances in document image analysis (DIA) have been\\nprimarily driven by the application of neural networks. Ideally, research\\noutcomes could be easily deployed in production and extended for further\\ninvestigation. However, various factors like loosely organized codebases\\nand sophisticated model con\\x0cgurations complicate the easy reuse of im-\\nportant innovations by a wide audience. Though there have been on-going\\ne\\x0borts to improve reusability and simplify deep learning (DL) model\\ndevelopment in disciplines like natural language processing and computer\\nvision, none of them are optimized for challenges in the domain of DIA.\\nThis represents a major gap in the existing toolkit, as DIA is central to\\nacademic research across a wide range of disciplines in the social sciences\\nand humanities. This paper introduces LayoutParser , an open-source\\nlibrary for streamlining the usage of DL in DIA research and applica-\\ntions. The core LayoutParser library comes with a set of simple and\\nintuitive interfaces for applying and customizing DL models for layout de-\\ntection, character recognition, and many other document processing tasks.\\nTo promote extensibility, LayoutParser also incorporates a community\\nplatform for sharing both pre-trained models and full document digiti-\\nzation", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-2", "text": "also incorporates a community\\nplatform for sharing both pre-trained models and full document digiti-\\nzation pipelines. We demonstrate that LayoutParser is helpful for both\\nlightweight and large-scale digitization pipelines in real-word use cases.\\nThe library is publicly available at https://layout-parser.github.io .\\nKeywords: Document Image Analysis \u00b7Deep Learning \u00b7Layout Analysis\\n\u00b7Character Recognition \u00b7Open Source library \u00b7Toolkit.\\n1 Introduction\\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\\ndocument image analysis (DIA) tasks including document image classi\\x0ccation [ 11,arXiv:2103.15348v2 [cs.CV] 21 Jun 2021', metadata={'source': 'example_data/layout-parser-paper.pdf', 'page': 0})", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-3", "text": "An advantage of this approach is that documents can be retrieved with page numbers.\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nOpenAI API Key: \u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\nfrom langchain.vectorstores import FAISS\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfaiss_index = FAISS.from_documents(pages, OpenAIEmbeddings())\ndocs = faiss_index.similarity_search(\"How will the community be engaged?\", k=2)\nfor doc in docs:\n print(str(doc.metadata[\"page\"]) + \":\", doc.page_content[:300])\n9: 10 Z. Shen et al.\nFig. 4: Illustration of (a) the original historical Japanese document with layout\ndetection results and (b) a recreated version of the document image that achieves\nmuch better character recognition recall. The reorganization algorithm rearranges\nthe tokens based on the their detect\n3: 4 Z. Shen et al.\nEfficient Data AnnotationC u s t o m i z e d M o d e l T r a i n i n gModel Cust omizationDI A Model HubDI A Pipeline SharingCommunity PlatformLa y out Detection ModelsDocument Images \nT h e C o r e L a y o u t P a r s e r L i b r a r yOCR ModuleSt or age & VisualizationLa y ou\nUsing MathPix#\nInspired by Daniel Gross\u2019s https://gist.github.com/danielgross/3ab4104e14faccc12b49200843adab21\nfrom langchain.document_loaders import MathpixPDFLoader\nloader = MathpixPDFLoader(\"example_data/layout-parser-paper.pdf\")\ndata = loader.load()", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-4", "text": "data = loader.load()\nUsing Unstructured#\nfrom langchain.document_loaders import UnstructuredPDFLoader\nloader = UnstructuredPDFLoader(\"example_data/layout-parser-paper.pdf\")\ndata = loader.load()\nRetain Elements#\nUnder the hood, Unstructured creates different \u201celements\u201d for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode=\"elements\".\nloader = UnstructuredPDFLoader(\"example_data/layout-parser-paper.pdf\", mode=\"elements\")\ndata = loader.load()\ndata[0]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-5", "text": "Document(page_content='LayoutParser: A Uni\ufb01ed Toolkit for Deep\\nLearning Based Document Image Analysis\\nZejiang Shen1 (\ufffd), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\\nLee4, Jacob Carlson3, and Weining Li5\\n1 Allen Institute for AI\\nshannons@allenai.org\\n2 Brown University\\nruochen zhang@brown.edu\\n3 Harvard University\\n{melissadell,jacob carlson}@fas.harvard.edu\\n4 University of Washington\\nbcgl@cs.washington.edu\\n5 University of Waterloo\\nw422li@uwaterloo.ca\\nAbstract. Recent advances in document image analysis (DIA) have been\\nprimarily driven by the application of neural networks. Ideally, research\\noutcomes could be easily deployed in production and extended for further\\ninvestigation. However, various factors like loosely organized codebases\\nand sophisticated model con\ufb01gurations complicate the easy reuse of im-\\nportant innovations by a wide audience. Though there have been on-going\\ne\ufb00orts to improve reusability and simplify deep learning (DL) model\\ndevelopment in disciplines like natural language processing and computer\\nvision, none of them are optimized for challenges in the domain of DIA.\\nThis represents a major gap in the existing toolkit, as DIA is central to\\nacademic research across a wide range of disciplines in the social sciences\\nand humanities. This paper introduces LayoutParser, an open-source\\nlibrary for streamlining the usage of DL in DIA research and applica-\\ntions. The core LayoutParser library comes with a set of simple and\\nintuitive interfaces for applying and customizing DL models for layout de-\\ntection, character recognition, and many other document processing tasks.\\nTo promote extensibility, LayoutParser also incorporates a community\\nplatform for sharing both pre-trained models and full document digiti-\\nzation pipelines. We demonstrate that", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-6", "text": "for sharing both pre-trained models and full document digiti-\\nzation pipelines. We demonstrate that LayoutParser is helpful for both\\nlightweight and large-scale digitization pipelines in real-word use cases.\\nThe library is publicly available at https://layout-parser.github.io.\\nKeywords: Document Image Analysis \u00b7 Deep Learning \u00b7 Layout Analysis\\n\u00b7 Character Recognition \u00b7 Open Source library \u00b7 Toolkit.\\n1\\nIntroduction\\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\\ndocument image analysis (DIA) tasks including document image classi\ufb01cation [11,\\narXiv:2103.15348v2 [cs.CV] 21 Jun 2021\\n', lookup_str='', metadata={'file_path': 'example_data/layout-parser-paper.pdf', 'page_number': 1, 'total_pages': 16, 'format': 'PDF 1.5', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'creator': 'LaTeX with hyperref', 'producer': 'pdfTeX-1.40.21', 'creationDate': 'D:20210622012710Z', 'modDate': 'D:20210622012710Z', 'trapped': '', 'encryption': None}, lookup_index=0)", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-7", "text": "Fetching remote PDFs using Unstructured#\nThis covers how to load online pdfs into a document format that we can use downstream. This can be used for various online pdf sites such as https://open.umn.edu/opentextbooks/textbooks/ and https://arxiv.org/archive/\nNote: all other pdf loaders can also be used to fetch remote PDFs, but OnlinePDFLoader is a legacy function, and works specifically with UnstructuredPDFLoader.\nfrom langchain.document_loaders import OnlinePDFLoader\nloader = OnlinePDFLoader(\"https://arxiv.org/pdf/2302.03803.pdf\")\ndata = loader.load()\nprint(data)", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-8", "text": "[Document(page_content='A WEAK ( k, k ) -LEFSCHETZ THEOREM FOR PROJECTIVE TORIC ORBIFOLDS\\n\\nWilliam D. Montoya\\n\\nInstituto de Matem\u00b4atica, Estat\u00b4\u0131stica e Computa\u00b8c\u02dcao Cient\u00b4\u0131\ufb01ca,\\n\\nIn [3] we proved that, under suitable conditions, on a very general codimension s quasi- smooth intersection subvariety X in a projective toric orbifold P d \u03a3 with d + s = 2 ( k + 1 ) the Hodge conjecture holds, that is, every ( p, p ) -cohomology class, under the Poincar\u00b4e duality is a rational linear combination of fundamental classes of algebraic subvarieties of X . The proof of the above-mentioned result relies, for p \u2260 d + 1 \u2212 s , on a Lefschetz\\n\\nKeywords: (1,1)- Lefschetz theorem, Hodge conjecture, toric varieties, complete intersection Email: wmontoya@ime.unicamp.br\\n\\ntheorem ([7]) and the Hard Lefschetz theorem for projective orbifolds ([11]). When p = d + 1 \u2212 s the proof relies on the Cayley trick, a trick which associates to X a quasi-smooth hypersurface Y in a projective vector bundle, and the Cayley Proposition (4.3) which gives an isomorphism of some primitive cohomologies (4.2) of X and Y . The Cayley trick, following the philosophy of Mavlyutov in [7], reduces results known for quasi-smooth hypersurfaces to quasi-smooth intersection subvarieties. The idea in this paper goes the other way around, we translate some results for quasi-smooth intersection subvarieties to\\n\\nAcknowledgement. I thank", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-9", "text": "we translate some results for quasi-smooth intersection subvarieties to\\n\\nAcknowledgement. I thank Prof. Ugo Bruzzo and Tiago Fonseca for useful discus- sions. I also acknowledge support from FAPESP postdoctoral grant No. 2019/23499-7.\\n\\nLet M be a free abelian group of rank d , let N = Hom ( M, Z ) , and N R = N \u2297 Z R .\\n\\nif there exist k linearly independent primitive elements e\\n\\n, . . . , e k \u2208 N such that \u03c3 = { \u00b5\\n\\ne\\n\\n+ \u22ef + \u00b5 k e k } . \u2022 The generators e i are integral if for every i and any nonnegative rational number \u00b5 the product \u00b5e i is in N only if \u00b5 is an integer. \u2022 Given two rational simplicial cones \u03c3 , \u03c3 \u2032 one says that \u03c3 \u2032 is a face of \u03c3 ( \u03c3 \u2032 < \u03c3 ) if the set of integral generators of \u03c3 \u2032 is a subset of the set of integral generators of \u03c3 . \u2022 A \ufb01nite set \u03a3 = { \u03c3\\n\\n, . . . , \u03c3 t } of rational simplicial cones is called a rational simplicial complete d -dimensional fan if:\\n\\nall faces of cones in \u03a3 are in \u03a3 ;\\n\\nif \u03c3, \u03c3 \u2032 \u2208 \u03a3 then \u03c3 \u2229 \u03c3 \u2032 < \u03c3 and \u03c3 \u2229 \u03c3 \u2032 < \u03c3 \u2032 ;\\n\\nN R = \u03c3\\n\\n\u222a \u22c5 \u22c5 \u22c5 \u222a \u03c3 t .\\n\\nA rational simplicial complete d -dimensional fan \u03a3 de\ufb01nes a d -dimensional toric variety P d \u03a3 having only orbifold singularities which we assume to be projective. Moreover, T \u2236 = N \u2297 Z C", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-10", "text": "which we assume to be projective. Moreover, T \u2236 = N \u2297 Z C \u2217 \u2243 ( C \u2217 ) d is the torus action on P d \u03a3 . We denote by \u03a3 ( i ) the i -dimensional cones\\n\\nFor a cone \u03c3 \u2208 \u03a3, \u02c6 \u03c3 is the set of 1-dimensional cone in \u03a3 that are not contained in \u03c3\\n\\nand x \u02c6 \u03c3 \u2236 = \u220f \u03c1 \u2208 \u02c6 \u03c3 x \u03c1 is the associated monomial in S .\\n\\nDe\ufb01nition 2.2. The irrelevant ideal of P d \u03a3 is the monomial ideal B \u03a3 \u2236 =< x \u02c6 \u03c3 \u2223 \u03c3 \u2208 \u03a3 > and the zero locus Z ( \u03a3 ) \u2236 = V ( B \u03a3 ) in the a\ufb03ne space A d \u2236 = Spec ( S ) is the irrelevant locus.\\n\\nProposition 2.3 (Theorem 5.1.11 [5]) . The toric variety P d \u03a3 is a categorical quotient A d \u2216 Z ( \u03a3 ) by the group Hom ( Cl ( \u03a3 ) , C \u2217 ) and the group action is induced by the Cl ( \u03a3 ) - grading of S .\\n\\nNow we give a brief introduction to complex orbifolds and we mention the needed theorems for the next section. Namely: de Rham theorem and Dolbeault theorem for complex orbifolds.\\n\\nDe\ufb01nition 2.4. A complex orbifold of complex dimension d is a singular complex space whose singularities are locally isomorphic to quotient singularities C d / G , for \ufb01nite sub- groups G \u2282 Gl ( d, C ) .\\n\\nDe\ufb01nition 2.5. A di\ufb00erential form", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-11", "text": "2.5. A di\ufb00erential form on a complex orbifold Z is de\ufb01ned locally at z \u2208 Z as a G -invariant di\ufb00erential form on C d where G \u2282 Gl ( d, C ) and Z is locally isomorphic to d\\n\\nRoughly speaking the local geometry of orbifolds reduces to local G -invariant geometry.\\n\\nWe have a complex of di\ufb00erential forms ( A \u25cf ( Z ) , d ) and a double complex ( A \u25cf , \u25cf ( Z ) , \u2202, \u00af \u2202 ) of bigraded di\ufb00erential forms which de\ufb01ne the de Rham and the Dolbeault cohomology groups (for a \ufb01xed p \u2208 N ) respectively:\\n\\n(1,1)-Lefschetz theorem for projective toric orbifolds\\n\\nDe\ufb01nition 3.1. A subvariety X \u2282 P d \u03a3 is quasi-smooth if V ( I X ) \u2282 A #\u03a3 ( 1 ) is smooth outside\\n\\nExample 3.2 . Quasi-smooth hypersurfaces or more generally quasi-smooth intersection sub-\\n\\nExample 3.2 . Quasi-smooth hypersurfaces or more generally quasi-smooth intersection sub- varieties are quasi-smooth subvarieties (see [2] or [7] for more details).\\n\\nRemark 3.3 . Quasi-smooth subvarieties are suborbifolds of P d \u03a3 in the sense of Satake in [8]. Intuitively speaking they are subvarieties whose only singularities come from the ambient\\n\\nProof. From the exponential short exact sequence\\n\\nwe have a long exact sequence in cohomology\\n\\nH 1 (O \u2217 X ) \u2192 H 2 ( X, Z )", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-12", "text": "1 (O \u2217 X ) \u2192 H 2 ( X, Z ) \u2192 H 2 (O X ) \u2243 H 0 , 2 ( X )\\n\\nwhere the last isomorphisms is due to Steenbrink in [9]. Now, it is enough to prove the commutativity of the next diagram\\n\\nwhere the last isomorphisms is due to Steenbrink in [9]. Now,\\n\\nH 2 ( X, Z ) / / H 2 ( X, O X ) \u2243 Dolbeault H 2 ( X, C ) deRham \u2243 H 2 dR ( X, C ) / / H 0 , 2 \u00af \u2202 ( X )\\n\\nof the proof follows as the ( 1 , 1 ) -Lefschetz theorem in [6].\\n\\nRemark 3.5 . For k = 1 and P d \u03a3 as the projective space, we recover the classical ( 1 , 1 ) - Lefschetz theorem.\\n\\nBy the Hard Lefschetz Theorem for projective orbifolds (see [11] for details) we\\n\\nBy the Hard Lefschetz Theorem for projective orbifolds (see [11] for details) we get an isomorphism of cohomologies :\\n\\ngiven by the Lefschetz morphism and since it is a morphism of Hodge structures, we have:\\n\\nH 1 , 1 ( X, Q ) \u2243 H dim X \u2212 1 , dim X \u2212 1 ( X, Q )\\n\\nCorollary 3.6. If the dimension of X is 1 , 2 or 3 . The Hodge conjecture holds on X\\n\\nProof. If the dim C X = 1 the result is clear by the", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-13", "text": "on X\\n\\nProof. If the dim C X = 1 the result is clear by the Hard Lefschetz theorem for projective orbifolds. The dimension 2 and 3 cases are covered by Theorem 3.5 and the Hard Lefschetz.\\n\\nCayley trick and Cayley proposition\\n\\nThe Cayley trick is a way to associate to a quasi-smooth intersection subvariety a quasi- smooth hypersurface. Let L 1 , . . . , L s be line bundles on P d \u03a3 and let \u03c0 \u2236 P ( E ) \u2192 P d \u03a3 be the projective space bundle associated to the vector bundle E = L 1 \u2295 \u22ef \u2295 L s . It is known that P ( E ) is a ( d + s \u2212 1 ) -dimensional simplicial toric variety whose fan depends on the degrees of the line bundles and the fan \u03a3. Furthermore, if the Cox ring, without considering the grading, of P d \u03a3 is C [ x 1 , . . . , x m ] then the Cox ring of P ( E ) is\\n\\nMoreover for X a quasi-smooth intersection subvariety cut o\ufb00 by f 1 , . . . , f s with deg ( f i ) = [ L i ] we relate the hypersurface Y cut o\ufb00 by F = y 1 f 1 + \u22c5 \u22c5 \u22c5 + y s f s which turns out to be quasi-smooth. For more details see Section 2 in [7].\\n\\nWe will denote P ( E ) as P d + s \u2212 1 \u03a3 ,X to keep track of its relation with X and P d \u03a3 .\\n\\nThe following is a key remark.\\n\\nRemark 4.1 . There is a morphism \u03b9 \u2236 X \u2192 Y", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-14", "text": "4.1 . There is a morphism \u03b9 \u2236 X \u2192 Y \u2282 P d + s \u2212 1 \u03a3 ,X . Moreover every point z \u2236 = ( x, y ) \u2208 Y with y \u2260 0 has a preimage. Hence for any subvariety W = V ( I W ) \u2282 X \u2282 P d \u03a3 there exists W \u2032 \u2282 Y \u2282 P d + s \u2212 1 \u03a3 ,X such that \u03c0 ( W \u2032 ) = W , i.e., W \u2032 = { z = ( x, y ) \u2223 x \u2208 W } .\\n\\nFor X \u2282 P d \u03a3 a quasi-smooth intersection variety the morphism in cohomology induced by the inclusion i \u2217 \u2236 H d \u2212 s ( P d \u03a3 , C ) \u2192 H d \u2212 s ( X, C ) is injective by Proposition 1.4 in [7].\\n\\nDe\ufb01nition 4.2. The primitive cohomology of H d \u2212 s prim ( X ) is the quotient H d \u2212 s ( X, C )/ i \u2217 ( H d \u2212 s ( P d \u03a3 , C )) and H d \u2212 s prim ( X, Q ) with rational coe\ufb03cients.\\n\\nH d \u2212 s ( P d \u03a3 , C ) and H d \u2212 s ( X, C ) have pure Hodge structures, and the morphism i \u2217 is com- patible with them, so that H d \u2212 s prim ( X ) gets a pure Hodge structure.\\n\\nThe next Proposition is the Cayley proposition.\\n\\nProposition 4.3. [Proposition 2.3 in [3] ] Let X = X 1 \u2229\u22c5 \u22c5 \u22c5\u2229 X s be a quasi-smooth intersec- tion", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-15", "text": "\u22c5 \u22c5\u2229 X s be a quasi-smooth intersec- tion subvariety in P d \u03a3 cut o\ufb00 by homogeneous polynomials f 1 . . . f s . Then for p \u2260 d + s \u2212 1 2 , d + s \u2212 3 2\\n\\nRemark 4.5 . The above isomorphisms are also true with rational coe\ufb03cients since H \u25cf ( X, C ) = H \u25cf ( X, Q ) \u2297 Q C . See the beginning of Section 7.1 in [10] for more details.\\n\\nTheorem 5.1. Let Y = { F = y 1 f 1 + \u22ef + y k f k = 0 } \u2282 P 2 k + 1 \u03a3 ,X be the quasi-smooth hypersurface associated to the quasi-smooth intersection surface X = X f 1 \u2229 \u22c5 \u22c5 \u22c5 \u2229 X f k \u2282 P k + 2 \u03a3 . Then on Y the Hodge conjecture holds.\\n\\nthe Hodge conjecture holds.\\n\\nProof. If H k,k prim ( X, Q ) = 0 we are done. So let us assume H k,k prim ( X, Q ) \u2260 0. By the Cayley proposition H k,k prim ( Y, Q ) \u2243 H 1 , 1 prim ( X, Q ) and by the ( 1 , 1 ) -Lefschetz theorem for projective\\n\\ntoric orbifolds there is a non-zero algebraic basis \u03bb C 1 , . . . , \u03bb C n with rational coe\ufb03cients of H 1 , 1 prim ( X, Q ) , that is, there are n \u2236 = h 1 , 1 prim ( X, Q )", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-16", "text": "is, there are n \u2236 = h 1 , 1 prim ( X, Q ) algebraic curves C 1 , . . . , C n in X such that under the Poincar\u00b4e duality the class in homology [ C i ] goes to \u03bb C i , [ C i ] \u21a6 \u03bb C i . Recall that the Cox ring of P k + 2 is contained in the Cox ring of P 2 k + 1 \u03a3 ,X without considering the grading. Considering the grading we have that if \u03b1 \u2208 Cl ( P k + 2 \u03a3 ) then ( \u03b1, 0 ) \u2208 Cl ( P 2 k + 1 \u03a3 ,X ) . So the polynomials de\ufb01ning C i \u2282 P k + 2 \u03a3 can be interpreted in P 2 k + 1 X, \u03a3 but with di\ufb00erent degree. Moreover, by Remark 4.1 each C i is contained in Y = { F = y 1 f 1 + \u22ef + y k f k = 0 } and\\n\\nfurthermore it has codimension k .\\n\\nClaim: { C i } ni = 1 is a basis of prim ( ) . It is enough to prove that \u03bb C i is di\ufb00erent from zero in H k,k prim ( Y, Q ) or equivalently that the cohomology classes { \u03bb C i } ni = 1 do not come from the ambient space. By contradiction, let us assume that there exists a j and C \u2282 P 2 k + 1 \u03a3 ,X such that \u03bb C \u2208 H k,k ( P 2 k + 1 \u03a3 ,X , Q ) with i \u2217 ( \u03bb C ) = \u03bb C j or in terms of homology there exists a ( k + 2 ) -dimensional algebraic subvariety", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-17", "text": "of homology there exists a ( k + 2 ) -dimensional algebraic subvariety V \u2282 P 2 k + 1 \u03a3 ,X such that V \u2229 Y = C j so they are equal as a homology class of P 2 k + 1 \u03a3 ,X ,i.e., [ V \u2229 Y ] = [ C j ] . It is easy to check that \u03c0 ( V ) \u2229 X = C j as a subvariety of P k + 2 \u03a3 where \u03c0 \u2236 ( x, y ) \u21a6 x . Hence [ \u03c0 ( V ) \u2229 X ] = [ C j ] which is equivalent to say that \u03bb C j comes from P k + 2 \u03a3 which contradicts the choice of [ C j ] .\\n\\nRemark 5.2 . Into the proof of the previous theorem, the key fact was that on X the Hodge conjecture holds and we translate it to Y by contradiction. So, using an analogous argument we have:\\n\\nargument we have:\\n\\nProposition 5.3. Let Y = { F = y 1 f s +\u22ef+ y s f s = 0 } \u2282 P 2 k + 1 \u03a3 ,X be the quasi-smooth hypersurface associated to a quasi-smooth intersection subvariety X = X f 1 \u2229 \u22c5 \u22c5 \u22c5 \u2229 X f s \u2282 P d \u03a3 such that d + s = 2 ( k + 1 ) . If the Hodge conjecture holds on X then it holds as well on Y .\\n\\nCorollary 5.4. If the dimension of Y is 2 s \u2212 1 , 2 s or 2 s + 1 then the Hodge conjecture holds on Y .\\n\\nProof. By Proposition 5.3 and", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-18", "text": "Hodge conjecture holds on Y .\\n\\nProof. By Proposition 5.3 and Corollary 3.6.\\n\\n[\\n\\n] Angella, D. Cohomologies of certain orbifolds. Journal of Geometry and Physics\\n\\n(\\n\\n),\\n\\n\u2013\\n\\n[\\n\\n] Batyrev, V. V., and Cox, D. A. On the Hodge structure of projective hypersur- faces in toric varieties. Duke Mathematical Journal\\n\\n,\\n\\n(Aug\\n\\n). [\\n\\n] Bruzzo, U., and Montoya, W. On the Hodge conjecture for quasi-smooth in- tersections in toric varieties. S\u02dcao Paulo J. Math. Sci. Special Section: Geometry in Algebra and Algebra in Geometry (\\n\\n). [\\n\\n] Caramello Jr, F. C. Introduction to orbifolds. a\\n\\niv:\\n\\nv\\n\\n(\\n\\n). [\\n\\n] Cox, D., Little, J., and Schenck, H. Toric varieties, vol.\\n\\nAmerican Math- ematical Soc.,\\n\\n[\\n\\n] Griffiths, P., and Harris, J. Principles of Algebraic Geometry. John Wiley & Sons, Ltd,\\n\\n[\\n\\n] Mavlyutov, A. R. Cohomology of complete intersections in toric varieties. Pub- lished in Paci\ufb01c J. of Math.\\n\\nNo.\\n\\n(\\n\\n),\\n\\n\u2013\\n\\n[\\n\\n] Satake, I. On a Generalization of the Notion of Manifold. Proceedings of the National Academy of Sciences of the United States of America\\n\\n,\\n\\n(\\n\\n),\\n\\n\u2013\\n\\n[\\n\\n] Steenbrink, J. H. M. Intersection form for", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-19", "text": "Steenbrink, J. H. M. Intersection form for quasi-homogeneous singularities. Com- positio Mathematica\\n\\n,\\n\\n(\\n\\n),\\n\\n\u2013\\n\\n[\\n\\n] Voisin, C. Hodge Theory and Complex Algebraic Geometry I, vol.\\n\\nof Cambridge Studies in Advanced Mathematics . Cambridge University Press,\\n\\n[\\n\\n] Wang, Z. Z., and Zaffran, D. A remark on the Hard Lefschetz theorem for K\u00a8ahler orbifolds. Proceedings of the American Mathematical Society\\n\\n,\\n\\n(Aug\\n\\n).\\n\\n[2] Batyrev, V. V., and Cox, D. A. On the Hodge structure of projective hypersur- faces in toric varieties. Duke Mathematical Journal 75, 2 (Aug 1994).\\n\\n[\\n\\n] Bruzzo, U., and Montoya, W. On the Hodge conjecture for quasi-smooth in- tersections in toric varieties. S\u02dcao Paulo J. Math. Sci. Special Section: Geometry in Algebra and Algebra in Geometry (\\n\\n).\\n\\n[3] Bruzzo, U., and Montoya, W. On the Hodge conjecture for quasi-smooth in- tersections in toric varieties. S\u02dcao Paulo J. Math. Sci. Special Section: Geometry in Algebra and Algebra in Geometry (2021).\\n\\nA. R. Cohomology of complete intersections in toric varieties. Pub-', lookup_str='', metadata={'source': '/var/folders/ph/hhm7_zyx4l13k3v8z02dwp1w0000gn/T/tmpgq0ckaja/online_file.pdf'}, lookup_index=0)]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-20", "text": "Using PyPDFium2#\nfrom langchain.document_loaders import PyPDFium2Loader\nloader = PyPDFium2Loader(\"example_data/layout-parser-paper.pdf\")\ndata = loader.load()\nUsing PDFMiner#\nfrom langchain.document_loaders import PDFMinerLoader\nloader = PDFMinerLoader(\"example_data/layout-parser-paper.pdf\")\ndata = loader.load()\nUsing PDFMiner to generate HTML text#\nThis can be helpful for chunking texts semantically into sections as the output html content can be parsed via BeautifulSoup to get more structured and rich information about font size, page numbers, pdf headers/footers, etc.\nfrom langchain.document_loaders import PDFMinerPDFasHTMLLoader\nloader = PDFMinerPDFasHTMLLoader(\"example_data/layout-parser-paper.pdf\")\ndata = loader.load()[0] # entire pdf is loaded as a single Document\nfrom bs4 import BeautifulSoup\nsoup = BeautifulSoup(data.page_content,'html.parser')\ncontent = soup.find_all('div')\nimport re\ncur_fs = None\ncur_text = ''\nsnippets = [] # first collect all snippets that have the same font size\nfor c in content:\n sp = c.find('span')\n if not sp:\n continue\n st = sp.get('style')\n if not st:\n continue\n fs = re.findall('font-size:(\\d+)px',st)\n if not fs:\n continue\n fs = int(fs[0])\n if not cur_fs:\n cur_fs = fs\n if fs == cur_fs:\n cur_text += c.text\n else:\n snippets.append((cur_text,cur_fs))\n cur_fs = fs\n cur_text = c.text\nsnippets.append((cur_text,cur_fs))", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-21", "text": "cur_text = c.text\nsnippets.append((cur_text,cur_fs))\n# Note: The above logic is very straightforward. One can also add more strategies such as removing duplicate snippets (as\n# headers/footers in a PDF appear on multiple pages so if we find duplicatess safe to assume that it is redundant info)\nfrom langchain.docstore.document import Document\ncur_idx = -1\nsemantic_snippets = []\n# Assumption: headings have higher font size than their respective content\nfor s in snippets:\n # if current snippet's font size > previous section's heading => it is a new heading\n if not semantic_snippets or s[1] > semantic_snippets[cur_idx].metadata['heading_font']:\n metadata={'heading':s[0], 'content_font': 0, 'heading_font': s[1]}\n metadata.update(data.metadata)\n semantic_snippets.append(Document(page_content='',metadata=metadata))\n cur_idx += 1\n continue\n \n # if current snippet's font size <= previous section's content => content belongs to the same section (one can also create\n # a tree like structure for sub sections if needed but that may require some more thinking and may be data specific)\n if not semantic_snippets[cur_idx].metadata['content_font'] or s[1] <= semantic_snippets[cur_idx].metadata['content_font']:\n semantic_snippets[cur_idx].page_content += s[0]\n semantic_snippets[cur_idx].metadata['content_font'] = max(s[1], semantic_snippets[cur_idx].metadata['content_font'])\n continue\n \n # if current snippet's font size > previous section's content but less tha previous section's heading than also make a new", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-22", "text": "# section (e.g. title of a pdf will have the highest font size but we don't want it to subsume all sections)\n metadata={'heading':s[0], 'content_font': 0, 'heading_font': s[1]}\n metadata.update(data.metadata)\n semantic_snippets.append(Document(page_content='',metadata=metadata))\n cur_idx += 1\nsemantic_snippets[4]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-23", "text": "Document(page_content='Recently, various DL models and datasets have been developed for layout analysis\\ntasks. The dhSegment [22] utilizes fully convolutional networks [20] for segmen-\\ntation tasks on historical documents. Object detection-based methods like Faster\\nR-CNN [28] and Mask R-CNN [12] are used for identifying document elements [38]\\nand detecting tables [30, 26]. Most recently, Graph Neural Networks [29] have also\\nbeen used in table detection [27]. However, these models are usually implemented\\nindividually and there is no uni\ufb01ed framework to load and use such models.\\nThere has been a surge of interest in creating open-source tools for document\\nimage processing: a search of document image analysis in Github leads to 5M\\nrelevant code pieces 6; yet most of them rely on traditional rule-based methods\\nor provide limited functionalities. The closest prior research to our work is the\\nOCR-D project7, which also tries to build a complete toolkit for DIA. However,\\nsimilar to the platform developed by Neudecker et al. [21], it is designed for\\nanalyzing historical documents, and provides no supports for recent DL models.\\nThe DocumentLayoutAnalysis project8 focuses on processing born-digital PDF\\ndocuments via analyzing the stored PDF data. Repositories like DeepLayout9\\nand Detectron2-PubLayNet10 are individual deep learning models trained on\\nlayout analysis datasets without support for the full DIA pipeline. The Document\\nAnalysis and Exploitation (DAE) platform [15] and the DeepDIVA project [2]\\naim to improve the reproducibility of DIA methods (or DL models), yet they\\nare not actively maintained. OCR engines like Tesseract [14], easyOCR11 and\\npaddleOCR12 usually do not come with comprehensive functionalities for other\\nDIA tasks like layout analysis.\\nRecent", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-24", "text": "usually do not come with comprehensive functionalities for other\\nDIA tasks like layout analysis.\\nRecent years have also seen numerous e\ufb00orts to create libraries for promoting\\nreproducibility and reusability in the \ufb01eld of DL. Libraries like Dectectron2 [35],\\n6 The number shown is obtained by specifying the search type as \u2018code\u2019.\\n7 https://ocr-d.de/en/about\\n8 https://github.com/BobLd/DocumentLayoutAnalysis\\n9 https://github.com/leonlulu/DeepLayout\\n10 https://github.com/hpanwar08/detectron2\\n11 https://github.com/JaidedAI/EasyOCR\\n12 https://github.com/PaddlePaddle/PaddleOCR\\n4\\nZ. Shen et al.\\nFig. 1: The overall architecture of LayoutParser. For an input document image,\\nthe core LayoutParser library provides a set of o\ufb00-the-shelf tools for layout\\ndetection, OCR, visualization, and storage, backed by a carefully designed layout\\ndata structure. LayoutParser also supports high level customization via e\ufb03cient\\nlayout annotation and model training functions. These improve model accuracy\\non the target samples. The community platform enables the easy sharing of DIA\\nmodels and whole digitization pipelines to promote reusability and reproducibility.\\nA collection of detailed documentation, tutorials and exemplar projects make\\nLayoutParser easy to learn and use.\\nAllenNLP [8] and transformers [34] have provided the community with complete\\nDL-based support for developing and deploying models for general computer\\nvision and natural language processing problems. LayoutParser, on the other\\nhand, specializes speci\ufb01cally in DIA tasks. LayoutParser is also equipped with a\\ncommunity platform inspired by established model hubs such as Torch Hub [23]\\nand TensorFlow Hub [1]. It enables the sharing of pretrained models as", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-25", "text": "Torch Hub [23]\\nand TensorFlow Hub [1]. It enables the sharing of pretrained models as well as\\nfull document processing pipelines that are unique to DIA tasks.\\nThere have been a variety of document data collections to facilitate the\\ndevelopment of DL models. Some examples include PRImA [3](magazine layouts),\\nPubLayNet [38](academic paper layouts), Table Bank [18](tables in academic\\npapers), Newspaper Navigator Dataset [16, 17](newspaper \ufb01gure layouts) and\\nHJDataset [31](historical Japanese document layouts). A spectrum of models\\ntrained on these datasets are currently available in the LayoutParser model zoo\\nto support di\ufb00erent use cases.\\n', metadata={'heading': '2 Related Work\\n', 'content_font': 9, 'heading_font': 11, 'source': 'example_data/layout-parser-paper.pdf'})", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-26", "text": "Using PyMuPDF#\nThis is the fastest of the PDF parsing options, and contains detailed metadata about the PDF and its pages, as well as returns one document per page.\nfrom langchain.document_loaders import PyMuPDFLoader\nloader = PyMuPDFLoader(\"example_data/layout-parser-paper.pdf\")\ndata = loader.load()\ndata[0]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-27", "text": "Document(page_content='LayoutParser: A Uni\ufb01ed Toolkit for Deep\\nLearning Based Document Image Analysis\\nZejiang Shen1 (\ufffd), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\\nLee4, Jacob Carlson3, and Weining Li5\\n1 Allen Institute for AI\\nshannons@allenai.org\\n2 Brown University\\nruochen zhang@brown.edu\\n3 Harvard University\\n{melissadell,jacob carlson}@fas.harvard.edu\\n4 University of Washington\\nbcgl@cs.washington.edu\\n5 University of Waterloo\\nw422li@uwaterloo.ca\\nAbstract. Recent advances in document image analysis (DIA) have been\\nprimarily driven by the application of neural networks. Ideally, research\\noutcomes could be easily deployed in production and extended for further\\ninvestigation. However, various factors like loosely organized codebases\\nand sophisticated model con\ufb01gurations complicate the easy reuse of im-\\nportant innovations by a wide audience. Though there have been on-going\\ne\ufb00orts to improve reusability and simplify deep learning (DL) model\\ndevelopment in disciplines like natural language processing and computer\\nvision, none of them are optimized for challenges in the domain of DIA.\\nThis represents a major gap in the existing toolkit, as DIA is central to\\nacademic research across a wide range of disciplines in the social sciences\\nand humanities. This paper introduces LayoutParser, an open-source\\nlibrary for streamlining the usage of DL in DIA research and applica-\\ntions. The core LayoutParser library comes with a set of simple and\\nintuitive interfaces for applying and customizing DL models for layout de-\\ntection, character recognition, and many other document processing tasks.\\nTo promote extensibility, LayoutParser also incorporates a community\\nplatform for sharing both pre-trained models and full document digiti-\\nzation pipelines. We demonstrate that", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-28", "text": "for sharing both pre-trained models and full document digiti-\\nzation pipelines. We demonstrate that LayoutParser is helpful for both\\nlightweight and large-scale digitization pipelines in real-word use cases.\\nThe library is publicly available at https://layout-parser.github.io.\\nKeywords: Document Image Analysis \u00b7 Deep Learning \u00b7 Layout Analysis\\n\u00b7 Character Recognition \u00b7 Open Source library \u00b7 Toolkit.\\n1\\nIntroduction\\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\\ndocument image analysis (DIA) tasks including document image classi\ufb01cation [11,\\narXiv:2103.15348v2 [cs.CV] 21 Jun 2021\\n', lookup_str='', metadata={'file_path': 'example_data/layout-parser-paper.pdf', 'page_number': 1, 'total_pages': 16, 'format': 'PDF 1.5', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'creator': 'LaTeX with hyperref', 'producer': 'pdfTeX-1.40.21', 'creationDate': 'D:20210622012710Z', 'modDate': 'D:20210622012710Z', 'trapped': '', 'encryption': None}, lookup_index=0)", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-29", "text": "Additionally, you can pass along any of the options from the PyMuPDF documentation as keyword arguments in the load call, and it will be pass along to the get_text() call.\nPyPDF Directory#\nLoad PDFs from directory\nfrom langchain.document_loaders import PyPDFDirectoryLoader\nloader = PyPDFDirectoryLoader(\"example_data/\")\ndocs = loader.load()\nUsing pdfplumber#\nLike PyMuPDF, the output Documents contain detailed metadata about the PDF and its pages, and returns one document per page.\nfrom langchain.document_loaders import PDFPlumberLoader\nloader = PDFPlumberLoader(\"example_data/layout-parser-paper.pdf\")\ndata = loader.load()\ndata[0]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-30", "text": "Document(page_content='LayoutParser: A Unified Toolkit for Deep\\nLearning Based Document Image Analysis\\nZejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\\nLee4, Jacob Carlson3, and Weining Li5\\n1 Allen Institute for AI\\n1202 shannons@allenai.org\\n2 Brown University\\nruochen zhang@brown.edu\\n3 Harvard University\\nnuJ {melissadell,jacob carlson}@fas.harvard.edu\\n4 University of Washington\\nbcgl@cs.washington.edu\\n12 5 University of Waterloo\\nw422li@uwaterloo.ca\\n]VC.sc[\\nAbstract. Recentadvancesindocumentimageanalysis(DIA)havebeen\\nprimarily driven by the application of neural networks. Ideally, research\\noutcomescouldbeeasilydeployedinproductionandextendedforfurther\\ninvestigation. However, various factors like loosely organized codebases\\nand sophisticated model configurations complicate the easy reuse of im-\\n2v84351.3012:viXra portantinnovationsbyawideaudience.Thoughtherehavebeenon-going\\nefforts to improve reusability and simplify deep learning (DL) model\\ndevelopmentindisciplineslikenaturallanguageprocessingandcomputer\\nvision, none of them are optimized for challenges in the domain of DIA.\\nThis represents a major gap in the existing toolkit, as DIA is central to\\nacademicresearchacross awiderangeof disciplinesinthesocialsciences\\nand humanities. This paper introduces LayoutParser, an open-source\\nlibrary for streamlining the usage of DL in DIA research and applica-\\ntions. The core LayoutParser library comes with a set of simple and\\nintuitiveinterfacesforapplyingandcustomizingDLmodelsforlayoutde-\\ntection,characterrecognition,andmanyotherdocumentprocessingtasks.\\nTo promote extensibility,", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-31", "text": "promote extensibility, LayoutParser also incorporates a community\\nplatform for sharing both pre-trained models and full document digiti-\\nzation pipelines. We demonstrate that LayoutParser is helpful for both\\nlightweight and large-scale digitization pipelines in real-word use cases.\\nThe library is publicly available at https://layout-parser.github.io.\\nKeywords: DocumentImageAnalysis\u00b7DeepLearning\u00b7LayoutAnalysis\\n\u00b7 Character Recognition \u00b7 Open Source library \u00b7 Toolkit.\\n1 Introduction\\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\\ndocumentimageanalysis(DIA)tasksincludingdocumentimageclassification[11,', metadata={'source': 'example_data/layout-parser-paper.pdf', 'file_path': 'example_data/layout-parser-paper.pdf', 'page': 1, 'total_pages': 16, 'Author': '', 'CreationDate': 'D:20210622012710Z', 'Creator': 'LaTeX with hyperref', 'Keywords': '', 'ModDate': 'D:20210622012710Z', 'PTEX.Fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live 2020) kpathsea version 6.3.2', 'Producer': 'pdfTeX-1.40.21', 'Subject': '', 'Title': '', 'Trapped': 'False'})", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "f0795d15fb2c-32", "text": "previous\nPandas DataFrame\nnext\nSitemap\n Contents\n \nUsing PyPDF\nUsing MathPix\nUsing Unstructured\nRetain Elements\nFetching remote PDFs using Unstructured\nUsing PyPDFium2\nUsing PDFMiner\nUsing PDFMiner to generate HTML text\nUsing PyMuPDF\nPyPDF Directory\nUsing pdfplumber\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html"}
+{"id": "29d469b34894-0", "text": ".ipynb\n.pdf\nDuckDB\n Contents \nSpecifying Which Columns are Content vs Metadata\nAdding Source to Metadata\nDuckDB#\nDuckDB is an in-process SQL OLAP database management system.\nLoad a DuckDB query with one document per row.\n#!pip install duckdb\nfrom langchain.document_loaders import DuckDBLoader\n%%file example.csv\nTeam,Payroll\nNationals,81.34\nReds,82.20\nWriting example.csv\nloader = DuckDBLoader(\"SELECT * FROM read_csv_auto('example.csv')\")\ndata = loader.load()\nprint(data)\n[Document(page_content='Team: Nationals\\nPayroll: 81.34', metadata={}), Document(page_content='Team: Reds\\nPayroll: 82.2', metadata={})]\nSpecifying Which Columns are Content vs Metadata#\nloader = DuckDBLoader(\n \"SELECT * FROM read_csv_auto('example.csv')\",\n page_content_columns=[\"Team\"],\n metadata_columns=[\"Payroll\"]\n)\ndata = loader.load()\nprint(data)\n[Document(page_content='Team: Nationals', metadata={'Payroll': 81.34}), Document(page_content='Team: Reds', metadata={'Payroll': 82.2})]\nAdding Source to Metadata#\nloader = DuckDBLoader(\n \"SELECT Team, Payroll, Team As source FROM read_csv_auto('example.csv')\",\n metadata_columns=[\"source\"]\n)\ndata = loader.load()\nprint(data)\n[Document(page_content='Team: Nationals\\nPayroll: 81.34\\nsource: Nationals', metadata={'source': 'Nationals'}), Document(page_content='Team: Reds\\nPayroll: 82.2\\nsource: Reds', metadata={'source': 'Reds'})]\nprevious\nDocugami\nnext\nFauna\n Contents", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/duckdb.html"}
+{"id": "29d469b34894-1", "text": "previous\nDocugami\nnext\nFauna\n Contents\n \nSpecifying Which Columns are Content vs Metadata\nAdding Source to Metadata\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/duckdb.html"}
+{"id": "8abf990c09cf-0", "text": ".ipynb\n.pdf\nGoogle BigQuery\n Contents \nBasic Usage\nSpecifying Which Columns are Content vs Metadata\nAdding Source to Metadata\nGoogle BigQuery#\nGoogle BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data.\nBigQuery is a part of the Google Cloud Platform.\nLoad a BigQuery query with one document per row.\n#!pip install google-cloud-bigquery\nfrom langchain.document_loaders import BigQueryLoader\nBASE_QUERY = '''\nSELECT\n id,\n dna_sequence,\n organism\nFROM (\n SELECT\n ARRAY (\n SELECT\n AS STRUCT 1 AS id, \"ATTCGA\" AS dna_sequence, \"Lokiarchaeum sp. (strain GC14_75).\" AS organism\n UNION ALL\n SELECT\n AS STRUCT 2 AS id, \"AGGCGA\" AS dna_sequence, \"Heimdallarchaeota archaeon (strain LC_2).\" AS organism\n UNION ALL\n SELECT\n AS STRUCT 3 AS id, \"TCCGGA\" AS dna_sequence, \"Acidianus hospitalis (strain W1).\" AS organism) AS new_array),\n UNNEST(new_array)\n'''\nBasic Usage#\nloader = BigQueryLoader(BASE_QUERY)\ndata = loader.load()\nprint(data)", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_bigquery.html"}
+{"id": "8abf990c09cf-1", "text": "loader = BigQueryLoader(BASE_QUERY)\ndata = loader.load()\nprint(data)\n[Document(page_content='id: 1\\ndna_sequence: ATTCGA\\norganism: Lokiarchaeum sp. (strain GC14_75).', lookup_str='', metadata={}, lookup_index=0), Document(page_content='id: 2\\ndna_sequence: AGGCGA\\norganism: Heimdallarchaeota archaeon (strain LC_2).', lookup_str='', metadata={}, lookup_index=0), Document(page_content='id: 3\\ndna_sequence: TCCGGA\\norganism: Acidianus hospitalis (strain W1).', lookup_str='', metadata={}, lookup_index=0)]\nSpecifying Which Columns are Content vs Metadata#\nloader = BigQueryLoader(BASE_QUERY, page_content_columns=[\"dna_sequence\", \"organism\"], metadata_columns=[\"id\"])\ndata = loader.load()\nprint(data)\n[Document(page_content='dna_sequence: ATTCGA\\norganism: Lokiarchaeum sp. (strain GC14_75).', lookup_str='', metadata={'id': 1}, lookup_index=0), Document(page_content='dna_sequence: AGGCGA\\norganism: Heimdallarchaeota archaeon (strain LC_2).', lookup_str='', metadata={'id': 2}, lookup_index=0), Document(page_content='dna_sequence: TCCGGA\\norganism: Acidianus hospitalis (strain W1).', lookup_str='', metadata={'id': 3}, lookup_index=0)]\nAdding Source to Metadata#\n# Note that the `id` column is being returned twice, with one instance aliased as `source`\nALIASED_QUERY = '''\nSELECT\n id,\n dna_sequence,\n organism,\n id as source\nFROM (\n SELECT\n ARRAY (\n SELECT", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_bigquery.html"}
+{"id": "8abf990c09cf-2", "text": "id as source\nFROM (\n SELECT\n ARRAY (\n SELECT\n AS STRUCT 1 AS id, \"ATTCGA\" AS dna_sequence, \"Lokiarchaeum sp. (strain GC14_75).\" AS organism\n UNION ALL\n SELECT\n AS STRUCT 2 AS id, \"AGGCGA\" AS dna_sequence, \"Heimdallarchaeota archaeon (strain LC_2).\" AS organism\n UNION ALL\n SELECT\n AS STRUCT 3 AS id, \"TCCGGA\" AS dna_sequence, \"Acidianus hospitalis (strain W1).\" AS organism) AS new_array),\n UNNEST(new_array)\n'''\nloader = BigQueryLoader(ALIASED_QUERY, metadata_columns=[\"source\"])\ndata = loader.load()\nprint(data)\n[Document(page_content='id: 1\\ndna_sequence: ATTCGA\\norganism: Lokiarchaeum sp. (strain GC14_75).\\nsource: 1', lookup_str='', metadata={'source': 1}, lookup_index=0), Document(page_content='id: 2\\ndna_sequence: AGGCGA\\norganism: Heimdallarchaeota archaeon (strain LC_2).\\nsource: 2', lookup_str='', metadata={'source': 2}, lookup_index=0), Document(page_content='id: 3\\ndna_sequence: TCCGGA\\norganism: Acidianus hospitalis (strain W1).\\nsource: 3', lookup_str='', metadata={'source': 3}, lookup_index=0)]\nprevious\nGit\nnext\nGoogle Cloud Storage Directory\n Contents\n \nBasic Usage\nSpecifying Which Columns are Content vs Metadata\nAdding Source to Metadata\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_bigquery.html"}
+{"id": "8abf990c09cf-3", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_bigquery.html"}
+{"id": "5a0d2c405fcc-0", "text": ".ipynb\n.pdf\nCSV\n Contents \nCustomizing the csv parsing and loading\nSpecify a column to identify the document source\nUnstructuredCSVLoader\nCSV#\nA comma-separated values (CSV) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas.\nLoad csv data with a single row per document.\nfrom langchain.document_loaders.csv_loader import CSVLoader\nloader = CSVLoader(file_path='./example_data/mlb_teams_2012.csv')\ndata = loader.load()\nprint(data)", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-1", "text": "[Document(page_content='Team: Nationals\\n\"Payroll (millions)\": 81.34\\n\"Wins\": 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='Team: Reds\\n\"Payroll (millions)\": 82.20\\n\"Wins\": 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='Team: Yankees\\n\"Payroll (millions)\": 197.96\\n\"Wins\": 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='Team: Giants\\n\"Payroll (millions)\": 117.62\\n\"Wins\": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='Team: Braves\\n\"Payroll (millions)\": 83.31\\n\"Wins\": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='Team: Athletics\\n\"Payroll (millions)\": 55.37\\n\"Wins\": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}, lookup_index=0), Document(page_content='Team: Rangers\\n\"Payroll (millions)\": 120.51\\n\"Wins\": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}, lookup_index=0), Document(page_content='Team: Orioles\\n\"Payroll", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-2", "text": "6}, lookup_index=0), Document(page_content='Team: Orioles\\n\"Payroll (millions)\": 81.43\\n\"Wins\": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0), Document(page_content='Team: Rays\\n\"Payroll (millions)\": 64.17\\n\"Wins\": 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='Team: Angels\\n\"Payroll (millions)\": 154.49\\n\"Wins\": 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='Team: Tigers\\n\"Payroll (millions)\": 132.30\\n\"Wins\": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='Team: Cardinals\\n\"Payroll (millions)\": 110.30\\n\"Wins\": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='Team: Dodgers\\n\"Payroll (millions)\": 95.14\\n\"Wins\": 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='Team: White Sox\\n\"Payroll (millions)\": 96.92\\n\"Wins\": 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0), Document(page_content='Team:", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-3", "text": "'row': 13}, lookup_index=0), Document(page_content='Team: Brewers\\n\"Payroll (millions)\": 97.65\\n\"Wins\": 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='Team: Phillies\\n\"Payroll (millions)\": 174.54\\n\"Wins\": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='Team: Diamondbacks\\n\"Payroll (millions)\": 74.28\\n\"Wins\": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), Document(page_content='Team: Pirates\\n\"Payroll (millions)\": 63.43\\n\"Wins\": 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0), Document(page_content='Team: Padres\\n\"Payroll (millions)\": 55.24\\n\"Wins\": 76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='Team: Mariners\\n\"Payroll (millions)\": 81.97\\n\"Wins\": 75', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='Team: Mets\\n\"Payroll (millions)\": 93.35\\n\"Wins\": 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}, lookup_index=0),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-4", "text": "'row': 20}, lookup_index=0), Document(page_content='Team: Blue Jays\\n\"Payroll (millions)\": 75.48\\n\"Wins\": 73', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='Team: Royals\\n\"Payroll (millions)\": 60.91\\n\"Wins\": 72', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 22}, lookup_index=0), Document(page_content='Team: Marlins\\n\"Payroll (millions)\": 118.07\\n\"Wins\": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}, lookup_index=0), Document(page_content='Team: Red Sox\\n\"Payroll (millions)\": 173.18\\n\"Wins\": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}, lookup_index=0), Document(page_content='Team: Indians\\n\"Payroll (millions)\": 78.43\\n\"Wins\": 68', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}, lookup_index=0), Document(page_content='Team: Twins\\n\"Payroll (millions)\": 94.08\\n\"Wins\": 66', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}, lookup_index=0), Document(page_content='Team: Rockies\\n\"Payroll (millions)\": 78.06\\n\"Wins\": 64', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27}, lookup_index=0),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-5", "text": "'row': 27}, lookup_index=0), Document(page_content='Team: Cubs\\n\"Payroll (millions)\": 88.19\\n\"Wins\": 61', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}, lookup_index=0), Document(page_content='Team: Astros\\n\"Payroll (millions)\": 60.65\\n\"Wins\": 55', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29}, lookup_index=0)]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-6", "text": "Customizing the csv parsing and loading#\nSee the csv module documentation for more information of what csv args are supported.\nloader = CSVLoader(file_path='./example_data/mlb_teams_2012.csv', csv_args={\n 'delimiter': ',',\n 'quotechar': '\"',\n 'fieldnames': ['MLB Team', 'Payroll in millions', 'Wins']\n})\ndata = loader.load()\nprint(data)", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-7", "text": "[Document(page_content='MLB Team: Team\\nPayroll in millions: \"Payroll (millions)\"\\nWins: \"Wins\"', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='MLB Team: Nationals\\nPayroll in millions: 81.34\\nWins: 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='MLB Team: Reds\\nPayroll in millions: 82.20\\nWins: 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='MLB Team: Yankees\\nPayroll in millions: 197.96\\nWins: 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='MLB Team: Giants\\nPayroll in millions: 117.62\\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='MLB Team: Braves\\nPayroll in millions: 83.31\\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}, lookup_index=0), Document(page_content='MLB Team: Athletics\\nPayroll in millions: 55.37\\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}, lookup_index=0), Document(page_content='MLB Team: Rangers\\nPayroll in millions:", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-8", "text": "lookup_index=0), Document(page_content='MLB Team: Rangers\\nPayroll in millions: 120.51\\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0), Document(page_content='MLB Team: Orioles\\nPayroll in millions: 81.43\\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='MLB Team: Rays\\nPayroll in millions: 64.17\\nWins: 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='MLB Team: Angels\\nPayroll in millions: 154.49\\nWins: 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='MLB Team: Tigers\\nPayroll in millions: 132.30\\nWins: 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='MLB Team: Cardinals\\nPayroll in millions: 110.30\\nWins: 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='MLB Team: Dodgers\\nPayroll in millions: 95.14\\nWins: 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0), Document(page_content='MLB Team: White Sox\\nPayroll in millions:", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-9", "text": "Document(page_content='MLB Team: White Sox\\nPayroll in millions: 96.92\\nWins: 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='MLB Team: Brewers\\nPayroll in millions: 97.65\\nWins: 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='MLB Team: Phillies\\nPayroll in millions: 174.54\\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), Document(page_content='MLB Team: Diamondbacks\\nPayroll in millions: 74.28\\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0), Document(page_content='MLB Team: Pirates\\nPayroll in millions: 63.43\\nWins: 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='MLB Team: Padres\\nPayroll in millions: 55.24\\nWins: 76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='MLB Team: Mariners\\nPayroll in millions: 81.97\\nWins: 75', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}, lookup_index=0), Document(page_content='MLB Team: Mets\\nPayroll in millions:", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-10", "text": "lookup_index=0), Document(page_content='MLB Team: Mets\\nPayroll in millions: 93.35\\nWins: 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='MLB Team: Blue Jays\\nPayroll in millions: 75.48\\nWins: 73', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 22}, lookup_index=0), Document(page_content='MLB Team: Royals\\nPayroll in millions: 60.91\\nWins: 72', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}, lookup_index=0), Document(page_content='MLB Team: Marlins\\nPayroll in millions: 118.07\\nWins: 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}, lookup_index=0), Document(page_content='MLB Team: Red Sox\\nPayroll in millions: 173.18\\nWins: 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}, lookup_index=0), Document(page_content='MLB Team: Indians\\nPayroll in millions: 78.43\\nWins: 68', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}, lookup_index=0), Document(page_content='MLB Team: Twins\\nPayroll in millions: 94.08\\nWins: 66', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27}, lookup_index=0), Document(page_content='MLB Team: Rockies\\nPayroll in millions:", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-11", "text": "lookup_index=0), Document(page_content='MLB Team: Rockies\\nPayroll in millions: 78.06\\nWins: 64', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}, lookup_index=0), Document(page_content='MLB Team: Cubs\\nPayroll in millions: 88.19\\nWins: 61', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29}, lookup_index=0), Document(page_content='MLB Team: Astros\\nPayroll in millions: 60.65\\nWins: 55', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 30}, lookup_index=0)]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-12", "text": "Specify a column to identify the document source#\nUse the source_column argument to specify a source for the document created from each row. Otherwise file_path will be used as the source for all documents created from the CSV file.\nThis is useful when using documents loaded from CSV files for chains that answer questions using sources.\nloader = CSVLoader(file_path='./example_data/mlb_teams_2012.csv', source_column=\"Team\")\ndata = loader.load()\nprint(data)", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-13", "text": "[Document(page_content='Team: Nationals\\n\"Payroll (millions)\": 81.34\\n\"Wins\": 98', lookup_str='', metadata={'source': 'Nationals', 'row': 0}, lookup_index=0), Document(page_content='Team: Reds\\n\"Payroll (millions)\": 82.20\\n\"Wins\": 97', lookup_str='', metadata={'source': 'Reds', 'row': 1}, lookup_index=0), Document(page_content='Team: Yankees\\n\"Payroll (millions)\": 197.96\\n\"Wins\": 95', lookup_str='', metadata={'source': 'Yankees', 'row': 2}, lookup_index=0), Document(page_content='Team: Giants\\n\"Payroll (millions)\": 117.62\\n\"Wins\": 94', lookup_str='', metadata={'source': 'Giants', 'row': 3}, lookup_index=0), Document(page_content='Team: Braves\\n\"Payroll (millions)\": 83.31\\n\"Wins\": 94', lookup_str='', metadata={'source': 'Braves', 'row': 4}, lookup_index=0), Document(page_content='Team: Athletics\\n\"Payroll (millions)\": 55.37\\n\"Wins\": 94', lookup_str='', metadata={'source': 'Athletics', 'row': 5}, lookup_index=0), Document(page_content='Team: Rangers\\n\"Payroll (millions)\": 120.51\\n\"Wins\": 93', lookup_str='', metadata={'source': 'Rangers', 'row': 6}, lookup_index=0), Document(page_content='Team: Orioles\\n\"Payroll (millions)\": 81.43\\n\"Wins\": 93', lookup_str='', metadata={'source': 'Orioles', 'row': 7}, lookup_index=0), Document(page_content='Team: Rays\\n\"Payroll", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-14", "text": "7}, lookup_index=0), Document(page_content='Team: Rays\\n\"Payroll (millions)\": 64.17\\n\"Wins\": 90', lookup_str='', metadata={'source': 'Rays', 'row': 8}, lookup_index=0), Document(page_content='Team: Angels\\n\"Payroll (millions)\": 154.49\\n\"Wins\": 89', lookup_str='', metadata={'source': 'Angels', 'row': 9}, lookup_index=0), Document(page_content='Team: Tigers\\n\"Payroll (millions)\": 132.30\\n\"Wins\": 88', lookup_str='', metadata={'source': 'Tigers', 'row': 10}, lookup_index=0), Document(page_content='Team: Cardinals\\n\"Payroll (millions)\": 110.30\\n\"Wins\": 88', lookup_str='', metadata={'source': 'Cardinals', 'row': 11}, lookup_index=0), Document(page_content='Team: Dodgers\\n\"Payroll (millions)\": 95.14\\n\"Wins\": 86', lookup_str='', metadata={'source': 'Dodgers', 'row': 12}, lookup_index=0), Document(page_content='Team: White Sox\\n\"Payroll (millions)\": 96.92\\n\"Wins\": 85', lookup_str='', metadata={'source': 'White Sox', 'row': 13}, lookup_index=0), Document(page_content='Team: Brewers\\n\"Payroll (millions)\": 97.65\\n\"Wins\": 83', lookup_str='', metadata={'source': 'Brewers', 'row': 14}, lookup_index=0), Document(page_content='Team: Phillies\\n\"Payroll (millions)\": 174.54\\n\"Wins\": 81', lookup_str='', metadata={'source': 'Phillies', 'row': 15}, lookup_index=0), Document(page_content='Team:", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-15", "text": "'row': 15}, lookup_index=0), Document(page_content='Team: Diamondbacks\\n\"Payroll (millions)\": 74.28\\n\"Wins\": 81', lookup_str='', metadata={'source': 'Diamondbacks', 'row': 16}, lookup_index=0), Document(page_content='Team: Pirates\\n\"Payroll (millions)\": 63.43\\n\"Wins\": 79', lookup_str='', metadata={'source': 'Pirates', 'row': 17}, lookup_index=0), Document(page_content='Team: Padres\\n\"Payroll (millions)\": 55.24\\n\"Wins\": 76', lookup_str='', metadata={'source': 'Padres', 'row': 18}, lookup_index=0), Document(page_content='Team: Mariners\\n\"Payroll (millions)\": 81.97\\n\"Wins\": 75', lookup_str='', metadata={'source': 'Mariners', 'row': 19}, lookup_index=0), Document(page_content='Team: Mets\\n\"Payroll (millions)\": 93.35\\n\"Wins\": 74', lookup_str='', metadata={'source': 'Mets', 'row': 20}, lookup_index=0), Document(page_content='Team: Blue Jays\\n\"Payroll (millions)\": 75.48\\n\"Wins\": 73', lookup_str='', metadata={'source': 'Blue Jays', 'row': 21}, lookup_index=0), Document(page_content='Team: Royals\\n\"Payroll (millions)\": 60.91\\n\"Wins\": 72', lookup_str='', metadata={'source': 'Royals', 'row': 22}, lookup_index=0), Document(page_content='Team: Marlins\\n\"Payroll (millions)\": 118.07\\n\"Wins\": 69', lookup_str='', metadata={'source': 'Marlins', 'row': 23}, lookup_index=0),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-16", "text": "metadata={'source': 'Marlins', 'row': 23}, lookup_index=0), Document(page_content='Team: Red Sox\\n\"Payroll (millions)\": 173.18\\n\"Wins\": 69', lookup_str='', metadata={'source': 'Red Sox', 'row': 24}, lookup_index=0), Document(page_content='Team: Indians\\n\"Payroll (millions)\": 78.43\\n\"Wins\": 68', lookup_str='', metadata={'source': 'Indians', 'row': 25}, lookup_index=0), Document(page_content='Team: Twins\\n\"Payroll (millions)\": 94.08\\n\"Wins\": 66', lookup_str='', metadata={'source': 'Twins', 'row': 26}, lookup_index=0), Document(page_content='Team: Rockies\\n\"Payroll (millions)\": 78.06\\n\"Wins\": 64', lookup_str='', metadata={'source': 'Rockies', 'row': 27}, lookup_index=0), Document(page_content='Team: Cubs\\n\"Payroll (millions)\": 88.19\\n\"Wins\": 61', lookup_str='', metadata={'source': 'Cubs', 'row': 28}, lookup_index=0), Document(page_content='Team: Astros\\n\"Payroll (millions)\": 60.65\\n\"Wins\": 55', lookup_str='', metadata={'source': 'Astros', 'row': 29}, lookup_index=0)]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-17", "text": "UnstructuredCSVLoader#\nYou can also load the table using the UnstructuredCSVLoader. One advantage of using UnstructuredCSVLoader is that if you use it in \"elements\" mode, an HTML representation of the table will be available in the metadata.\nfrom langchain.document_loaders.csv_loader import UnstructuredCSVLoader\nloader = UnstructuredCSVLoader(file_path='example_data/mlb_teams_2012.csv', mode=\"elements\")\ndocs = loader.load()\nprint(docs[0].metadata[\"text_as_html\"])\n\n \n \n Nationals | \n 81.34 | \n 98 | \n
\n \n Reds | \n 82.20 | \n 97 | \n
\n \n Yankees | \n 197.96 | \n 95 | \n
\n \n Giants | \n 117.62 | \n 94 | \n
\n \n Braves | \n 83.31 | \n 94 | \n
\n \n Athletics | \n 55.37 | \n 94 | \n
\n \n Rangers | \n 120.51 | \n 93 | ", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-18", "text": "120.51 | \n 93 | \n
\n \n Orioles | \n 81.43 | \n 93 | \n
\n \n Rays | \n 64.17 | \n 90 | \n
\n \n Angels | \n 154.49 | \n 89 | \n
\n \n Tigers | \n 132.30 | \n 88 | \n
\n \n Cardinals | \n 110.30 | \n 88 | \n
\n \n Dodgers | \n 95.14 | \n 86 | \n
\n \n White Sox | \n 96.92 | \n 85 | \n
\n \n Brewers | \n 97.65 | \n 83 | \n
\n \n Phillies | \n 174.54 | \n 81 | \n
\n \n Diamondbacks | ", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-19", "text": "
\n \n Diamondbacks | \n 74.28 | \n 81 | \n
\n \n Pirates | \n 63.43 | \n 79 | \n
\n \n Padres | \n 55.24 | \n 76 | \n
\n \n Mariners | \n 81.97 | \n 75 | \n
\n \n Mets | \n 93.35 | \n 74 | \n
\n \n Blue Jays | \n 75.48 | \n 73 | \n
\n \n Royals | \n 60.91 | \n 72 | \n
\n \n Marlins | \n 118.07 | \n 69 | \n
\n \n Red Sox | \n 173.18 | \n 69 | \n
\n \n Indians | \n 78.43 | \n 68 | ", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
+{"id": "5a0d2c405fcc-20", "text": "78.43 | \n 68 | \n
\n \n Twins | \n 94.08 | \n 66 | \n
\n \n Rockies | \n 78.06 | \n 64 | \n
\n \n Cubs | \n 88.19 | \n 61 | \n
\n \n Astros | \n 60.65 | \n 55 | \n
\n \n
\nprevious\nCopy Paste\nnext\nEmail\n Contents\n \nCustomizing the csv parsing and loading\nSpecify a column to identify the document source\nUnstructuredCSVLoader\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html"}
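Since "elements" mode stores this table as HTML in metadata["text_as_html"], a natural follow-up (a sketch, not from the original page; assumes pandas with an HTML parser such as lxml installed) is to recover a DataFrame from it:

```python
# Assumed follow-up: parse the HTML table produced by "elements" mode.
import pandas as pd

html = docs[0].metadata["text_as_html"]
tables = pd.read_html(html)  # returns a list of DataFrames, one per <table>
print(tables[0].head())
```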
+{"id": "a546852f0d85-0", "text": ".ipynb\n.pdf\nWikipedia\n Contents \nInstallation\nExamples\nWikipedia#\nWikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.\nThis notebook shows how to load wiki pages from wikipedia.org into the Document format that we use downstream.\nInstallation#\nFirst, you need to install wikipedia python package.\n#!pip install wikipedia\nExamples#\nWikipediaLoader has these arguments:\nquery: free text which used to find documents in Wikipedia\noptional lang: default=\u201den\u201d. Use it to search in a specific language part of Wikipedia\noptional load_max_docs: default=100. Use it to limit number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.\noptional load_all_available_meta: default=False. By default only the most important fields downloaded: Published (date when document was published/last updated), title, Summary. If True, other fields also downloaded.\nfrom langchain.document_loaders import WikipediaLoader\ndocs = WikipediaLoader(query='HUNTER X HUNTER', load_max_docs=2).load()\nlen(docs)\ndocs[0].metadata # meta-information of the Document\ndocs[0].page_content[:400] # a content of the Document \nprevious\nMediaWikiDump\nnext\nYouTube transcripts\n Contents\n \nInstallation\nExamples\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/wikipedia.html"}
+{"id": "661ffa15e3f9-0", "text": ".ipynb\n.pdf\nHuggingFace dataset\n Contents \nExample\nHuggingFace dataset#\nThe Hugging Face Hub is home to over 5,000 datasets in more than 100 languages that can be used for a broad range of tasks across NLP, Computer Vision, and Audio. They used for a diverse range of tasks such as translation,\nautomatic speech recognition, and image classification.\nThis notebook shows how to load Hugging Face Hub datasets to LangChain.\nfrom langchain.document_loaders import HuggingFaceDatasetLoader\ndataset_name=\"imdb\"\npage_content_column=\"text\"\nloader=HuggingFaceDatasetLoader(dataset_name,page_content_column)\ndata = loader.load()\ndata[:15]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html"}
+{"id": "661ffa15e3f9-1", "text": "data = loader.load()\ndata[:15]\n[Document(page_content='I rented I AM CURIOUS-YELLOW from my video store because of all the controversy that surrounded it when it was first released in 1967. I also heard that at first it was seized by U.S. customs if it ever tried to enter this country, therefore being a fan of films considered \"controversial\" I really had to see this for myself.
The plot is centered around a young Swedish drama student named Lena who wants to learn everything she can about life. In particular she wants to focus her attentions to making some sort of documentary on what the average Swede thought about certain political issues such as the Vietnam War and race issues in the United States. In between asking politicians and ordinary denizens of Stockholm about their opinions on politics, she has sex with her drama teacher, classmates, and married men.
What kills me about I AM CURIOUS-YELLOW is that 40 years ago, this was considered pornographic. Really, the sex and nudity scenes are few and far between, even then it\\'s not shot like some cheaply made porno. While my countrymen mind find it shocking, in reality sex and nudity are a major staple in Swedish cinema. Even Ingmar Bergman, arguably their answer to good old boy John Ford, had sex scenes in his films.
I do commend the filmmakers for the fact that any sex shown in the film is shown for artistic purposes rather than just to shock people and make money to be shown in pornographic theaters in America. I AM CURIOUS-YELLOW is a good film for anyone wanting to study the meat and potatoes (no pun intended) of Swedish cinema. But really, this film doesn\\'t have much of a plot.', metadata={'label': 0}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html"}
+{"id": "661ffa15e3f9-2", "text": "Document(page_content='\"I Am Curious: Yellow\" is a risible and pretentious steaming pile. It doesn\\'t matter what one\\'s political views are because this film can hardly be taken seriously on any level. As for the claim that frontal male nudity is an automatic NC-17, that isn\\'t true. I\\'ve seen R-rated films with male nudity. Granted, they only offer some fleeting views, but where are the R-rated films with gaping vulvas and flapping labia? Nowhere, because they don\\'t exist. The same goes for those crappy cable shows: schlongs swinging in the breeze but not a clitoris in sight. And those pretentious indie movies like The Brown Bunny, in which we\\'re treated to the site of Vincent Gallo\\'s throbbing johnson, but not a trace of pink visible on Chloe Sevigny. Before crying (or implying) \"double-standard\" in matters of nudity, the mentally obtuse should take into account one unavoidably obvious anatomical difference between men and women: there are no genitals on display when actresses appears nude, and the same cannot be said for a man. In fact, you generally won\\'t see female genitals in an American film in anything short of porn or explicit erotica. This alleged double-standard is less a double standard than an admittedly depressing ability to come to terms culturally with the insides of women\\'s bodies.', metadata={'label': 0}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html"}
+{"id": "661ffa15e3f9-3", "text": "Document(page_content=\"If only to avoid making this type of film in the future. This film is interesting as an experiment but tells no cogent story.
One might feel virtuous for sitting thru it because it touches on so many IMPORTANT issues but it does so without any discernable motive. The viewer comes away with no new perspectives (unless one comes up with one while one's mind wanders, as it will invariably do during this pointless film).
One might better spend one's time staring out a window at a tree growing.
\", metadata={'label': 0}),\n Document(page_content=\"This film was probably inspired by Godard's Masculin, f\u00e9minin and I urge you to see that film instead.
The film has two strong elements and those are, (1) the realistic acting (2) the impressive, undeservedly good, photo. Apart from that, what strikes me most is the endless stream of silliness. Lena Nyman has to be most annoying actress in the world. She acts so stupid and with all the nudity in this film,...it's unattractive. Comparing to Godard's film, intellectuality has been replaced with stupidity. Without going too far on this subject, I would say that follows from the difference in ideals between the French and the Swedish society.
A movie of its time, and place. 2/10.\", metadata={'label': 0}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html"}
+{"id": "661ffa15e3f9-4", "text": "Document(page_content='Oh, brother...after hearing about this ridiculous film for umpteen years all I can think of is that old Peggy Lee song..
\"Is that all there is??\" ...I was just an early teen when this smoked fish hit the U.S. I was too young to get in the theater (although I did manage to sneak into \"Goodbye Columbus\"). Then a screening at a local film museum beckoned - Finally I could see this film, except now I was as old as my parents were when they schlepped to see it!!
The ONLY reason this film was not condemned to the anonymous sands of time was because of the obscenity case sparked by its U.S. release. MILLIONS of people flocked to this stinker, thinking they were going to see a sex film...Instead, they got lots of closeups of gnarly, repulsive Swedes, on-street interviews in bland shopping malls, asinie political pretension...and feeble who-cares simulated sex scenes with saggy, pale actors.
Cultural icon, holy grail, historic artifact..whatever this thing was, shred it, burn it, then stuff the ashes in a lead box!
Elite esthetes still scrape to find value in its boring pseudo revolutionary political spewings..But if it weren\\'t for the censorship scandal, it would have been ignored, then forgotten.
Instead, the \"I Am Blank, Blank\" rhythymed title was repeated endlessly for years as a titilation for porno films (I am Curious, Lavender - for gay films, I Am Curious, Black - for blaxploitation films, etc..) and every ten years or so the thing rises from the dead, to be viewed by a new generation of suckers who want to see that \"naughty sex film\" that \"revolutionized", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html"}
+{"id": "661ffa15e3f9-5", "text": "new generation of suckers who want to see that \"naughty sex film\" that \"revolutionized the film industry\"...
Yeesh, avoid like the plague..Or if you MUST see it - rent the video and fast forward to the \"dirty\" parts, just to get it over with.
', metadata={'label': 0}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html"}
+{"id": "661ffa15e3f9-6", "text": "Document(page_content=\"I would put this at the top of my list of films in the category of unwatchable trash! There are films that are bad, but the worst kind are the ones that are unwatchable but you are suppose to like them because they are supposed to be good for you! The sex sequences, so shocking in its day, couldn't even arouse a rabbit. The so called controversial politics is strictly high school sophomore amateur night Marxism. The film is self-consciously arty in the worst sense of the term. The photography is in a harsh grainy black and white. Some scenes are out of focus or taken from the wrong angle. Even the sound is bad! And some people call this art?
\", metadata={'label': 0}),\n Document(page_content=\"Whoever wrote the screenplay for this movie obviously never consulted any books about Lucille Ball, especially her autobiography. I've never seen so many mistakes in a biopic, ranging from her early years in Celoron and Jamestown to her later years with Desi. I could write a whole list of factual errors, but it would go on for pages. In all, I believe that Lucille Ball is one of those inimitable people who simply cannot be portrayed by anyone other than themselves. If I were Lucie Arnaz and Desi, Jr., I would be irate at how many mistakes were made in this film. The filmmakers tried hard, but the movie seems awfully sloppy to me.\", metadata={'label': 0}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html"}
+{"id": "661ffa15e3f9-7", "text": "Document(page_content='When I first saw a glimpse of this movie, I quickly noticed the actress who was playing the role of Lucille Ball. Rachel York\\'s portrayal of Lucy is absolutely awful. Lucille Ball was an astounding comedian with incredible talent. To think about a legend like Lucille Ball being portrayed the way she was in the movie is horrendous. I cannot believe out of all the actresses in the world who could play a much better Lucy, the producers decided to get Rachel York. She might be a good actress in other roles but to play the role of Lucille Ball is tough. It is pretty hard to find someone who could resemble Lucille Ball, but they could at least find someone a bit similar in looks and talent. If you noticed York\\'s portrayal of Lucy in episodes of I Love Lucy like the chocolate factory or vitavetavegamin, nothing is similar in any way-her expression, voice, or movement.
To top it all off, Danny Pino playing Desi Arnaz is horrible. Pino does not qualify to play as Ricky. He\\'s small and skinny, his accent is unreal, and once again, his acting is unbelievable. Although Fred and Ethel were not similar either, they were not as bad as the characters of Lucy and Ricky.
Overall, extremely horrible casting and the story is badly told. If people want to understand the real life situation of Lucille Ball, I suggest watching A&E Biography of Lucy and Desi, read the book from Lucille Ball herself, or PBS\\' American Masters: Finding Lucy. If you want to see a docudrama, \"Before the Laughter\" would be a better choice. The casting of Lucille Ball and Desi Arnaz in \"Before the Laughter\" is much better compared to this. At least, a similar aspect is shown rather than nothing.', metadata={'label': 0}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html"}
+{"id": "661ffa15e3f9-8", "text": "Document(page_content='Who are these \"They\"- the actors? the filmmakers? Certainly couldn\\'t be the audience- this is among the most air-puffed productions in existence. It\\'s the kind of movie that looks like it was a lot of fun to shoot\\x97 TOO much fun, nobody is getting any actual work done, and that almost always makes for a movie that\\'s no fun to watch.
Ritter dons glasses so as to hammer home his character\\'s status as a sort of doppleganger of the bespectacled Bogdanovich; the scenes with the breezy Ms. Stratten are sweet, but have an embarrassing, look-guys-I\\'m-dating-the-prom-queen feel to them. Ben Gazzara sports his usual cat\\'s-got-canary grin in a futile attempt to elevate the meager plot, which requires him to pursue Audrey Hepburn with all the interest of a narcoleptic at an insomnia clinic. In the meantime, the budding couple\\'s respective children (nepotism alert: Bogdanovich\\'s daughters) spew cute and pick up some fairly disturbing pointers on \\'love\\' while observing their parents. (Ms. Hepburn, drawing on her dignity, manages to rise above the proceedings- but she has the monumental challenge of playing herself, ostensibly.) Everybody looks great, but so what? It\\'s a movie and we can expect that much, if that\\'s what you\\'re looking for you\\'d be better off picking up a copy of Vogue.
Oh- and it has to be mentioned that Colleen Camp thoroughly annoys, even apart from her singing, which, while competent, is wholly unconvincing... the country and western numbers are woefully mismatched with the standards on the soundtrack. Surely this is NOT what Gershwin (who wrote the song from which the movie\\'s title is derived)", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html"}
+{"id": "661ffa15e3f9-9", "text": "NOT what Gershwin (who wrote the song from which the movie\\'s title is derived) had in mind; his stage musicals of the 20\\'s may have been slight, but at least they were long on charm. \"They All Laughed\" tries to coast on its good intentions, but nobody- least of all Peter Bogdanovich - has the good sense to put on the brakes.
Due in no small part to the tragic death of Dorothy Stratten, this movie has a special place in the heart of Mr. Bogdanovich- he even bought it back from its producers, then distributed it on his own and went bankrupt when it didn\\'t prove popular. His rise and fall is among the more sympathetic and tragic of Hollywood stories, so there\\'s no joy in criticizing the film... there _is_ real emotional investment in Ms. Stratten\\'s scenes. But \"Laughed\" is a faint echo of \"The Last Picture Show\", \"Paper Moon\" or \"What\\'s Up, Doc\"- following \"Daisy Miller\" and \"At Long Last Love\", it was a thundering confirmation of the phase from which P.B. has never emerged.
All in all, though, the movie is harmless, only a waste of rental. I want to watch people having a good time, I\\'ll go to the park on a sunny day. For filmic expressions of joy and love, I\\'ll stick to Ernest Lubitsch and Jaques Demy...', metadata={'label': 0}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html"}
+{"id": "661ffa15e3f9-10", "text": "Document(page_content=\"This is said to be a personal film for Peter Bogdonavitch. He based it on his life but changed things around to fit the characters, who are detectives. These detectives date beautiful models and have no problem getting them. Sounds more like a millionaire playboy filmmaker than a detective, doesn't it? This entire movie was written by Peter, and it shows how out of touch with real people he was. You're supposed to write what you know, and he did that, indeed. And leaves the audience bored and confused, and jealous, for that matter. This is a curio for people who want to see Dorothy Stratten, who was murdered right after filming. But Patti Hanson, who would, in real life, marry Keith Richards, was also a model, like Stratten, but is a lot better and has a more ample part. In fact, Stratten's part seemed forced; added. She doesn't have a lot to do with the story, which is pretty convoluted to begin with. All in all, every character in this film is somebody that very few people can relate with, unless you're millionaire from Manhattan with beautiful supermodels at your beckon call. For the rest of us, it's an irritating snore fest. That's what happens when you're out of touch. You entertain your few friends with inside jokes, and bore all the rest.\", metadata={'label': 0}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html"}
+{"id": "661ffa15e3f9-11", "text": "Document(page_content='It was great to see some of my favorite stars of 30 years ago including John Ritter, Ben Gazarra and Audrey Hepburn. They looked quite wonderful. But that was it. They were not given any characters or good lines to work with. I neither understood or cared what the characters were doing.
Some of the smaller female roles were fine, Patty Henson and Colleen Camp were quite competent and confident in their small sidekick parts. They showed some talent and it is sad they didn\\'t go on to star in more and better films. Sadly, I didn\\'t think Dorothy Stratten got a chance to act in this her only important film role.
The film appears to have some fans, and I was very open-minded when I started watching it. I am a big Peter Bogdanovich fan and I enjoyed his last movie, \"Cat\\'s Meow\" and all his early ones from \"Targets\" to \"Nickleodeon\". So, it really surprised me that I was barely able to keep awake watching this one.
It is ironic that this movie is about a detective agency where the detectives and clients get romantically involved with each other. Five years later, Bogdanovich\\'s ex-girlfriend, Cybil Shepherd had a hit television series called \"Moonlighting\" stealing the story idea from Bogdanovich. Of course, there was a great difference in that the series relied on tons of witty dialogue, while this tries to make do with slapstick and a few screwball lines.
Bottom line: It ain\\'t no \"Paper Moon\" and only a very pale version of \"What\\'s Up, Doc\".', metadata={'label': 0}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html"}
+{"id": "661ffa15e3f9-12", "text": "Document(page_content=\"I can't believe that those praising this movie herein aren't thinking of some other film. I was prepared for the possibility that this would be awful, but the script (or lack thereof) makes for a film that's also pointless. On the plus side, the general level of craft on the part of the actors and technical crew is quite competent, but when you've got a sow's ear to work with you can't make a silk purse. Ben G fans should stick with just about any other movie he's been in. Dorothy S fans should stick to Galaxina. Peter B fans should stick to Last Picture Show and Target. Fans of cheap laughs at the expense of those who seem to be asking for it should stick to Peter B's amazingly awful book, Killing of the Unicorn.\", metadata={'label': 0}),\n Document(page_content='Never cast models and Playboy bunnies in your films! Bob Fosse\\'s \"Star 80\" about Dorothy Stratten, of whom Bogdanovich was obsessed enough to have married her SISTER after her murder at the hands of her low-life husband, is a zillion times more interesting than Dorothy herself on the silver screen. Patty Hansen is no actress either..I expected to see some sort of lost masterpiece a la Orson Welles but instead got Audrey Hepburn cavorting in jeans and a god-awful \"poodlesque\" hair-do....Very disappointing....\"Paper Moon\" and \"The Last Picture Show\" I could watch again and again. This clunker I could barely sit through once. This movie was reputedly not released because of the brouhaha surrounding Ms. Stratten\\'s tawdry death; I think the real reason was because it was so bad!', metadata={'label': 0}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html"}
+{"id": "661ffa15e3f9-13", "text": "Document(page_content=\"Its not the cast. A finer group of actors, you could not find. Its not the setting. The director is in love with New York City, and by the end of the film, so are we all! Woody Allen could not improve upon what Bogdonovich has done here. If you are going to fall in love, or find love, Manhattan is the place to go. No, the problem with the movie is the script. There is none. The actors fall in love at first sight, words are unnecessary. In the director's own experience in Hollywood that is what happens when they go to work on the set. It is reality to him, and his peers, but it is a fantasy to most of us in the real world. So, in the end, the movie is hollow, and shallow, and message-less.\", metadata={'label': 0}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html"}
+{"id": "661ffa15e3f9-14", "text": "Document(page_content='Today I found \"They All Laughed\" on VHS on sale in a rental. It was a really old and very used VHS, I had no information about this movie, but I liked the references listed on its cover: the names of Peter Bogdanovich, Audrey Hepburn, John Ritter and specially Dorothy Stratten attracted me, the price was very low and I decided to risk and buy it. I searched IMDb, and the User Rating of 6.0 was an excellent reference. I looked in \"Mick Martin & Marsha Porter Video & DVD Guide 2003\" and \\x96 wow \\x96 four stars! So, I decided that I could not waste more time and immediately see it. Indeed, I have just finished watching \"They All Laughed\" and I found it a very boring overrated movie. The characters are badly developed, and I spent lots of minutes to understand their roles in the story. The plot is supposed to be funny (private eyes who fall in love for the women they are chasing), but I have not laughed along the whole story. The coincidences, in a huge city like New York, are ridiculous. Ben Gazarra as an attractive and very seductive man, with the women falling for him as if her were a Brad Pitt, Antonio Banderas or George Clooney, is quite ridiculous. In the end, the greater attractions certainly are the presence of the Playboy centerfold and playmate of the year Dorothy Stratten, murdered by her husband pretty after the release of this movie, and whose life was showed in \"Star 80\" and \"Death of a Centerfold: The Dorothy Stratten Story\"; the amazing beauty of the sexy Patti Hansen, the future Mrs. Keith Richards; the always wonderful, even being fifty-two years old, Audrey Hepburn; and the song \"Amigo\", from Roberto Carlos. Although I do not like him, Roberto Carlos has been", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html"}
+{"id": "661ffa15e3f9-15", "text": "song \"Amigo\", from Roberto Carlos. Although I do not like him, Roberto Carlos has been the most popular Brazilian singer since the end of the 60\\'s and is called by his fans as \"The King\". I will keep this movie in my collection only because of these attractions (manly Dorothy Stratten). My vote is four.
Title (Brazil): \"Muito Riso e Muita Alegria\" (\"Many Laughs and Lots of Happiness\")', metadata={'label': 0})]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html"}
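The imdb Documents above carry the dataset's integer sentiment label in metadata. As a small sketch (not from the original page), that makes it easy to subset the loaded documents with plain list comprehensions before any downstream indexing:

```python
# Illustrative only: `data` is the list of Documents loaded from the imdb
# dataset above; in imdb, label 0 is negative and label 1 is positive.
negative = [doc for doc in data if doc.metadata.get("label") == 0]
positive = [doc for doc in data if doc.metadata.get("label") == 1]
print(len(negative), len(positive))
```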
+{"id": "661ffa15e3f9-16", "text": "Example#\nIn this example, we use data from a dataset to answer a question\nfrom langchain.indexes import VectorstoreIndexCreator\nfrom langchain.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoader\ndataset_name=\"tweet_eval\"\npage_content_column=\"text\"\nname=\"stance_climate\"\nloader=HuggingFaceDatasetLoader(dataset_name,page_content_column,name)\nindex = VectorstoreIndexCreator().from_loaders([loader])\nFound cached dataset tweet_eval\nUsing embedded DuckDB without persistence: data will be transient\nquery = \"What are the most used hashtag?\"\nresult = index.query(query)\nresult\n' The most used hashtags in this context are #UKClimate2015, #Sustainability, #TakeDownTheFlag, #LoveWins, #CSOTA, #ClimateSummitoftheAmericas, #SM, and #SocialMedia.'\nprevious\nHacker News\nnext\niFixit\n Contents\n \nExample\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html"}
+{"id": "534b4c7edced-0", "text": ".ipynb\n.pdf\nObsidian\nObsidian#\nObsidian is a powerful and extensible knowledge base\nthat works on top of your local folder of plain text files.\nThis notebook covers how to load documents from an Obsidian database.\nSince Obsidian is just stored on disk as a folder of Markdown files, the loader just takes a path to this directory.\nObsidian files also sometimes contain metadata which is a YAML block at the top of the file. These values will be added to the document\u2019s metadata. (ObsidianLoader can also be passed a collect_metadata=False argument to disable this behavior.)\nfrom langchain.document_loaders import ObsidianLoader\nloader = ObsidianLoader(\"\")\ndocs = loader.load()\nprevious\nNotion DB 1/2\nnext\nPsychic\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/obsidian.html"}
+{"id": "40ba3d7d7368-0", "text": ".ipynb\n.pdf\niFixit\n Contents \nSearching iFixit using /suggest\niFixit#\niFixit is the largest, open repair community on the web. The site contains nearly 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is licensed under CC-BY-NC-SA 3.0.\nThis loader will allow you to download the text of a repair guide, text of Q&A\u2019s and wikis from devices on iFixit using their open APIs. It\u2019s incredibly useful for context related to technical documents and answers to questions about devices in the corpus of data on iFixit.\nfrom langchain.document_loaders import IFixitLoader\nloader = IFixitLoader(\"https://www.ifixit.com/Teardown/Banana+Teardown/811\")\ndata = loader.load()\ndata", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html"}
+{"id": "40ba3d7d7368-1", "text": "data = loader.load()\ndata\n[Document(page_content=\"# Banana Teardown\\nIn this teardown, we open a banana to see what's inside. Yellow and delicious, but most importantly, yellow.\\n\\n\\n###Tools Required:\\n\\n - Fingers\\n\\n - Teeth\\n\\n - Thumbs\\n\\n\\n###Parts Required:\\n\\n - None\\n\\n\\n## Step 1\\nTake one banana from the bunch.\\nDon't squeeze too hard!\\n\\n\\n## Step 2\\nHold the banana in your left hand and grip the stem between your right thumb and forefinger.\\n\\n\\n## Step 3\\nPull the stem downward until the peel splits.\\n\\n\\n## Step 4\\nInsert your thumbs into the split of the peel and pull the two sides apart.\\nExpose the top of the banana. It may be slightly squished from pulling on the stem, but this will not affect the flavor.\\n\\n\\n## Step 5\\nPull open the peel, starting from your original split, and opening it along the length of the banana.\\n\\n\\n## Step 6\\nRemove fruit from peel.\\n\\n\\n## Step 7\\nEat and enjoy!\\nThis is where you'll need your teeth.\\nDo not choke on banana!\\n\", lookup_str='', metadata={'source': 'https://www.ifixit.com/Teardown/Banana+Teardown/811', 'title': 'Banana Teardown'}, lookup_index=0)]\nloader = IFixitLoader(\"https://www.ifixit.com/Answers/View/318583/My+iPhone+6+is+typing+and+opening+apps+by+itself\")\ndata = loader.load()\ndata", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html"}
+{"id": "40ba3d7d7368-2", "text": "[Document(page_content='# My iPhone 6 is typing and opening apps by itself\\nmy iphone 6 is typing and opening apps by itself. How do i fix this. I just bought it last week.\\nI restored as manufactures cleaned up the screen\\nthe problem continues\\n\\n## 27 Answers\\n\\nFilter by: \\n\\nMost Helpful\\nNewest\\nOldest\\n\\n### Accepted Answer\\nHi,\\nWhere did you buy it? If you bought it from Apple or from an official retailer like Carphone warehouse etc. Then you\\'ll have a year warranty and can get it replaced free.\\nIf you bought it second hand, from a third part repair shop or online, then it may still have warranty, unless it is refurbished and has been repaired elsewhere.\\nIf this is the case, it may be the screen that needs replacing to solve your issue.\\nEither way, wherever you got it, it\\'s best to return it and get a refund or a replacement device. :-)\\n\\n\\n\\n### Most Helpful Answer\\nI had the same issues, screen freezing, opening apps by itself, selecting the screens and typing on it\\'s own. I first suspected aliens and then ghosts and then hackers.\\niPhone 6 is weak physically and tend to bend on pressure. And my phone had no case or cover.\\nI took the phone to apple stores and they said sensors need to be replaced and possibly screen replacement as well. My phone is just 17 months old.\\nHere is what I did two days ago and since then it is working like a charm..\\nHold the phone in portrait (as if watching a movie). Twist it very very gently. do it few times.Rest the phone for 10 mins (put it on a flat surface). You can now notice those self typing things gone and screen getting stabilized.\\nThen, reset the hardware (hold the power and home button till the screen goes off and comes back with apple", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html"}
+{"id": "40ba3d7d7368-3", "text": "reset the hardware (hold the power and home button till the screen goes off and comes back with apple logo). release the buttons when you see this.\\nThen, connect to your laptop and log in to iTunes and reset your phone completely. (please take a back-up first).\\nAnd your phone should be good to use again.\\nWhat really happened here for me is that the sensors might have stuck to the screen and with mild twisting, they got disengaged/released.\\nI posted this in Apple Community and the moderators deleted it, for the best reasons known to them.\\nInstead of throwing away your phone (or selling cheaply), try this and you could be saving your phone.\\nLet me know how it goes.\\n\\n\\n\\n### Other Answer\\nIt was the charging cord! I bought a gas station braided cord and it was the culprit. Once I plugged my OEM cord into the phone the GHOSTS went away.\\n\\n\\n\\n### Other Answer\\nI\\'ve same issue that I just get resolved. I first tried to restore it from iCloud back, however it was not a software issue or any virus issue, so after restore same problem continues. Then I get my phone to local area iphone repairing lab, and they detected that it is an LCD issue. LCD get out of order without any reason (It was neither hit or nor slipped, but LCD get out of order all and sudden, while using it) it started opening things at random. I get LCD replaced with new one, that cost me $80.00 in total ($70.00 LCD charges + $10.00 as labor charges to fix it). iPhone is back to perfect mode now. It was iphone 6s. Thanks.\\n\\n\\n\\n### Other Answer\\nI was having the same issue with my 6 plus, I took it to a repair shop, they opened the phone, disconnected the three ribbons the screen has, blew up", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html"}
+{"id": "40ba3d7d7368-4", "text": "a repair shop, they opened the phone, disconnected the three ribbons the screen has, blew up and cleaned the connectors and connected the screen again and it solved the issue\u2026 it\u2019s hardware, not software.\\n\\n\\n\\n### Other Answer\\nHey.\\nJust had this problem now. As it turns out, you just need to plug in your phone. I use a case and when I took it off I noticed that there was a lot of dust and dirt around the areas that the case didn\\'t cover. I shined a light in my ports and noticed they were filled with dust. Tomorrow I plan on using pressurized air to clean it out and the problem should be solved. If you plug in your phone and unplug it and it stops the issue, I recommend cleaning your phone thoroughly.\\n\\n\\n\\n### Other Answer\\nI simply changed the power supply and problem was gone. The block that plugs in the wall not the sub cord. The cord was fine but not the block.\\n\\n\\n\\n### Other Answer\\nSomeone ask! I purchased my iPhone 6s Plus for 1000 from at&t. Before I touched it, I purchased a otter defender case. I read where at&t said touch desease was due to dropping! Bullshit!! I am 56 I have never dropped it!! Looks brand new! Never dropped or abused any way! I have my original charger. I am going to clean it and try everyone\u2019s advice. It really sucks! I had 40,000,000 on my heart of Vegas slots! I play every day. I would be spinning and my fingers were no where max buttons and it would light up and switch to max. It did it 3 times before I caught it light up by its self. It sucks. Hope I can fix it!!!!\\n\\n\\n\\n### Other Answer\\nNo answer, but same", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html"}
+{"id": "40ba3d7d7368-5", "text": "Hope I can fix it!!!!\\n\\n\\n\\n### Other Answer\\nNo answer, but same problem with iPhone 6 plus--random, self-generated jumping amongst apps and typing on its own--plus freezing regularly (aha--maybe that\\'s what the \"plus\" in \"6 plus\" refers to?). An Apple Genius recommended upgrading to iOS 11.3.1 from 11.2.2, to see if that fixed the trouble. If it didn\\'t, Apple will sell me a new phone for $168! Of couese the OS upgrade didn\\'t fix the problem. Thanks for helping me figure out that it\\'s most likely a hardware problem--which the \"genius\" probably knows too.\\nI\\'m getting ready to go Android.\\n\\n\\n\\n### Other Answer\\nI experienced similar ghost touches. Two weeks ago, I changed my iPhone 6 Plus shell (I had forced the phone into it because it\u2019s pretty tight), and also put a new glass screen protector (the edges of the protector don\u2019t stick to the screen, weird, so I brushed pressure on the edges at times to see if they may smooth out one day miraculously). I\u2019m not sure if I accidentally bend the phone when I installed the shell, or, if I got a defective glass protector that messes up the touch sensor. Well, yesterday was the worse day, keeps dropping calls and ghost pressing keys for me when I was on a call. I got fed up, so I removed the screen protector, and so far problems have not reoccurred yet. I\u2019m crossing my fingers that problems indeed solved.\\n\\n\\n\\n### Other Answer\\nthank you so much for this post! i was struggling doing the reset because i cannot type userids and passwords correctly because the iphone 6 plus i have kept on typing letters incorrectly. I have been doing it for a day until i come across this article.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html"}
+{"id": "40ba3d7d7368-6", "text": "on typing letters incorrectly. I have been doing it for a day until i come across this article. Very helpful! God bless you!!\\n\\n\\n\\n### Other Answer\\nI just turned it off, and turned it back on.\\n\\n\\n\\n### Other Answer\\nMy problem has not gone away completely but its better now i changed my charger and turned off prediction ....,,,now it rarely happens\\n\\n\\n\\n### Other Answer\\nI tried all of the above. I then turned off my home cleaned it with isopropyl alcohol 90%. Then I baked it in my oven on warm for an hour and a half over foil. Took it out and set it cool completely on the glass top stove. Then I turned on and it worked.\\n\\n\\n\\n### Other Answer\\nI think at& t should man up and fix your phone for free! You pay a lot for a Apple they should back it. I did the next 30 month payments and finally have it paid off in June. My iPad sept. Looking forward to a almost 100 drop in my phone bill! Now this crap!!! Really\\n\\n\\n\\n### Other Answer\\nIf your phone is JailBroken, suggest downloading a virus. While all my symptoms were similar, there was indeed a virus/malware on the phone which allowed for remote control of my iphone (even while in lock mode). My mistake for buying a third party iphone i suppose. Anyway i have since had the phone restored to factory and everything is working as expected for now. I will of course keep you posted if this changes. Thanks to all for the helpful posts, really helped me narrow a few things down.\\n\\n\\n\\n### Other Answer\\nWhen my phone was doing this, it ended up being the screen protector that i got from 5 below. I took it off and it stopped. I ordered more protectors from amazon and replaced", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html"}
+{"id": "40ba3d7d7368-7", "text": "below. I took it off and it stopped. I ordered more protectors from amazon and replaced it\\n\\n\\n\\n### Other Answer\\niPhone 6 Plus first generation\u2026.I had the same issues as all above, apps opening by themselves, self typing, ultra sensitive screen, items jumping around all over\u2026.it even called someone on FaceTime twice by itself when I was not in the room\u2026..I thought the phone was toast and i\u2019d have to buy a new one took me a while to figure out but it was the extra cheap block plug I bought at a dollar store for convenience of an extra charging station when I move around the house from den to living room\u2026..cord was fine but bought a new Apple brand block plug\u2026no more problems works just fine now. This issue was a recent event so had to narrow things down to what had changed recently to my phone so I could figure it out.\\nI even had the same problem on a laptop with documents opening up by themselves\u2026..a laptop that was plugged in to the same wall plug as my phone charger with the dollar store block plug\u2026.until I changed the block plug.\\n\\n\\n\\n### Other Answer\\nHad the problem: Inherited a 6s Plus from my wife. She had no problem with it.\\nLooks like it was merely the cheap phone case I purchased on Amazon. It was either pinching the edges or torquing the screen/body of the phone. Problem solved.\\n\\n\\n\\n### Other Answer\\nI bought my phone on march 6 and it was a brand new, but It sucks me uo because it freezing, shaking and control by itself. I went to the store where I bought this and I told them to replacr it, but they told me I have to pay it because Its about lcd issue. Please help me what other ways to fix it. Or should I try to remove the screen or should I follow your step above.\\n\\n\\n\\n### Other", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html"}
+{"id": "40ba3d7d7368-8", "text": "I try to remove the screen or should I follow your step above.\\n\\n\\n\\n### Other Answer\\nI tried everything and it seems to come back to needing the original iPhone cable\u2026or at least another 1 that would have come with another iPhone\u2026not the $5 Store fast charging cables. My original cable is pretty beat up - like most that I see - but I\u2019ve been beaten up much MUCH less by sticking with its use! I didn\u2019t find that the casing/shell around it or not made any diff.\\n\\n\\n\\n### Other Answer\\ngreat now I have to wait one more hour to reset my phone and while I was tryin to connect my phone to my computer the computer also restarted smh does anyone else knows how I can get my phone to work\u2026 my problem is I have a black dot on the bottom left of my screen an it wont allow me to touch a certain part of my screen unless I rotate my phone and I know the password but the first number is a 2 and it won\\'t let me touch 1,2, or 3 so now I have to find a way to get rid of my password and all of a sudden my phone wants to touch stuff on its own which got my phone disabled many times to the point where I have to wait a whole hour and I really need to finish something on my phone today PLEASE HELPPPP\\n\\n\\n\\n### Other Answer\\nIn my case , iphone 6 screen was faulty. I got it replaced at local repair shop, so far phone is working fine.\\n\\n\\n\\n### Other Answer\\nthis problem in iphone 6 has many different scenarios and solutions, first try to reconnect the lcd screen to the motherboard again, if didnt solve, try to replace the lcd connector on the motherboard, if not solved, then remains two issues, lcd screen it self or touch IC. in my country some repair shops just change them all for almost 40$", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html"}
+{"id": "40ba3d7d7368-9", "text": "self or touch IC. in my country some repair shops just change them all for almost 40$ since they dont want to troubleshoot one by one. readers of this comment also should know that partial screen not responding in other iphone models might also have an issue in LCD connector on the motherboard, specially if you lock/unlock screen and screen works again for sometime. lcd connectors gets disconnected lightly from the motherboard due to multiple falls and hits after sometime. best of luck for all\\n\\n\\n\\n### Other Answer\\nI am facing the same issue whereby these ghost touches type and open apps , I am using an original Iphone cable , how to I fix this issue.\\n\\n\\n\\n### Other Answer\\nThere were two issues with the phone I had troubles with. It was my dads and turns out he carried it in his pocket. The phone itself had a little bend in it as a result. A little pressure in the opposite direction helped the issue. But it also had a tiny crack in the screen which wasnt obvious, once we added a screen protector this fixed the issues entirely.\\n\\n\\n\\n### Other Answer\\nI had the same problem with my 64Gb iPhone 6+. Tried a lot of things and eventually downloaded all my images and videos to my PC and restarted the phone - problem solved. Been working now for two days.', lookup_str='', metadata={'source': 'https://www.ifixit.com/Answers/View/318583/My+iPhone+6+is+typing+and+opening+apps+by+itself', 'title': 'My iPhone 6 is typing and opening apps by itself'}, lookup_index=0)]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html"}
+{"id": "40ba3d7d7368-10", "text": "loader = IFixitLoader(\"https://www.ifixit.com/Device/Standard_iPad\")\ndata = loader.load()\ndata\n[Document(page_content=\"Standard iPad\\nThe standard edition of the tablet computer made by Apple.\\n== Background Information ==\\n\\nOriginally introduced in January 2010, the iPad is Apple's standard edition of their tablet computer. In total, there have been ten generations of the standard edition of the iPad.\\n\\n== Additional Information ==\\n\\n* [link|https://www.apple.com/ipad-select/|Official Apple Product Page]\\n* [link|https://en.wikipedia.org/wiki/IPad#iPad|Official iPad Wikipedia]\", lookup_str='', metadata={'source': 'https://www.ifixit.com/Device/Standard_iPad', 'title': 'Standard iPad'}, lookup_index=0)]\nSearching iFixit using /suggest#\nIf you\u2019re looking for a more general way to search iFixit based on a keyword or phrase, the /suggest endpoint will return content related to the search term, then the loader will load the content from each of the suggested items and prep and return the documents.\ndata = IFixitLoader.load_suggestions(\"Banana\")\ndata", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html"}
+{"id": "40ba3d7d7368-11", "text": "data = IFixitLoader.load_suggestions(\"Banana\")\ndata\n[Document(page_content='Banana\\nTasty fruit. Good source of potassium. Yellow.\\n== Background Information ==\\n\\nCommonly misspelled, this wildly popular, phone shaped fruit serves as nutrition and an obstacle to slow down vehicles racing close behind you. Also used commonly as a synonym for \u201ccrazy\u201d or \u201cinsane\u201d.\\n\\nBotanically, the banana is considered a berry, although it isn\u2019t included in the culinary berry category containing strawberries and raspberries. Belonging to the genus Musa, the banana originated in Southeast Asia and Australia. Now largely cultivated throughout South and Central America, bananas are largely available throughout the world. They are especially valued as a staple food group in developing countries due to the banana tree\u2019s ability to produce fruit year round.\\n\\nThe banana can be easily opened. Simply remove the outer yellow shell by cracking the top of the stem. Then, with the broken piece, peel downward on each side until the fruity components on the inside are exposed. Once the shell has been removed it cannot be put back together.\\n\\n== Technical Specifications ==\\n\\n* Dimensions: Variable depending on genetics of the parent tree\\n* Color: Variable depending on ripeness, region, and season\\n\\n== Additional Information ==\\n\\n[link|https://en.wikipedia.org/wiki/Banana|Wiki: Banana]', lookup_str='', metadata={'source': 'https://www.ifixit.com/Device/Banana', 'title': 'Banana'}, lookup_index=0),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html"}
+{"id": "40ba3d7d7368-12", "text": "Document(page_content=\"# Banana Teardown\\nIn this teardown, we open a banana to see what's inside. Yellow and delicious, but most importantly, yellow.\\n\\n\\n###Tools Required:\\n\\n - Fingers\\n\\n - Teeth\\n\\n - Thumbs\\n\\n\\n###Parts Required:\\n\\n - None\\n\\n\\n## Step 1\\nTake one banana from the bunch.\\nDon't squeeze too hard!\\n\\n\\n## Step 2\\nHold the banana in your left hand and grip the stem between your right thumb and forefinger.\\n\\n\\n## Step 3\\nPull the stem downward until the peel splits.\\n\\n\\n## Step 4\\nInsert your thumbs into the split of the peel and pull the two sides apart.\\nExpose the top of the banana. It may be slightly squished from pulling on the stem, but this will not affect the flavor.\\n\\n\\n## Step 5\\nPull open the peel, starting from your original split, and opening it along the length of the banana.\\n\\n\\n## Step 6\\nRemove fruit from peel.\\n\\n\\n## Step 7\\nEat and enjoy!\\nThis is where you'll need your teeth.\\nDo not choke on banana!\\n\", lookup_str='', metadata={'source': 'https://www.ifixit.com/Teardown/Banana+Teardown/811', 'title': 'Banana Teardown'}, lookup_index=0)]\nprevious\nHuggingFace dataset\nnext\nIMSDb\n Contents\n \nSearching iFixit using /suggest\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html"}
+{"id": "3348e37f46a0-0", "text": ".ipynb\n.pdf\nIMSDb\nIMSDb#\nIMSDb is the Internet Movie Script Database.\nThis covers how to load IMSDb webpages into a document format that we can use downstream.\nfrom langchain.document_loaders import IMSDbLoader\nloader = IMSDbLoader(\"https://imsdb.com/scripts/BlacKkKlansman.html\")\ndata = loader.load()\ndata[0].page_content[:500]\n'\\n\\r\\n\\r\\n\\r\\n\\r\\n BLACKKKLANSMAN\\r\\n \\r\\n \\r\\n \\r\\n \\r\\n Written by\\r\\n\\r\\n Charlie Wachtel & David Rabinowitz\\r\\n\\r\\n and\\r\\n\\r\\n Kevin Willmott & Spike Lee\\r\\n\\r\\n\\r\\n\\r\\n\\r\\n\\r\\n\\r\\n\\r\\n\\r\\n FADE IN:\\r\\n \\r\\n SCENE FROM \"GONE WITH'\ndata[0].metadata\n{'source': 'https://imsdb.com/scripts/BlacKkKlansman.html'}\nprevious\niFixit\nnext\nMediaWikiDump\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/imsdb.html"}
+{"id": "5d87c73a53e5-0", "text": ".ipynb\n.pdf\nAirtable\nAirtable#\n! pip install pyairtable\nfrom langchain.document_loaders import AirtableLoader\nGet your API key here.\nGet ID of your base here.\nGet your table ID from the table url as shown here.\napi_key=\"xxx\"\nbase_id=\"xxx\"\ntable_id=\"xxx\"\nloader = AirtableLoader(api_key,table_id,base_id)\ndocs = loader.load()\nReturns each table row as dict.\nlen(docs)\n3\neval(docs[0].page_content)\n{'id': 'recF3GbGZCuh9sXIQ',\n 'createdTime': '2023-06-09T04:47:21.000Z',\n 'fields': {'Priority': 'High',\n 'Status': 'In progress',\n 'Name': 'Document Splitters'}}\nprevious\nDocument Loaders\nnext\nOpenAIWhisperParser\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/airtable.html"}
+{"id": "1c5d8357e3f9-0", "text": ".ipynb\n.pdf\nEmail\n Contents \nUsing Unstructured\nRetain Elements\nUsing OutlookMessageLoader\nEmail#\nThis notebook shows how to load email (.eml) or Microsoft Outlook (.msg) files.\nUsing Unstructured#\n#!pip install unstructured\nfrom langchain.document_loaders import UnstructuredEmailLoader\nloader = UnstructuredEmailLoader('example_data/fake-email.eml')\ndata = loader.load()\ndata\n[Document(page_content='This is a test email to use for unit tests.\\n\\nImportant points:\\n\\nRoses are red\\n\\nViolets are blue', metadata={'source': 'example_data/fake-email.eml'})]\nRetain Elements#\nUnder the hood, Unstructured creates different \u201celements\u201d for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode=\"elements\".\nloader = UnstructuredEmailLoader('example_data/fake-email.eml', mode=\"elements\")\ndata = loader.load()\ndata[0]\nDocument(page_content='This is a test email to use for unit tests.', lookup_str='', metadata={'source': 'example_data/fake-email.eml'}, lookup_index=0)\nUsing OutlookMessageLoader#\n#!pip install extract_msg\nfrom langchain.document_loaders import OutlookMessageLoader\nloader = OutlookMessageLoader('example_data/fake-email.msg')\ndata = loader.load()\ndata[0]\nDocument(page_content='This is a test email to experiment with the MS Outlook MSG Extractor\\r\\n\\r\\n\\r\\n-- \\r\\n\\r\\n\\r\\nKind regards\\r\\n\\r\\n\\r\\n\\r\\n\\r\\nBrian Zhou\\r\\n\\r\\n', metadata={'subject': 'Test for TIF files', 'sender': 'Brian Zhou ', 'date': 'Mon, 18 Nov 2013 16:26:24 +0800'})", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/email.html"}
+{"id": "1c5d8357e3f9-1", "text": "previous\nCSV\nnext\nEPub\n Contents\n \nUsing Unstructured\nRetain Elements\nUsing OutlookMessageLoader\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/email.html"}
+{"id": "7c0a6b80cb9b-0", "text": ".ipynb\n.pdf\nAzure Blob Storage File\nAzure Blob Storage File#\nAzure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol, Network File System (NFS) protocol, and Azure Files REST API.\nThis covers how to load document objects from a Azure Files.\n#!pip install azure-storage-blob\nfrom langchain.document_loaders import AzureBlobStorageFileLoader\nloader = AzureBlobStorageFileLoader(conn_str='', container='', blob_name='')\nloader.load()\n[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpxvave6wl/fake.docx'}, lookup_index=0)]\nprevious\nAzure Blob Storage Container\nnext\nBlackboard\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/azure_blob_storage_file.html"}
+{"id": "f764976663a1-0", "text": ".ipynb\n.pdf\nFile Directory\n Contents \nShow a progress bar\nUse multithreading\nChange loader class\nAuto detect file encodings with TextLoader\nA. Default Behavior\nB. Silent fail\nC. Auto detect encodings\nFile Directory#\nThis covers how to use the DirectoryLoader to load all documents in a directory. Under the hood, by default this uses the UnstructuredLoader\nfrom langchain.document_loaders import DirectoryLoader\nWe can use the glob parameter to control which files to load. Note that here it doesn\u2019t load the .rst file or the .ipynb files.\nloader = DirectoryLoader('../', glob=\"**/*.md\")\ndocs = loader.load()\nlen(docs)\n1\nShow a progress bar#\nBy default a progress bar will not be shown. To show a progress bar, install the tqdm library (e.g. pip install tqdm), and set the show_progress parameter to True.\n%pip install tqdm\nloader = DirectoryLoader('../', glob=\"**/*.md\", show_progress=True)\ndocs = loader.load()\nRequirement already satisfied: tqdm in /Users/jon/.pyenv/versions/3.9.16/envs/microbiome-app/lib/python3.9/site-packages (4.65.0)\n0it [00:00, ?it/s]\nUse multithreading#\nBy default the loading happens in one thread. In order to utilize several threads set the use_multithreading flag to true.\nloader = DirectoryLoader('../', glob=\"**/*.md\", use_multithreading=True)\ndocs = loader.load()\nChange loader class#\nBy default this uses the UnstructuredLoader class. However, you can change up the type of loader pretty easily.\nfrom langchain.document_loaders import TextLoader\nloader = DirectoryLoader('../', glob=\"**/*.md\", loader_cls=TextLoader)\ndocs = loader.load()", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/file_directory.html"}
+{"id": "f764976663a1-1", "text": "docs = loader.load()\nlen(docs)\n1\nIf you need to load Python source code files, use the PythonLoader.\nfrom langchain.document_loaders import PythonLoader\nloader = DirectoryLoader('../../../../../', glob=\"**/*.py\", loader_cls=PythonLoader)\ndocs = loader.load()\nlen(docs)\n691\nAuto detect file encodings with TextLoader#\nIn this example we will see some strategies that can be useful when loading a big list of arbitrary files from a directory using the TextLoader class.\nFirst to illustrate the problem, let\u2019s try to load multiple text with arbitrary encodings.\npath = '../../../../../tests/integration_tests/examples'\nloader = DirectoryLoader(path, glob=\"**/*.txt\", loader_cls=TextLoader)\nA. Default Behavior#\nloader.load()\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Traceback (most recent call last) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 /data/source/langchain/langchain/document_loaders/text.py:29 in load \u2502\n\u2502 \u2502\n\u2502 26 \u2502 \u2502 text = \"\" \u2502\n\u2502 27 \u2502 \u2502 with open(self.file_path, encoding=self.encoding) as f: \u2502\n\u2502 28 \u2502 \u2502 \u2502 try: \u2502\n\u2502 \u2771 29 \u2502 \u2502 \u2502 \u2502 text = f.read() \u2502\n\u2502 30 \u2502 \u2502 \u2502 except UnicodeDecodeError as e: \u2502\n\u2502 31 \u2502 \u2502 \u2502 \u2502 if self.autodetect_encoding: \u2502\n\u2502 32 \u2502 \u2502 \u2502 \u2502 \u2502 detected_encodings = self.detect_file_encodings() \u2502\n\u2502 \u2502\n\u2502 /home/spike/.pyenv/versions/3.9.11/lib/python3.9/codecs.py:322 in decode \u2502", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/file_directory.html"}
+{"id": "f764976663a1-2", "text": "\u2502 \u2502\n\u2502 319 \u2502 def decode(self, input, final=False): \u2502\n\u2502 320 \u2502 \u2502 # decode input (taking the buffer into account) \u2502\n\u2502 321 \u2502 \u2502 data = self.buffer + input \u2502\n\u2502 \u2771 322 \u2502 \u2502 (result, consumed) = self._buffer_decode(data, self.errors, final) \u2502\n\u2502 323 \u2502 \u2502 # keep undecoded input until the next call \u2502\n\u2502 324 \u2502 \u2502 self.buffer = data[consumed:] \u2502\n\u2502 325 \u2502 \u2502 return result \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xca in position 0: invalid continuation byte\nThe above exception was the direct cause of the following exception:\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Traceback (most recent call last) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 in :1 \u2502\n\u2502 \u2502\n\u2502 \u2771 1 loader.load() \u2502\n\u2502 2 \u2502\n\u2502 \u2502\n\u2502 /data/source/langchain/langchain/document_loaders/directory.py:84 in load \u2502\n\u2502 \u2502\n\u2502 81 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 if self.silent_errors: \u2502\n\u2502 82 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 logger.warning(e) \u2502\n\u2502 83 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 else: \u2502\n\u2502 \u2771 84 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 raise e \u2502", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/file_directory.html"}
+{"id": "f764976663a1-3", "text": "\u2502 85 \u2502 \u2502 \u2502 \u2502 \u2502 finally: \u2502\n\u2502 86 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 if pbar: \u2502\n\u2502 87 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 pbar.update(1) \u2502\n\u2502 \u2502\n\u2502 /data/source/langchain/langchain/document_loaders/directory.py:78 in load \u2502\n\u2502 \u2502\n\u2502 75 \u2502 \u2502 \u2502 if i.is_file(): \u2502\n\u2502 76 \u2502 \u2502 \u2502 \u2502 if _is_visible(i.relative_to(p)) or self.load_hidden: \u2502\n\u2502 77 \u2502 \u2502 \u2502 \u2502 \u2502 try: \u2502\n\u2502 \u2771 78 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 sub_docs = self.loader_cls(str(i), **self.loader_kwargs).load() \u2502\n\u2502 79 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 docs.extend(sub_docs) \u2502\n\u2502 80 \u2502 \u2502 \u2502 \u2502 \u2502 except Exception as e: \u2502\n\u2502 81 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 if self.silent_errors: \u2502\n\u2502 \u2502\n\u2502 /data/source/langchain/langchain/document_loaders/text.py:44 in load \u2502\n\u2502 \u2502\n\u2502 41 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 except UnicodeDecodeError: \u2502\n\u2502 42 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 continue \u2502\n\u2502 43 \u2502 \u2502 \u2502 \u2502 else: \u2502\n\u2502 \u2771 44 \u2502 \u2502 \u2502 \u2502 \u2502 raise RuntimeError(f\"Error loading {self.file_path}\") from e \u2502", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/file_directory.html"}
+{"id": "f764976663a1-4", "text": "\u2502 45 \u2502 \u2502 \u2502 except Exception as e: \u2502\n\u2502 46 \u2502 \u2502 \u2502 \u2502 raise RuntimeError(f\"Error loading {self.file_path}\") from e \u2502\n\u2502 47 \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nRuntimeError: Error loading ../../../../../tests/integration_tests/examples/example-non-utf8.txt\nThe file example-non-utf8.txt uses a different encoding the load() function fails with a helpful message indicating which file failed decoding.\nWith the default behavior of TextLoader any failure to load any of the documents will fail the whole loading process and no documents are loaded.\nB. Silent fail#\nWe can pass the parameter silent_errors to the DirectoryLoader to skip the files which could not be loaded and continue the load process.\nloader = DirectoryLoader(path, glob=\"**/*.txt\", loader_cls=TextLoader, silent_errors=True)\ndocs = loader.load()\nError loading ../../../../../tests/integration_tests/examples/example-non-utf8.txt\ndoc_sources = [doc.metadata['source'] for doc in docs]\ndoc_sources\n['../../../../../tests/integration_tests/examples/whatsapp_chat.txt',\n '../../../../../tests/integration_tests/examples/example-utf8.txt']\nC. Auto detect encodings#\nWe can also ask TextLoader to auto detect the file encoding before failing, by passing the autodetect_encoding to the loader class.\ntext_loader_kwargs={'autodetect_encoding': True}\nloader = DirectoryLoader(path, glob=\"**/*.txt\", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)\ndocs = loader.load()\ndoc_sources = [doc.metadata['source'] for doc in docs]\ndoc_sources\n['../../../../../tests/integration_tests/examples/example-non-utf8.txt',\n '../../../../../tests/integration_tests/examples/whatsapp_chat.txt',", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/file_directory.html"}
+{"id": "f764976663a1-5", "text": "'../../../../../tests/integration_tests/examples/whatsapp_chat.txt',\n '../../../../../tests/integration_tests/examples/example-utf8.txt']\nprevious\nFacebook Chat\nnext\nHTML\n Contents\n \nShow a progress bar\nUse multithreading\nChange loader class\nAuto detect file encodings with TextLoader\nA. Default Behavior\nB. Silent fail\nC. Auto detect encodings\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/file_directory.html"}
+{"id": "0a69dd533542-0", "text": ".ipynb\n.pdf\nCollege Confidential\nCollege Confidential#\nCollege Confidential gives information on 3,800+ colleges and universities.\nThis covers how to load College Confidential webpages into a document format that we can use downstream.\nfrom langchain.document_loaders import CollegeConfidentialLoader\nloader = CollegeConfidentialLoader(\"https://www.collegeconfidential.com/colleges/brown-university/\")\ndata = loader.load()\ndata", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/college_confidential.html"}
+{"id": "0a69dd533542-1", "text": "[Document(page_content='\\n\\n\\n\\n\\n\\n\\n\\nA68FEB02-9D19-447C-B8BC-818149FD6EAF\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n Media (2)\\n \\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nAbout Brown\\n\\n\\n\\n\\n\\n\\nBrown University Overview\\nBrown University is a private, nonprofit school in the urban setting of Providence, Rhode Island. Brown was founded in 1764 and the school currently enrolls around 10,696 students a year, including 7,349 undergraduates. Brown provides on-campus housing for students. Most students live in off campus housing.\\n\ud83d\udcc6 Mark your calendar! January 5, 2023 is the final deadline to submit an application for the Fall 2023 semester. \\nThere are many ways for students to get involved at Brown! \\nLove music or performing? Join a campus band, sing in a chorus, or perform with one of the school\\'s theater groups.\\nInterested in journalism or communications? Brown students can write for the campus newspaper, host a radio show or be a producer for the student-run television channel.\\nInterested in joining a fraternity or sorority? Brown has fraternities and sororities.\\nPlanning to play sports? Brown has", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/college_confidential.html"}
+{"id": "0a69dd533542-2", "text": "Brown has fraternities and sororities.\\nPlanning to play sports? Brown has many options for athletes. See them all and learn more about life at Brown on the Student Life page.\\n\\n\\n\\n2022 Brown Facts At-A-Glance\\n\\n\\n\\n\\n\\nAcademic Calendar\\nOther\\n\\n\\nOverall Acceptance Rate\\n6%\\n\\n\\nEarly Decision Acceptance Rate\\n16%\\n\\n\\nEarly Action Acceptance Rate\\nEA not offered\\n\\n\\nApplicants Submitting SAT scores\\n51%\\n\\n\\nTuition\\n$62,680\\n\\n\\nPercent of Need Met\\n100%\\n\\n\\nAverage First-Year Financial Aid Package\\n$59,749\\n\\n\\n\\n\\nIs Brown a Good School?\\n\\nDifferent people have different ideas about what makes a \"good\" school. Some factors that can help you determine what a good school for you might be include admissions criteria, acceptance rate, tuition costs, and more.\\nLet\\'s take a look at these factors to get a clearer sense of what Brown offers and if it could be the right college for you.\\nBrown Acceptance Rate 2022\\nIt is extremely difficult to get into Brown. Around 6% of applicants get into Brown each year. In 2022, just 2,568 out of the 46,568 students who applied were accepted.\\nRetention and Graduation Rates at Brown\\nRetention refers to the number of students that stay enrolled at a school over time. This is a way to get a sense of how satisfied students are with their school experience, and if they have the support necessary to succeed in college. \\nApproximately 98% of first-year, full-time undergrads who start at Browncome back their sophomore year. 95% of Brown undergrads graduate within six years. The average six-year graduation rate for U.S. colleges and universities is 61% for public schools, and 67% for", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/college_confidential.html"}
+{"id": "0a69dd533542-3", "text": "for U.S. colleges and universities is 61% for public schools, and 67% for private, non-profit schools.\\nJob Outcomes for Brown Grads\\nJob placement stats are a good resource for understanding the value of a degree from Brown by providing a look on how job placement has gone for other grads. \\nCheck with Brown directly, for information on any information on starting salaries for recent grads.\\nBrown\\'s Endowment\\nAn endowment is the total value of a school\\'s investments, donations, and assets. Endowment is not necessarily an indicator of the quality of a school, but it can give you a sense of how much money a college can afford to invest in expanding programs, improving facilities, and support students. \\nAs of 2022, the total market value of Brown University\\'s endowment was $4.7 billion. The average college endowment was $905 million in 2021. The school spends $34,086 for each full-time student enrolled. \\nTuition and Financial Aid at Brown\\nTuition is another important factor when choose a college. Some colleges may have high tuition, but do a better job at meeting students\\' financial need.\\nBrown meets 100% of the demonstrated financial need for undergraduates. The average financial aid package for a full-time, first-year student is around $59,749 a year. \\nThe average student debt for graduates in the class of 2022 was around $24,102 per student, not including those with no debt. For context, compare this number with the average national debt, which is around $36,000 per borrower. \\nThe 2023-2024 FAFSA Opened on October 1st, 2022\\nSome financial aid is awarded on a first-come, first-served basis, so fill out the FAFSA as soon as you can. Visit the FAFSA website to apply for student aid.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/college_confidential.html"}
+{"id": "0a69dd533542-4", "text": "as soon as you can. Visit the FAFSA website to apply for student aid. Remember, the first F in FAFSA stands for FREE! You should never have to pay to submit the Free Application for Federal Student Aid (FAFSA), so be very wary of anyone asking you for money.\\nLearn more about Tuition and Financial Aid at Brown.\\nBased on this information, does Brown seem like a good fit? Remember, a school that is perfect for one person may be a terrible fit for someone else! So ask yourself: Is Brown a good school for you?\\nIf Brown University seems like a school you want to apply to, click the heart button to save it to your college list.\\n\\nStill Exploring Schools?\\nChoose one of the options below to learn more about Brown:\\nAdmissions\\nStudent Life\\nAcademics\\nTuition & Aid\\nBrown Community Forums\\nThen use the college admissions predictor to take a data science look at your chances of getting into some of the best colleges and universities in the U.S.\\nWhere is Brown?\\nBrown is located in the urban setting of Providence, Rhode Island, less than an hour from Boston. \\nIf you would like to see Brown for yourself, plan a visit. The best way to reach campus is to take Interstate 95 to Providence, or book a flight to the nearest airport, T.F. Green.\\nYou can also take a virtual campus tour to get a sense of what Brown and Providence are like without leaving home.\\nConsidering Going to School in Rhode Island?\\nSee a full list of colleges in Rhode Island and save your favorites to your college list.\\n\\n\\n\\nCollege Info\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n Providence, RI 02912\\n", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/college_confidential.html"}
+{"id": "0a69dd533542-5", "text": "RI 02912\\n \\n\\n\\n\\n Campus Setting: Urban\\n \\n\\n\\n\\n\\n\\n\\n\\n (401) 863-2378\\n \\n\\n Website\\n \\n\\n Virtual Tour\\n \\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nBrown Application Deadline\\n\\n\\n\\nFirst-Year Applications are Due\\n\\nJan 5\\n\\nTransfer Applications are Due\\n\\nMar 1\\n\\n\\n\\n \\n The deadline for Fall first-year applications to Brown is \\n Jan 5. \\n \\n \\n \\n\\n \\n", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/college_confidential.html"}
+{"id": "0a69dd533542-6", "text": "\\n\\n \\n The deadline for Fall transfer applications to Brown is \\n Mar 1. \\n \\n \\n \\n\\n \\n Check the school website \\n for more information about deadlines for specific programs or special admissions programs\\n \\n \\n\\n\\n\\n\\n\\n\\nBrown ACT Scores\\n\\n\\n\\n\\nic_reflect\\n\\n\\n\\n\\n\\n\\n\\n\\nACT Range\\n\\n\\n \\n 33 - 35\\n \\n \\n\\n\\n\\nEstimated Chance of Acceptance by ACT Score\\n\\n\\nACT Score\\nEstimated Chance\\n\\n\\n35 and Above\\nGood\\n\\n\\n33 to 35\\nAvg\\n\\n\\n33 and Less\\nLow\\n\\n\\n\\n\\n\\n\\nStand out on your college application\\n\\n\u2022 Qualify for scholarships\\n\u2022 Most students who retest improve their score\\n\\nSponsored by ACT\\n\\n\\n Take the Next ACT Test\\n", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/college_confidential.html"}
+{"id": "0a69dd533542-7", "text": "Take the Next ACT Test\\n \\n\\n\\n\\n\\n\\nBrown SAT Scores\\n\\n\\n\\n\\nic_reflect\\n\\n\\n\\n\\n\\n\\n\\n\\nComposite SAT Range\\n\\n\\n \\n 720 - 770\\n \\n \\n\\n\\n\\nic_reflect\\n\\n\\n\\n\\n\\n\\n\\n\\nMath SAT Range\\n\\n\\n \\n Not available\\n \\n \\n\\n\\n\\nic_reflect\\n\\n\\n\\n\\n\\n\\n\\n\\nReading SAT Range\\n\\n\\n \\n 740 - 800\\n \\n \\n\\n\\n\\n\\n\\n\\n Brown Tuition & Fees\\n \\n\\n\\n\\nTuition & Fees\\n\\n\\n\\n", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/college_confidential.html"}
+{"id": "0a69dd533542-8", "text": "& Fees\\n\\n\\n\\n $82,286\\n \\nIn State\\n\\n\\n\\n\\n $82,286\\n \\nOut-of-State\\n\\n\\n\\n\\n\\n\\n\\nCost Breakdown\\n\\n\\nIn State\\n\\n\\nOut-of-State\\n\\n\\n\\n\\nState Tuition\\n\\n\\n\\n $62,680\\n \\n\\n\\n\\n $62,680\\n \\n\\n\\n\\n\\nFees\\n\\n\\n\\n $2,466\\n \\n\\n\\n\\n $2,466\\n", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/college_confidential.html"}
+{"id": "0a69dd533542-9", "text": "\\n\\n\\n\\n\\nHousing\\n\\n\\n\\n $15,840\\n \\n\\n\\n\\n $15,840\\n \\n\\n\\n\\n\\nBooks\\n\\n\\n\\n $1,300\\n \\n\\n\\n\\n $1,300\\n \\n\\n\\n\\n\\n\\n Total (Before Financial Aid):\\n \\n\\n\\n\\n $82,286\\n", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/college_confidential.html"}
+{"id": "0a69dd533542-10", "text": "\\n\\n\\n\\n $82,286\\n \\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nStudent Life\\n\\n Wondering what life at Brown is like? There are approximately \\n 10,696 students enrolled at \\n Brown, \\n including 7,349 undergraduate students and \\n 3,347 graduate students.\\n 96% percent of students attend school \\n full-time, \\n 6% percent are from RI and \\n 94% percent of students are from other states.\\n \\n\\n\\n\\n\\n\\n None\\n \\n\\n\\n\\n\\nUndergraduate Enrollment\\n\\n\\n\\n 96%\\n \\nFull Time\\n\\n\\n\\n\\n 4%\\n", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/college_confidential.html"}
+{"id": "0a69dd533542-11", "text": "4%\\n \\nPart Time\\n\\n\\n\\n\\n\\n\\n\\n 94%\\n \\n\\n\\n\\n\\nResidency\\n\\n\\n\\n 6%\\n \\nIn State\\n\\n\\n\\n\\n 94%\\n \\nOut-of-State\\n\\n\\n\\n\\n\\n\\n\\n Data Source: IPEDs and Peterson\\'s Databases \u00a9 2022 Peterson\\'s LLC All rights reserved\\n \\n', lookup_str='', metadata={'source': 'https://www.collegeconfidential.com/colleges/brown-university/'}, lookup_index=0)]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/college_confidential.html"}
+{"id": "0a69dd533542-12", "text": "previous\nBiliBili\nnext\nGutenberg\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/college_confidential.html"}
+{"id": "e291a5972f4f-0", "text": ".ipynb\n.pdf\nPySpark DataFrame Loader\nPySpark DataFrame Loader#\nThis notebook goes over how to load data from a PySpark DataFrame.\n#!pip install pyspark\nfrom pyspark.sql import SparkSession\nspark = SparkSession.builder.getOrCreate()\nSetting default log level to \"WARN\".\nTo adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).\n23/05/31 14:08:33 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable\ndf = spark.read.csv('example_data/mlb_teams_2012.csv', header=True)\nfrom langchain.document_loaders import PySparkDataFrameLoader\nloader = PySparkDataFrameLoader(spark, df, page_content_column=\"Team\")\nloader.load()\n[Stage 8:> (0 + 1) / 1]\n[Document(page_content='Nationals', metadata={' \"Payroll (millions)\"': ' 81.34', ' \"Wins\"': ' 98'}),\n Document(page_content='Reds', metadata={' \"Payroll (millions)\"': ' 82.20', ' \"Wins\"': ' 97'}),\n Document(page_content='Yankees', metadata={' \"Payroll (millions)\"': ' 197.96', ' \"Wins\"': ' 95'}),\n Document(page_content='Giants', metadata={' \"Payroll (millions)\"': ' 117.62', ' \"Wins\"': ' 94'}),\n Document(page_content='Braves', metadata={' \"Payroll (millions)\"': ' 83.31', ' \"Wins\"': ' 94'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pyspark_dataframe.html"}
+{"id": "e291a5972f4f-1", "text": "Document(page_content='Athletics', metadata={' \"Payroll (millions)\"': ' 55.37', ' \"Wins\"': ' 94'}),\n Document(page_content='Rangers', metadata={' \"Payroll (millions)\"': ' 120.51', ' \"Wins\"': ' 93'}),\n Document(page_content='Orioles', metadata={' \"Payroll (millions)\"': ' 81.43', ' \"Wins\"': ' 93'}),\n Document(page_content='Rays', metadata={' \"Payroll (millions)\"': ' 64.17', ' \"Wins\"': ' 90'}),\n Document(page_content='Angels', metadata={' \"Payroll (millions)\"': ' 154.49', ' \"Wins\"': ' 89'}),\n Document(page_content='Tigers', metadata={' \"Payroll (millions)\"': ' 132.30', ' \"Wins\"': ' 88'}),\n Document(page_content='Cardinals', metadata={' \"Payroll (millions)\"': ' 110.30', ' \"Wins\"': ' 88'}),\n Document(page_content='Dodgers', metadata={' \"Payroll (millions)\"': ' 95.14', ' \"Wins\"': ' 86'}),\n Document(page_content='White Sox', metadata={' \"Payroll (millions)\"': ' 96.92', ' \"Wins\"': ' 85'}),\n Document(page_content='Brewers', metadata={' \"Payroll (millions)\"': ' 97.65', ' \"Wins\"': ' 83'}),\n Document(page_content='Phillies', metadata={' \"Payroll (millions)\"': ' 174.54', ' \"Wins\"': ' 81'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pyspark_dataframe.html"}
+{"id": "e291a5972f4f-2", "text": "Document(page_content='Diamondbacks', metadata={' \"Payroll (millions)\"': ' 74.28', ' \"Wins\"': ' 81'}),\n Document(page_content='Pirates', metadata={' \"Payroll (millions)\"': ' 63.43', ' \"Wins\"': ' 79'}),\n Document(page_content='Padres', metadata={' \"Payroll (millions)\"': ' 55.24', ' \"Wins\"': ' 76'}),\n Document(page_content='Mariners', metadata={' \"Payroll (millions)\"': ' 81.97', ' \"Wins\"': ' 75'}),\n Document(page_content='Mets', metadata={' \"Payroll (millions)\"': ' 93.35', ' \"Wins\"': ' 74'}),\n Document(page_content='Blue Jays', metadata={' \"Payroll (millions)\"': ' 75.48', ' \"Wins\"': ' 73'}),\n Document(page_content='Royals', metadata={' \"Payroll (millions)\"': ' 60.91', ' \"Wins\"': ' 72'}),\n Document(page_content='Marlins', metadata={' \"Payroll (millions)\"': ' 118.07', ' \"Wins\"': ' 69'}),\n Document(page_content='Red Sox', metadata={' \"Payroll (millions)\"': ' 173.18', ' \"Wins\"': ' 69'}),\n Document(page_content='Indians', metadata={' \"Payroll (millions)\"': ' 78.43', ' \"Wins\"': ' 68'}),\n Document(page_content='Twins', metadata={' \"Payroll (millions)\"': ' 94.08', ' \"Wins\"': ' 66'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pyspark_dataframe.html"}
+{"id": "e291a5972f4f-3", "text": "Document(page_content='Rockies', metadata={' \"Payroll (millions)\"': ' 78.06', ' \"Wins\"': ' 64'}),\n Document(page_content='Cubs', metadata={' \"Payroll (millions)\"': ' 88.19', ' \"Wins\"': ' 61'}),\n Document(page_content='Astros', metadata={' \"Payroll (millions)\"': ' 60.65', ' \"Wins\"': ' 55'})]\nprevious\nPsychic\nnext\nReadTheDocs Documentation\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pyspark_dataframe.html"}
+{"id": "5a67d8e58b49-0", "text": ".ipynb\n.pdf\nMediaWikiDump\nMediaWikiDump#\nMediaWiki XML Dumps contain the content of a wiki (wiki pages with all their revisions), without the site-related data. A XML dump does not create a full backup of the wiki database, the dump does not contain user accounts, images, edit logs, etc.\nThis covers how to load a MediaWiki XML dump file into a document format that we can use downstream.\nIt uses mwxml from mediawiki-utilities to dump and mwparserfromhell from earwig to parse MediaWiki wikicode.\nDump files can be obtained with dumpBackup.php or on the Special:Statistics page of the Wiki.\n#mediawiki-utilities supports XML schema 0.11 in unmerged branches\n!pip install -qU git+https://github.com/mediawiki-utilities/python-mwtypes@updates_schema_0.11\n#mediawiki-utilities mwxml has a bug, fix PR pending\n!pip install -qU git+https://github.com/gdedrouas/python-mwxml@xml_format_0.11\n!pip install -qU mwparserfromhell\nfrom langchain.document_loaders import MWDumpLoader\nloader = MWDumpLoader(\"example_data/testmw_pages_current.xml\", encoding=\"utf8\")\ndocuments = loader.load()\nprint (f'You have {len(documents)} document(s) in your data ')\nYou have 177 document(s) in your data \ndocuments[:5]\n[Document(page_content='\\t\\n\\t\\n\\tArtist\\n\\tReleased\\n\\tRecorded\\n\\tLength\\n\\tLabel\\n\\tProducer', metadata={'source': 'Album'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/mediawikidump.html"}
+{"id": "5a67d8e58b49-1", "text": "Document(page_content='{| class=\"article-table plainlinks\" style=\"width:100%;\"\\n|- style=\"font-size:18px;\"\\n! style=\"padding:0px;\" | Template documentation\\n|-\\n| Note: portions of the template sample may not be visible without values provided.\\n|-\\n| View or edit this documentation. (About template documentation)\\n|-\\n| Editors can experiment in this template\\'s [ sandbox] and [ test case] pages.\\n|}Category:Documentation templates', metadata={'source': 'Documentation'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/mediawikidump.html"}
+{"id": "5a67d8e58b49-2", "text": "Document(page_content='Description\\nThis template is used to insert descriptions on template pages.\\n\\nSyntax\\nAdd at the end of the template page.\\n\\nAdd to transclude an alternative page from the /doc subpage.\\n\\nUsage\\n\\nOn the Template page\\nThis is the normal format when used:\\n\\nTEMPLATE CODE\\nAny categories to be inserted into articles by the template\\n{{Documentation}}\\n\\nIf your template is not a completed div or table, you may need to close the tags just before {{Documentation}} is inserted (within the noinclude tags).\\n\\nA line break right before {{Documentation}} can also be useful as it helps prevent the documentation template \"running into\" previous code.\\n\\nOn the documentation page\\nThe documentation page is usually located on the /doc subpage for a template, but a different page can be specified with the first parameter of the template (see Syntax).\\n\\nNormally, you will want to write something like the following on the documentation page:\\n\\n==Description==\\nThis template is used to do something.\\n\\n==Syntax==\\nType {{t|templatename}}
somewhere.\\n\\n==Samples==\\n{{templatename|input}}
\\n\\nresults in...\\n\\n{{templatename|input}}\\n\\nAny categories for the template itself\\n[[Category:Template documentation]]\\n\\nUse any or all of the above description/syntax/sample output sections. You may also want to add \"see also\" or other sections.\\n\\nNote that the above example also uses the Template:T template.\\n\\nCategory:Documentation templatesCategory:Template documentation', metadata={'source':", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/mediawikidump.html"}
+{"id": "5a67d8e58b49-3", "text": "the Template:T template.\\n\\nCategory:Documentation templatesCategory:Template documentation', metadata={'source': 'Documentation/doc'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/mediawikidump.html"}
+{"id": "5a67d8e58b49-4", "text": "Document(page_content='Description\\nA template link with a variable number of parameters (0-20).\\n\\nSyntax\\n \\n\\nSource\\nImproved version not needing t/piece subtemplate developed on Templates wiki see the list of authors. Copied here via CC-By-SA 3.0 license.\\n\\nExample\\n\\nCategory:General wiki templates\\nCategory:Template documentation', metadata={'source': 'T/doc'}),\n Document(page_content='\\t\\n\\t\\t \\n\\t\\n\\t\\t Aliases\\n\\t Relatives\\n\\t Affiliation\\n Occupation\\n \\n Biographical information\\n Marital status\\n \\tDate of birth\\n Place of birth\\n Date of death\\n Place of death\\n \\n Physical description\\n Species\\n Gender\\n Height\\n Weight\\n Eye color\\n\\t\\n Appearances\\n Portrayed by\\n Appears in\\n Debut\\n ', metadata={'source': 'Character'})]\nprevious\nIMSDb\nnext\nWikipedia\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/mediawikidump.html"}
+{"id": "93bd4766c5cb-0", "text": ".ipynb\n.pdf\nJoplin\nJoplin#\nJoplin is an open source note-taking app. Capture your thoughts and securely access them from any device.\nThis notebook covers how to load documents from a Joplin database.\nJoplin has a REST API for accessing its local database. This loader uses the API to retrieve all notes in the database and their metadata. This requires an access token that can be obtained from the app by following these steps:\nOpen the Joplin app. The app must stay open while the documents are being loaded.\nGo to settings / options and select \u201cWeb Clipper\u201d.\nMake sure that the Web Clipper service is enabled.\nUnder \u201cAdvanced Options\u201d, copy the authorization token.\nYou may either initialize the loader directly with the access token, or store it in the environment variable JOPLIN_ACCESS_TOKEN.\nAn alternative to this approach is to export the Joplin\u2019s note database to Markdown files (optionally, with Front Matter metadata) and use a Markdown loader, such as ObsidianLoader, to load them.\nfrom langchain.document_loaders import JoplinLoader\nloader = JoplinLoader(access_token=\"\")\ndocs = loader.load()\nprevious\nIugu\nnext\nMicrosoft OneDrive\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/joplin.html"}
+{"id": "be8912faa00f-0", "text": ".ipynb\n.pdf\nMicrosoft OneDrive\n Contents \nPrerequisites\n\ud83e\uddd1 Instructions for ingesting your documents from OneDrive\n\ud83d\udd11 Authentication\n\ud83d\uddc2\ufe0f Documents loader\n\ud83d\udcd1 Loading documents from a OneDrive Directory\n\ud83d\udcd1 Loading documents from a list of Documents IDs\nMicrosoft OneDrive#\nMicrosoft OneDrive (formerly SkyDrive) is a file hosting service operated by Microsoft.\nThis notebook covers how to load documents from OneDrive. Currently, only docx, doc, and pdf files are supported.\nPrerequisites#\nRegister an application with the Microsoft identity platform instructions.\nWhen registration finishes, the Azure portal displays the app registration\u2019s Overview pane. You see the Application (client) ID. Also called the client ID, this value uniquely identifies your application in the Microsoft identity platform.\nDuring the steps you will be following at item 1, you can set the redirect URI as http://localhost:8000/callback\nDuring the steps you will be following at item 1, generate a new password (client_secret) under\u00a0Application Secrets\u00a0section.\nFollow the instructions at this document to add the following SCOPES (offline_access and Files.Read.All) to your application.\nVisit the Graph Explorer Playground to obtain your OneDrive ID. The first step is to ensure you are logged in with the account associated your OneDrive account. Then you need to make a request to https://graph.microsoft.com/v1.0/me/drive and the response will return a payload with a field id that holds the ID of your OneDrive account.\nYou need to install the o365 package using the command pip install o365.\nAt the end of the steps you must have the following values:\nCLIENT_ID\nCLIENT_SECRET\nDRIVE_ID\n\ud83e\uddd1 Instructions for ingesting your documents from OneDrive#\n\ud83d\udd11 Authentication#", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/microsoft_onedrive.html"}
+{"id": "be8912faa00f-1", "text": "\ud83e\uddd1 Instructions for ingesting your documents from OneDrive#\n\ud83d\udd11 Authentication#\nBy default, the OneDriveLoader expects that the values of CLIENT_ID and CLIENT_SECRET must be stored as environment variables named O365_CLIENT_ID and O365_CLIENT_SECRET respectively. You could pass those environment variables through a .env file at the root of your application or using the following command in your script.\nos.environ['O365_CLIENT_ID'] = \"YOUR CLIENT ID\"\nos.environ['O365_CLIENT_SECRET'] = \"YOUR CLIENT SECRET\"\nThis loader uses an authentication called on behalf of a user. It is a 2 step authentication with user consent. When you instantiate the loader, it will call will print a url that the user must visit to give consent to the app on the required permissions. The user must then visit this url and give consent to the application. Then the user must copy the resulting page url and paste it back on the console. The method will then return True if the login attempt was succesful.\nfrom langchain.document_loaders.onedrive import OneDriveLoader\nloader = OneDriveLoader(drive_id=\"YOUR DRIVE ID\")\nOnce the authentication has been done, the loader will store a token (o365_token.txt) at ~/.credentials/ folder. This token could be used later to authenticate without the copy/paste steps explained earlier. To use this token for authentication, you need to change the auth_with_token parameter to True in the instantiation of the loader.\nfrom langchain.document_loaders.onedrive import OneDriveLoader\nloader = OneDriveLoader(drive_id=\"YOUR DRIVE ID\", auth_with_token=True)\n\ud83d\uddc2\ufe0f Documents loader#\n\ud83d\udcd1 Loading documents from a OneDrive Directory#\nOneDriveLoader can load documents from a specific folder within your OneDrive. For instance, you want to load all documents that are stored at Documents/clients folder within your OneDrive.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/microsoft_onedrive.html"}
+{"id": "be8912faa00f-2", "text": "from langchain.document_loaders.onedrive import OneDriveLoader\nloader = OneDriveLoader(drive_id=\"YOUR DRIVE ID\", folder_path=\"Documents/clients\", auth_with_token=True)\ndocuments = loader.load()\n\ud83d\udcd1 Loading documents from a list of Documents IDs#\nAnother possibility is to provide a list of object_id for each document you want to load. For that, you will need to query the Microsoft Graph API to find all the documents ID that you are interested in. This link provides a list of endpoints that will be helpful to retrieve the documents ID.\nFor instance, to retrieve information about all objects that are stored at the root of the Documents folder, you need make a request to: https://graph.microsoft.com/v1.0/drives/{YOUR DRIVE ID}/root/children. Once you have the list of IDs that you are interested in, then you can instantiate the loader with the following parameters.\nfrom langchain.document_loaders.onedrive import OneDriveLoader\nloader = OneDriveLoader(drive_id=\"YOUR DRIVE ID\", object_ids=[\"ID_1\", \"ID_2\"], auth_with_token=True)\ndocuments = loader.load()\nprevious\nJoplin\nnext\nModern Treasury\n Contents\n \nPrerequisites\n\ud83e\uddd1 Instructions for ingesting your documents from OneDrive\n\ud83d\udd11 Authentication\n\ud83d\uddc2\ufe0f Documents loader\n\ud83d\udcd1 Loading documents from a OneDrive Directory\n\ud83d\udcd1 Loading documents from a list of Documents IDs\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/microsoft_onedrive.html"}
+{"id": "480334df20ac-0", "text": ".ipynb\n.pdf\nNotion DB 1/2\n Contents \n\ud83e\uddd1 Instructions for ingesting your own dataset\nNotion DB 1/2#\nNotion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.\nThis notebook covers how to load documents from a Notion database dump.\nIn order to get this notion dump, follow these instructions:\n\ud83e\uddd1 Instructions for ingesting your own dataset#\nExport your dataset from Notion. You can do this by clicking on the three dots in the upper right hand corner and then clicking Export.\nWhen exporting, make sure to select the Markdown & CSV format option.\nThis will produce a .zip file in your Downloads folder. Move the .zip file into this repository.\nRun the following command to unzip the zip file (replace the Export... with your own file name as needed).\nunzip Export-d3adfe0f-3131-4bf3-8987-a52017fc1bae.zip -d Notion_DB\nRun the following command to ingest the data.\nfrom langchain.document_loaders import NotionDirectoryLoader\nloader = NotionDirectoryLoader(\"Notion_DB\")\ndocs = loader.load()\nprevious\nNotion DB 2/2\nnext\nObsidian\n Contents\n \n\ud83e\uddd1 Instructions for ingesting your own dataset\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/notion.html"}
+{"id": "52e7d3825686-0", "text": ".ipynb\n.pdf\nConfluence\n Contents \nConfluence\nExamples\nUsername and Password or Username and API Token (Atlassian Cloud only)\nPersonal Access Token (Server/On-Prem only)\nConfluence#\nConfluence is a wiki collaboration platform that saves and organizes all of the project-related material. Confluence is a knowledge base that primarily handles content management activities.\nA loader for Confluence pages.\nThis currently supports username/api_key, Oauth2 login. Additionally, on-prem installations also support token authentication.\nSpecify a list page_id-s and/or space_key to load in the corresponding pages into Document objects, if both are specified the union of both sets will be returned.\nYou can also specify a boolean include_attachments to include attachments, this is set to False by default, if set to True all attachments will be downloaded and ConfluenceReader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG, SVG, Word and Excel.\nHint: space_key and page_id can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces//pages/\nBefore using ConfluenceLoader make sure you have the latest version of the atlassian-python-api package installed:\n#!pip install atlassian-python-api\nExamples#\nUsername and Password or Username and API Token (Atlassian Cloud only)#\nThis example authenticates using either a username and password or, if you\u2019re connecting to an Atlassian Cloud hosted version of Confluence, a username and an API Token.\nYou can generate an API token at: https://id.atlassian.com/manage-profile/security/api-tokens.\nThe limit parameter specifies how many documents will be retrieved in a single call, not how many documents will be retrieved in total.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/confluence.html"}
+{"id": "52e7d3825686-1", "text": "By default the code will return up to 1000 documents in 50 documents batches. To control the total number of documents use the max_pages parameter.\nPlese note the maximum value for the limit parameter in the atlassian-python-api package is currently 100.\nfrom langchain.document_loaders import ConfluenceLoader\nloader = ConfluenceLoader(\n url=\"https://yoursite.atlassian.com/wiki\",\n username=\"me\",\n api_key=\"12345\"\n)\ndocuments = loader.load(space_key=\"SPACE\", include_attachments=True, limit=50)\nPersonal Access Token (Server/On-Prem only)#\nThis method is valid for the Data Center/Server on-prem edition only.\nFor more information on how to generate a Personal Access Token (PAT) check the official Confluence documentation at: https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html.\nWhen using a PAT you provide only the token value, you cannot provide a username.\nPlease note that ConfluenceLoader will run under the permissions of the user that generated the PAT and will only be able to load documents for which said user has access to.\nfrom langchain.document_loaders import ConfluenceLoader\nloader = ConfluenceLoader(\n url=\"https://yoursite.atlassian.com/wiki\",\n token=\"12345\"\n)\ndocuments = loader.load(space_key=\"SPACE\", include_attachments=True, limit=50, max_pages=50)\nprevious\nChatGPT Data\nnext\nDiffbot\n Contents\n \nConfluence\nExamples\nUsername and Password or Username and API Token (Atlassian Cloud only)\nPersonal Access Token (Server/On-Prem only)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/confluence.html"}
+{"id": "538ac60f67f6-0", "text": ".ipynb\n.pdf\nAWS S3 Directory\n Contents \nSpecifying a prefix\nAWS S3 Directory#\nAmazon Simple Storage Service (Amazon S3) is an object storage service\nAWS S3 Directory\nThis covers how to load document objects from an AWS S3 Directory object.\n#!pip install boto3\nfrom langchain.document_loaders import S3DirectoryLoader\nloader = S3DirectoryLoader(\"testing-hwc\")\nloader.load()\nSpecifying a prefix#\nYou can also specify a prefix for more finegrained control over what files to load.\nloader = S3DirectoryLoader(\"testing-hwc\", prefix=\"fake\")\nloader.load()\n[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]\nprevious\nApify Dataset\nnext\nAWS S3 File\n Contents\n \nSpecifying a prefix\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/aws_s3_directory.html"}
+{"id": "e441e8e15411-0", "text": ".ipynb\n.pdf\nEverNote\nEverNote#\nEverNote is intended for archiving and creating notes in which photos, audio and saved web content can be embedded. Notes are stored in virtual \u201cnotebooks\u201d and can be tagged, annotated, edited, searched, and exported.\nThis notebook shows how to load an Evernote export file (.enex) from disk.\nA document will be created for each note in the export.\n# lxml and html2text are required to parse EverNote notes\n# !pip install lxml\n# !pip install html2text\nfrom langchain.document_loaders import EverNoteLoader\n# By default all notes are combined into a single Document\nloader = EverNoteLoader(\"example_data/testing.enex\")\nloader.load()\n[Document(page_content='testing this\\n\\nwhat happens?\\n\\nto the world?**Jan - March 2022**', metadata={'source': 'example_data/testing.enex'})]\n# It's likely more useful to return a Document for each note\nloader = EverNoteLoader(\"example_data/testing.enex\", load_single_document=False)\nloader.load()\n[Document(page_content='testing this\\n\\nwhat happens?\\n\\nto the world?', metadata={'title': 'testing', 'created': time.struct_time(tm_year=2023, tm_mon=2, tm_mday=9, tm_hour=3, tm_min=47, tm_sec=46, tm_wday=3, tm_yday=40, tm_isdst=-1), 'updated': time.struct_time(tm_year=2023, tm_mon=2, tm_mday=9, tm_hour=3, tm_min=53, tm_sec=28, tm_wday=3, tm_yday=40, tm_isdst=-1), 'note-attributes.author': 'Harrison Chase', 'source': 'example_data/testing.enex'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/evernote.html"}
+{"id": "e441e8e15411-1", "text": "Document(page_content='**Jan - March 2022**', metadata={'title': 'Summer Training Program', 'created': time.struct_time(tm_year=2022, tm_mon=12, tm_mday=27, tm_hour=1, tm_min=59, tm_sec=48, tm_wday=1, tm_yday=361, tm_isdst=-1), 'note-attributes.author': 'Mike McGarry', 'note-attributes.source': 'mobile.iphone', 'source': 'example_data/testing.enex'})]\nprevious\nEPub\nnext\nMicrosoft Excel\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/evernote.html"}
+{"id": "446674468443-0", "text": ".ipynb\n.pdf\nGoogle Cloud Storage Directory\n Contents \nSpecifying a prefix\nGoogle Cloud Storage Directory#\nGoogle Cloud Storage is a managed service for storing unstructured data.\nThis covers how to load document objects from an Google Cloud Storage (GCS) directory (bucket).\n# !pip install google-cloud-storage\nfrom langchain.document_loaders import GCSDirectoryLoader\nloader = GCSDirectoryLoader(project_name=\"aist\", bucket=\"testing-hwc\")\nloader.load()\n/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a \"quota exceeded\" or \"API not enabled\" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/\n warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)\n/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a \"quota exceeded\" or \"API not enabled\" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/\n warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_cloud_storage_directory.html"}
+{"id": "446674468443-1", "text": "warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)\n[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpz37njh7u/fake.docx'}, lookup_index=0)]\nSpecifying a prefix#\nYou can also specify a prefix for more finegrained control over what files to load.\nloader = GCSDirectoryLoader(project_name=\"aist\", bucket=\"testing-hwc\", prefix=\"fake\")\nloader.load()\n/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a \"quota exceeded\" or \"API not enabled\" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/\n warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)\n/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a \"quota exceeded\" or \"API not enabled\" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/\n warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_cloud_storage_directory.html"}
+{"id": "446674468443-2", "text": "warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)\n[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpylg6291i/fake.docx'}, lookup_index=0)]\nprevious\nGoogle BigQuery\nnext\nGoogle Cloud Storage File\n Contents\n \nSpecifying a prefix\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_cloud_storage_directory.html"}
+{"id": "3192fc896f7a-0", "text": ".ipynb\n.pdf\nTwitter\nTwitter#\nTwitter is an online social media and social networking service.\nThis loader fetches the text from the Tweets of a list of Twitter users, using the tweepy Python package.\nYou must initialize the loader with your Twitter API token, and you need to pass in the Twitter username you want to extract.\nfrom langchain.document_loaders import TwitterTweetLoader\n#!pip install tweepy\nloader = TwitterTweetLoader.from_bearer_token(\n oauth2_bearer_token=\"YOUR BEARER TOKEN\",\n twitter_users=['elonmusk'],\n number_tweets=50, # Default value is 100\n)\n# Or load from access token and consumer keys\n# loader = TwitterTweetLoader.from_secrets(\n# access_token='YOUR ACCESS TOKEN',\n# access_token_secret='YOUR ACCESS TOKEN SECRET',\n# consumer_key='YOUR CONSUMER KEY',\n# consumer_secret='YOUR CONSUMER SECRET',\n# twitter_users=['elonmusk'],\n# number_tweets=50,\n# )\ndocuments = loader.load()\ndocuments[:5]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html"}
+{"id": "3192fc896f7a-1", "text": "[Document(page_content='@MrAndyNgo @REI One store after another shutting down', metadata={'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ng\u00f4 \ud83c\udff3\ufe0f\\u200d\ud83c\udf08', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices':", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html"}
+{"id": "3192fc896f7a-2", "text": "'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': 'Twitter for iPhone', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333',", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html"}
+{"id": "3192fc896f7a-3", "text": "'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html"}
+{"id": "3192fc896f7a-4", "text": "Document(page_content='@KanekoaTheGreat @joshrogin @glennbeck Large ships are fundamentally vulnerable to ballistic (hypersonic) missiles', metadata={'created_at': 'Tue Apr 18 03:43:25 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ng\u00f4 \ud83c\udff3\ufe0f\\u200d\ud83c\udf08', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846,", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html"}
+{"id": "3192fc896f7a-5", "text": "'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': 'Twitter for iPhone', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6',", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html"}
+{"id": "3192fc896f7a-6", "text": "'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html"}
+{"id": "3192fc896f7a-7", "text": "Document(page_content='@KanekoaTheGreat The Golden Rule', metadata={'created_at': 'Tue Apr 18 03:37:17 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ng\u00f4 \ud83c\udff3\ufe0f\\u200d\ud83c\udf08', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11,", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html"}
+{"id": "3192fc896f7a-8", "text": "16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': 'Twitter for iPhone', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333',", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html"}
+{"id": "3192fc896f7a-9", "text": "'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html"}
+{"id": "3192fc896f7a-10", "text": "Document(page_content='@KanekoaTheGreat \ud83e\uddd0', metadata={'created_at': 'Tue Apr 18 03:35:48 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ng\u00f4 \ud83c\udff3\ufe0f\\u200d\ud83c\udf08', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11,", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html"}
+{"id": "3192fc896f7a-11", "text": "16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': 'Twitter for iPhone', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333',", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html"}
+{"id": "3192fc896f7a-12", "text": "'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html"}
+{"id": "3192fc896f7a-13", "text": "Document(page_content='@TRHLofficial What\u2019s he talking about and why is it sponsored by Erik\u2019s son?', metadata={'created_at': 'Tue Apr 18 03:32:17 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ng\u00f4 \ud83c\udff3\ufe0f\\u200d\ud83c\udf08', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846',", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html"}
+{"id": "3192fc896f7a-14", "text": "'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': 'Twitter for iPhone', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333',", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html"}
+{"id": "3192fc896f7a-15", "text": "'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}})]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html"}
+{"id": "3192fc896f7a-16", "text": "previous\n2Markdown\nnext\nText Splitters\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html"}
+{"id": "10cdb1a339d4-0", "text": ".ipynb\n.pdf\nAZLyrics\nAZLyrics#\nAZLyrics is a large, legal, every day growing collection of lyrics.\nThis covers how to load AZLyrics webpages into a document format that we can use downstream.\nfrom langchain.document_loaders import AZLyricsLoader\nloader = AZLyricsLoader(\"https://www.azlyrics.com/lyrics/mileycyrus/flowers.html\")\ndata = loader.load()\ndata", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/azlyrics.html"}
+{"id": "10cdb1a339d4-1", "text": "[Document(page_content=\"Miley Cyrus - Flowers Lyrics | AZLyrics.com\\n\\r\\nWe were good, we were gold\\nKinda dream that can't be sold\\nWe were right till we weren't\\nBuilt a home and watched it burn\\n\\nI didn't wanna leave you\\nI didn't wanna lie\\nStarted to cry but then remembered I\\n\\nI can buy myself flowers\\nWrite my name in the sand\\nTalk to myself for hours\\nSay things you don't understand\\nI can take myself dancing\\nAnd I can hold my own hand\\nYeah, I can love me better than you can\\n\\nCan love me better\\nI can love me better, baby\\nCan love me better\\nI can love me better, baby\\n\\nPaint my nails, cherry red\\nMatch the roses that you left\\nNo remorse, no regret\\nI forgive every word you said\\n\\nI didn't wanna leave you, baby\\nI didn't wanna fight\\nStarted to cry but then remembered I\\n\\nI can buy myself flowers\\nWrite my name in the sand\\nTalk to myself for hours, yeah\\nSay things you don't understand\\nI can take myself dancing\\nAnd I can hold my own hand\\nYeah, I can love me better than you can\\n\\nCan love me better\\nI can love me better, baby\\nCan love me better\\nI can love me better, baby\\nCan love me better\\nI can love me better, baby\\nCan love me better\\nI\\n\\nI didn't wanna wanna leave you\\nI didn't wanna fight\\nStarted to cry but then remembered I\\n\\nI can buy myself flowers\\nWrite my name in the sand\\nTalk to myself for hours (Yeah)\\nSay things you don't understand\\nI can take myself dancing\\nAnd I can hold my own hand\\nYeah, I can love me better than\\nYeah, I can love me better than you can, uh\\n\\nCan love me", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/azlyrics.html"}
+{"id": "10cdb1a339d4-2", "text": "better than\\nYeah, I can love me better than you can, uh\\n\\nCan love me better\\nI can love me better, baby\\nCan love me better\\nI can love me better, baby (Than you can)\\nCan love me better\\nI can love me better, baby\\nCan love me better\\nI\\n\", lookup_str='', metadata={'source': 'https://www.azlyrics.com/lyrics/mileycyrus/flowers.html'}, lookup_index=0)]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/azlyrics.html"}
+{"id": "10cdb1a339d4-3", "text": "previous\nArxiv\nnext\nBiliBili\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/azlyrics.html"}
+{"id": "9cb973bbcf64-0", "text": ".ipynb\n.pdf\nSpreedly\nSpreedly#\nSpreedly is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at Spreedly, allowing you to independently store a card and then pass that card to different end points based on your business requirements.\nThis notebook covers how to load data from the Spreedly REST API into a format that can be ingested into LangChain, along with example usage for vectorization.\nNote: this notebook assumes the following packages are installed: openai, chromadb, and tiktoken.\nimport os\nfrom langchain.document_loaders import SpreedlyLoader\nfrom langchain.indexes import VectorstoreIndexCreator\nSpreedly API requires an access token, which can be found inside the Spreedly Admin Console.\nThis document loader does not currently support pagination, nor access to more complex objects which require additional parameters. It also requires a resource option which defines what objects you want to load.\nFollowing resources are available:\ngateways_options: Documentation\ngateways: Documentation\nreceivers_options: Documentation\nreceivers: Documentation\npayment_methods: Documentation\ncertificates: Documentation\ntransactions: Documentation\nenvironments: Documentation\nspreedly_loader = SpreedlyLoader(os.environ[\"SPREEDLY_ACCESS_TOKEN\"], \"gateways_options\")\n# Create a vectorstore retriver from the loader\n# see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details\nindex = VectorstoreIndexCreator().from_loaders([spreedly_loader])\nspreedly_doc_retriever = index.vectorstore.as_retriever()\nUsing embedded DuckDB without persistence: data will be transient", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html"}
+{"id": "9cb973bbcf64-1", "text": "Using embedded DuckDB without persistence: data will be transient\n# Test the retriever\nspreedly_doc_retriever.get_relevant_documents(\"CRC\")\n[Document(page_content='installment_grace_period_duration\\nreference_data_code\\ninvoice_number\\ntax_management_indicator\\noriginal_amount\\ninvoice_amount\\nvat_tax_rate\\nmobile_remote_payment_type\\ngratuity_amount\\nmdd_field_1\\nmdd_field_2\\nmdd_field_3\\nmdd_field_4\\nmdd_field_5\\nmdd_field_6\\nmdd_field_7\\nmdd_field_8\\nmdd_field_9\\nmdd_field_10\\nmdd_field_11\\nmdd_field_12\\nmdd_field_13\\nmdd_field_14\\nmdd_field_15\\nmdd_field_16\\nmdd_field_17\\nmdd_field_18\\nmdd_field_19\\nmdd_field_20\\nsupported_countries: US\\nAE\\nBR\\nCA\\nCN\\nDK\\nFI\\nFR\\nDE\\nIN\\nJP\\nMX\\nNO\\nSE\\nGB\\nSG\\nLB\\nPK\\nsupported_cardtypes: visa\\nmaster\\namerican_express\\ndiscover\\ndiners_club\\njcb\\ndankort\\nmaestro\\nelo\\nregions: asia_pacific\\neurope\\nlatin_america\\nnorth_america\\nhomepage: http://www.cybersource.com\\ndisplay_api_url: https://ics2wsa.ic3.com/commerce/1.x/transactionProcessor\\ncompany_name: CyberSource', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html"}
+{"id": "9cb973bbcf64-2", "text": "Document(page_content='BG\\nBH\\nBI\\nBJ\\nBM\\nBN\\nBO\\nBR\\nBS\\nBT\\nBW\\nBY\\nBZ\\nCA\\nCC\\nCF\\nCH\\nCK\\nCL\\nCM\\nCN\\nCO\\nCR\\nCV\\nCX\\nCY\\nCZ\\nDE\\nDJ\\nDK\\nDO\\nDZ\\nEC\\nEE\\nEG\\nEH\\nES\\nET\\nFI\\nFJ\\nFK\\nFM\\nFO\\nFR\\nGA\\nGB\\nGD\\nGE\\nGF\\nGG\\nGH\\nGI\\nGL\\nGM\\nGN\\nGP\\nGQ\\nGR\\nGT\\nGU\\nGW\\nGY\\nHK\\nHM\\nHN\\nHR\\nHT\\nHU\\nID\\nIE\\nIL\\nIM\\nIN\\nIO\\nIS\\nIT\\nJE\\nJM\\nJO\\nJP\\nKE\\nKG\\nKH\\nKI\\nKM\\nKN\\nKR\\nKW\\nKY\\nKZ\\nLA\\nLC\\nLI\\nLK\\n", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html"}
+{"id": "9cb973bbcf64-3", "text": "KZ\\nLA\\nLC\\nLI\\nLK\\nLS\\nLT\\nLU\\nLV\\nMA\\nMC\\nMD\\nME\\nMG\\nMH\\nMK\\nML\\nMN\\nMO\\nMP\\nMQ\\nMR\\nMS\\nMT\\nMU\\nMV\\nMW\\nMX\\nMY\\nMZ\\nNA\\nNC\\nNE\\nNF\\nNG\\nNI\\nNL\\nNO\\nNP\\nNR\\nNU\\nNZ\\nOM\\nPA\\nPE\\nPF\\nPH\\nPK\\nPL\\nPN\\nPR\\nPT\\nPW\\nPY\\nQA\\nRE\\nRO\\nRS\\nRU\\nRW\\nSA\\nSB\\nSC\\nSE\\nSG\\nSI\\nSK\\nSL\\nSM\\nSN\\nST\\nSV\\nSZ\\nTC\\nTD\\nTF\\nTG\\nTH\\nTJ\\nTK\\nTM\\nTO\\nTR\\nTT\\nTV\\nTW\\nTZ\\nUA\\nUG\\nUS\\nUY\\nUZ\\nVA\\nVC\\nVE\\nVI\\nVN\\nVU\\nWF\\nWS\\n", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html"}
+{"id": "9cb973bbcf64-4", "text": "VI\\nVN\\nVU\\nWF\\nWS\\nYE\\nYT\\nZA\\nZM\\nsupported_cardtypes:", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html"}
+{"id": "9cb973bbcf64-5", "text": "visa\\nmaster\\namerican_express\\ndiscover\\njcb\\nmaestro\\nelo\\nnaranja\\ncabal\\nunionpay\\nregions: asia_pacific\\neurope\\nmiddle_east\\nnorth_america\\nhomepage: http://worldpay.com\\ndisplay_api_url: https://secure.worldpay.com/jsp/merchant/xml/paymentService.jsp\\ncompany_name: WorldPay', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html"}
+{"id": "9cb973bbcf64-6", "text": "Document(page_content='gateway_specific_fields: receipt_email\\nradar_session_id\\nskip_radar_rules\\napplication_fee\\nstripe_account\\nmetadata\\nidempotency_key\\nreason\\nrefund_application_fee\\nrefund_fee_amount\\nreverse_transfer\\naccount_id\\ncustomer_id\\nvalidate\\nmake_default\\ncancellation_reason\\ncapture_method\\nconfirm\\nconfirmation_method\\ncustomer\\ndescription\\nmoto\\noff_session\\non_behalf_of\\npayment_method_types\\nreturn_email\\nreturn_url\\nsave_payment_method\\nsetup_future_usage\\nstatement_descriptor\\nstatement_descriptor_suffix\\ntransfer_amount\\ntransfer_destination\\ntransfer_group\\napplication_fee_amount\\nrequest_three_d_secure\\nerror_on_requires_action\\nnetwork_transaction_id\\nclaim_without_transaction_id\\nfulfillment_date\\nevent_type\\nmodal_challenge\\nidempotent_request\\nmerchant_reference\\ncustomer_reference\\nshipping_address_zip\\nshipping_from_zip\\nshipping_amount\\nline_items\\nsupported_countries: AE\\nAT\\nAU\\nBE\\nBG\\nBR\\nCA\\nCH\\nCY\\nCZ\\nDE\\nDK\\nEE\\nES\\nFI\\nFR\\nGB\\nGR\\nHK\\nHU\\nIE\\nIN\\nIT\\nJP\\nLT\\nLU\\nLV\\nMT\\nMX\\nMY\\nNL\\nNO\\nNZ\\nPL\\nPT\\nRO\\nSE\\nSG\\nSI\\nSK\\nUS\\nsupported_cardtypes: visa', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html"}
+{"id": "9cb973bbcf64-7", "text": "Document(page_content='mdd_field_57\\nmdd_field_58\\nmdd_field_59\\nmdd_field_60\\nmdd_field_61\\nmdd_field_62\\nmdd_field_63\\nmdd_field_64\\nmdd_field_65\\nmdd_field_66\\nmdd_field_67\\nmdd_field_68\\nmdd_field_69\\nmdd_field_70\\nmdd_field_71\\nmdd_field_72\\nmdd_field_73\\nmdd_field_74\\nmdd_field_75\\nmdd_field_76\\nmdd_field_77\\nmdd_field_78\\nmdd_field_79\\nmdd_field_80\\nmdd_field_81\\nmdd_field_82\\nmdd_field_83\\nmdd_field_84\\nmdd_field_85\\nmdd_field_86\\nmdd_field_87\\nmdd_field_88\\nmdd_field_89\\nmdd_field_90\\nmdd_field_91\\nmdd_field_92\\nmdd_field_93\\nmdd_field_94\\nmdd_field_95\\nmdd_field_96\\nmdd_field_97\\nmdd_field_98\\nmdd_field_99\\nmdd_field_100\\nsupported_countries: US\\nAE\\nBR\\nCA\\nCN\\nDK\\nFI\\nFR\\nDE\\nIN\\nJP\\nMX\\nNO\\nSE\\nGB\\nSG\\nLB\\nPK\\nsupported_cardtypes: visa\\nmaster\\namerican_express\\ndiscover\\ndiners_club\\njcb\\nmaestro\\nelo\\nunion_pay\\ncartes_bancaires\\nmada\\nregions: asia_pacific\\neurope\\nlatin_america\\nnorth_america\\nhomepage: http://www.cybersource.com\\ndisplay_api_url: https://api.cybersource.com\\ncompany_name: CyberSource REST',", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html"}
+{"id": "9cb973bbcf64-8", "text": "https://api.cybersource.com\\ncompany_name: CyberSource REST', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'})]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html"}
+{"id": "9cb973bbcf64-9", "text": "previous\nSnowflake\nnext\nStripe\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html"}
+{"id": "8acf064525be-0", "text": ".ipynb\n.pdf\nGoogle Drive\n Contents \nPrerequisites\n\ud83e\uddd1 Instructions for ingesting your Google Docs data\nGoogle Drive#\nGoogle Drive is a file storage and synchronization service developed by Google.\nThis notebook covers how to load documents from Google Drive. Currently, only Google Docs are supported.\nPrerequisites#\nCreate a Google Cloud project or use an existing project\nEnable the Google Drive API\nAuthorize credentials for desktop app\npip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib\n\ud83e\uddd1 Instructions for ingesting your Google Docs data#\nBy default, the GoogleDriveLoader expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the credentials_path keyword argument. Same thing with token.json - token_path. Note that token.json will be created automatically the first time you use the loader.\nGoogleDriveLoader can load from a list of Google Docs document ids or a folder id. You can obtain your folder and document id from the URL:\nFolder: https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5 -> folder id is \"1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5\"\nDocument: https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit -> document id is \"1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw\"\n!pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib\nfrom langchain.document_loaders import GoogleDriveLoader\nloader = GoogleDriveLoader(", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_drive.html"}
+{"id": "8acf064525be-1", "text": "from langchain.document_loaders import GoogleDriveLoader\nloader = GoogleDriveLoader(\n folder_id=\"1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5\",\n # Optional: configure whether to recursively fetch files from subfolders. Defaults to False.\n recursive=False\n)\ndocs = loader.load()\nWhen you pass a folder_id by default all files of type document, sheet and pdf are loaded. You can modify this behaviour by passing a file_types argument\nloader = GoogleDriveLoader(\n folder_id=\"1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5\",\n file_types=[\"document\", \"sheet\"]\n recursive=False\n)\nprevious\nGoogle Cloud Storage File\nnext\nImage captions\n Contents\n \nPrerequisites\n\ud83e\uddd1 Instructions for ingesting your Google Docs data\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_drive.html"}
+{"id": "15cc67e8df6a-0", "text": ".ipynb\n.pdf\nMarkdown\n Contents \nRetain Elements\nMarkdown#\nMarkdown is a lightweight markup language for creating formatted text using a plain-text editor.\nThis covers how to load markdown documents into a document format that we can use downstream.\n# !pip install unstructured > /dev/null\nfrom langchain.document_loaders import UnstructuredMarkdownLoader\nmarkdown_path = \"../../../../../README.md\"\nloader = UnstructuredMarkdownLoader(markdown_path)\ndata = loader.load()\ndata", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html"}
+{"id": "15cc67e8df6a-1", "text": "[Document(page_content=\"\u00f0\\x9f\u00a6\\x9c\u00ef\u00b8\\x8f\u00f0\\x9f\u201d\\x97 LangChain\\n\\n\u00e2\\x9a\u00a1 Building applications with LLMs through composability \u00e2\\x9a\u00a1\\n\\nLooking for the JS/TS version? Check out LangChain.js.\\n\\nProduction Support: As you move your LangChains into production, we'd love to offer more comprehensive support.\\nPlease fill out this form and we'll set up a dedicated support Slack channel.\\n\\nQuick Install\\n\\npip install langchain\\nor\\nconda install langchain -c conda-forge\\n\\n\u00f0\\x9f\u00a4\u201d What is this?\\n\\nLarge language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. However, using these LLMs in isolation is often insufficient for creating a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.\\n\\nThis library aims to assist in the development of those types of applications. Common examples of these applications include:\\n\\n\u00e2\\x9d\u201c Question Answering over specific documents\\n\\nDocumentation\\n\\nEnd-to-end Example: Question Answering over Notion Database\\n\\n\u00f0\\x9f\u2019\u00ac Chatbots\\n\\nDocumentation\\n\\nEnd-to-end Example: Chat-LangChain\\n\\n\u00f0\\x9f\u00a4\\x96 Agents\\n\\nDocumentation\\n\\nEnd-to-end Example: GPT+WolframAlpha\\n\\n\u00f0\\x9f\u201c\\x96 Documentation\\n\\nPlease see here for full documentation on:\\n\\nGetting started (installation, setting up the environment, simple examples)\\n\\nHow-To examples (demos, integrations, helper functions)\\n\\nReference (full API docs)\\n\\nResources (high-level explanation of core concepts)\\n\\n\u00f0\\x9f\\x9a\\x80 What can this help", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html"}
+{"id": "15cc67e8df6a-2", "text": "explanation of core concepts)\\n\\n\u00f0\\x9f\\x9a\\x80 What can this help with?\\n\\nThere are six main areas that LangChain is designed to help with.\\nThese are, in increasing order of complexity:\\n\\n\u00f0\\x9f\u201c\\x83 LLMs and Prompts:\\n\\nThis includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs.\\n\\n\u00f0\\x9f\u201d\\x97 Chains:\\n\\nChains go beyond a single LLM call and involve sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\\n\\n\u00f0\\x9f\u201c\\x9a Data Augmented Generation:\\n\\nData Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.\\n\\n\u00f0\\x9f\u00a4\\x96 Agents:\\n\\nAgents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.\\n\\n\u00f0\\x9f\u00a7\\xa0 Memory:\\n\\nMemory refers to persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\\n\\n\u00f0\\x9f\u00a7\\x90 Evaluation:\\n\\n[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html"}
+{"id": "15cc67e8df6a-3", "text": "is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\\n\\nFor more information on these concepts, please see our full documentation.\\n\\n\u00f0\\x9f\u2019\\x81 Contributing\\n\\nAs an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.\\n\\nFor detailed information on how to contribute, see here.\", metadata={'source': '../../../../../README.md'})]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html"}
+{"id": "15cc67e8df6a-4", "text": "Retain Elements#\nUnder the hood, Unstructured creates different \u201celements\u201d for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode=\"elements\".\nloader = UnstructuredMarkdownLoader(markdown_path, mode=\"elements\")\ndata = loader.load()\ndata[0]\nDocument(page_content='\u00f0\\x9f\u00a6\\x9c\u00ef\u00b8\\x8f\u00f0\\x9f\u201d\\x97 LangChain', metadata={'source': '../../../../../README.md', 'page_number': 1, 'category': 'Title'})\nprevious\nJSON\nnext\nMicrosoft PowerPoint\n Contents\n \nRetain Elements\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html"}
+{"id": "728c3dfe41fb-0", "text": ".ipynb\n.pdf\nIugu\nIugu#\nIugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.\nThis notebook covers how to load data from the Iugu REST API into a format that can be ingested into LangChain, along with example usage for vectorization.\nimport os\nfrom langchain.document_loaders import IuguLoader\nfrom langchain.indexes import VectorstoreIndexCreator\nThe Iugu API requires an access token, which can be found inside of the Iugu dashboard.\nThis document loader also requires a resource option which defines what data you want to load.\nFollowing resources are available:\nDocumentation Documentation\niugu_loader = IuguLoader(\"charges\")\n# Create a vectorstore retriver from the loader\n# see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details\nindex = VectorstoreIndexCreator().from_loaders([iugu_loader])\niugu_doc_retriever = index.vectorstore.as_retriever()\nprevious\nImage captions\nnext\nJoplin\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/iugu.html"}
+{"id": "95611e8b5e3b-0", "text": ".ipynb\n.pdf\nWebBaseLoader\n Contents \nLoading multiple webpages\nLoad multiple urls concurrently\nLoading a xml file, or using a different BeautifulSoup parser\nWebBaseLoader#\nThis covers how to use WebBaseLoader to load all text from HTML webpages into a document format that we can use downstream. For more custom logic for loading webpages look at some child class examples such as IMSDbLoader, AZLyricsLoader, and CollegeConfidentialLoader\nfrom langchain.document_loaders import WebBaseLoader\nloader = WebBaseLoader(\"https://www.espn.com/\")\ndata = loader.load()\ndata", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-1", "text": "[Document(page_content=\"\\n\\n\\n\\n\\n\\n\\n\\n\\nESPN - Serving Sports Fans. Anytime. Anywhere.\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n Skip to main content\\n \\n\\n Skip to navigation\\n \\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n<\\n\\n>\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nMenuESPN\\n\\n\\nSearch\\n\\n\\n\\nscores\\n\\n\\n\\nNFLNBANCAAMNCAAWNHLSoccer\u2026MLBNCAAFGolfTennisSports BettingBoxingCFLNCAACricketF1HorseLLWSMMANASCARNBA G LeagueOlympic SportsRacingRN BBRN FBRugbyWNBAWorld Baseball ClassicWWEX GamesXFLMore ESPNFantasyListenWatchESPN+\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n \\n\\nSUBSCRIBE NOW\\n\\n\\n\\n\\n\\nNHL: Select Games\\n\\n\\n\\n\\n\\n\\n\\nXFL\\n\\n\\n\\n\\n\\n\\n\\nMLB: Select Games\\n\\n\\n\\n\\n\\n\\n\\nNCAA Baseball\\n\\n\\n\\n\\n\\n\\n\\nNCAA Softball\\n\\n\\n\\n\\n\\n\\n\\nCricket: Select Matches\\n\\n\\n\\n\\n\\n\\n\\nMel Kiper's NFL", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-2", "text": "Select Matches\\n\\n\\n\\n\\n\\n\\n\\nMel Kiper's NFL Mock Draft 3.0\\n\\n\\nQuick Links\\n\\n\\n\\n\\nMen's Tournament Challenge\\n\\n\\n\\n\\n\\n\\n\\nWomen's Tournament Challenge\\n\\n\\n\\n\\n\\n\\n\\nNFL Draft Order\\n\\n\\n\\n\\n\\n\\n\\nHow To Watch NHL Games\\n\\n\\n\\n\\n\\n\\n\\nFantasy Baseball: Sign Up\\n\\n\\n\\n\\n\\n\\n\\nHow To Watch PGA TOUR\\n\\n\\n\\n\\n\\n\\nFavorites\\n\\n\\n\\n\\n\\n\\n Manage Favorites\\n \\n\\n\\n\\nCustomize ESPNSign UpLog InESPN Sites\\n\\n\\n\\n\\nESPN Deportes\\n\\n\\n\\n\\n\\n\\n\\nAndscape\\n\\n\\n\\n\\n\\n\\n\\nespnW\\n\\n\\n\\n\\n\\n\\n\\nESPNFC\\n\\n\\n\\n\\n\\n\\n\\nX Games\\n\\n\\n\\n\\n\\n\\n\\nSEC Network\\n\\n\\nESPN Apps\\n\\n\\n\\n\\nESPN\\n\\n\\n\\n\\n\\n\\n\\nESPN Fantasy\\n\\n\\nFollow ESPN\\n\\n\\n\\n\\nFacebook\\n\\n\\n\\n\\n\\n\\n\\nTwitter\\n\\n\\n\\n\\n\\n\\n\\nInstagram\\n\\n\\n\\n\\n\\n\\n\\nSnapchat\\n\\n\\n\\n\\n\\n\\n\\nYouTube\\n\\n\\n\\n\\n\\n\\n\\nThe ESPN Daily Podcast\\n\\n\\nAre you ready for Opening Day? Here's your guide to MLB's offseason chaosWait, Jacob deGrom is on the Rangers now? Xander Bogaerts and Trea Turner signed where? And what about Carlos Correa? Yeah, you're going to need to read up before Opening Day.12hESPNIllustration by ESPNEverything you missed in the MLB offseason3h2:33World Series odds, win totals, props for every teamPlay fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-3", "text": "fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersLAMAR WANTS OUT OF BALTIMOREMarcus Spears identifies the two teams that need Lamar Jackson the most8h2:00Would Lamar sit out? Will Ravens draft a QB? Jackson trade request insightsLamar Jackson has asked Baltimore to trade him, but Ravens coach John Harbaugh hopes the QB will be back.3hJamison HensleyBallard, Colts will consider trading for QB JacksonJackson to Indy? Washington? Barnwell ranks the QB's trade fitsSNYDER'S TUMULTUOUS 24-YEAR RUNHow Washington\u2019s NFL franchise sank on and off the field under owner Dan SnyderSnyder purchased one of the NFL's marquee franchises in 1999. Twenty-four years later, and with the team up for sale, he leaves a legacy of on-field futility and off-field scandal.13hJohn KeimESPNIOWA STAR STEPS UP AGAINJ-Will: Caitlin Clark is the biggest brand in college sports right now8h0:47'The better the opponent, the better she plays': Clark draws comparisons to TaurasiCaitlin Clark's performance on Sunday had longtime observers going back decades to find comparisons.16hKevin PeltonWOMEN'S ELITE EIGHT SCOREBOARDMONDAY'S GAMESCheck your bracket!NBA DRAFTHow top prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-4", "text": "prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four teams, and ESPN's Jonathan Givony recaps the players who saw their NBA draft stock change.11hJonathan GivonyAndy Lyons/Getty ImagesTALKING BASKETBALLWhy AD needs to be more assertive with LeBron on the court10h1:33Why Perk won't blame Kyrie for Mavs' woes8h1:48WHERE EVERY TEAM STANDSNew NFL Power Rankings: Post-free-agency 1-32 poll, plus underrated offseason movesThe free agent frenzy has come and gone. Which teams have improved their 2023 outlook, and which teams have taken a hit?12hNFL Nation reportersIllustration by ESPNTHE BUCK STOPS WITH BELICHICKBruschi: Fair to criticize Bill Belichick for Patriots' struggles10h1:27 Top HeadlinesQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasy HomeCustomize ESPNSign UpLog InMarch Madness LiveESPNMarch Madness LiveWatch every men's NCAA tournament game live! ICYMI1:42Austin Peay's coach, pitcher and catcher all ejected after retaliation pitchAustin Peay's pitcher, catcher and coach were all ejected after a pitch was thrown at Liberty's Nathan Keeter, who earlier in the game hit a home run and celebrated while running down the third-base line. Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-5", "text": "Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the 2023 Men's Tournament Challenge, which you can follow throughout the Big Dance. Women's Tournament ChallengeIllustration by ESPNWomen's Tournament ChallengeCheck your bracket(s) in the 2023 Women's Tournament Challenge, which you can follow throughout the Big Dance. Best of ESPN+AP Photo/Lynne SladkyFantasy Baseball ESPN+ Cheat Sheet: Sleepers, busts, rookies and closersYou've read their names all preseason long, it'd be a shame to forget them on draft day. The ESPN+ Cheat Sheet is one way to make sure that doesn't happen.Steph Chambers/Getty ImagesPassan's 2023 MLB season preview: Bold predictions and moreOpening Day is just over a week away -- and Jeff Passan has everything you need to know covered from every possible angle.Photo by Bob Kupbens/Icon Sportswire2023 NFL free agency: Best team fits for unsigned playersWhere could Ezekiel Elliott land? Let's match remaining free agents to teams and find fits for two trade candidates.Illustration by ESPN2023 NFL mock draft: Mel Kiper's first-round pick predictionsMel Kiper Jr. makes his predictions for Round 1 of the NFL draft, including projecting a trade in the top five. Trending NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points, milestonesThe B's are on pace for NHL records in wins and points, along with some individual superlatives as well. Follow along here with our updated tracker.Mandatory Credit: William Purnell-USA TODAY Sports2023 NFL full draft order: AFC, NFC team picks for all roundsStarting with the Carolina Panthers at No. 1 overall, here's the entire 2023 NFL draft broken down round by round. How to Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results,", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-6", "text": "Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results, bracket, how to watchThe matchups in Tampa promise to be thrillers, featuring plenty of star power, high-octane offense and stellar defense.(AP Photo/Koji Sasahara, File)How to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN, ESPN+Here's everything you need to know about how to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN and ESPN+.Hailie Lynch/XFLHow to watch the XFL: 2023 schedule, teams, players, news, moreEvery XFL game will be streamed on ESPN+. Find out when and where else you can watch the eight teams compete. Sign up to play the #1 Fantasy Baseball GameReactivate A LeagueCreate A LeagueJoin a Public LeaguePractice With a Mock DraftSports BettingAP Photo/Mike KropfMarch Madness betting 2023: Bracket odds, lines, tips, moreThe 2023 NCAA tournament brackets have finally been released, and we have everything you need to know to make a bet on all of the March Madness games. Sign up to play the #1 Fantasy game!Create A LeagueJoin Public LeagueReactivateMock Draft Now\\n\\nESPN+\\n\\n\\n\\n\\nNHL: Select Games\\n\\n\\n\\n\\n\\n\\n\\nXFL\\n\\n\\n\\n\\n\\n\\n\\nMLB: Select Games\\n\\n\\n\\n\\n\\n\\n\\nNCAA Baseball\\n\\n\\n\\n\\n\\n\\n\\nNCAA Softball\\n\\n\\n\\n\\n\\n\\n\\nCricket: Select Matches\\n\\n\\n\\n\\n\\n\\n\\nMel Kiper's NFL Mock Draft 3.0\\n\\n\\nQuick Links\\n\\n\\n\\n\\nMen's Tournament Challenge\\n\\n\\n\\n\\n\\n\\n\\nWomen's Tournament Challenge\\n\\n\\n\\n\\n\\n\\n\\nNFL Draft Order\\n\\n\\n\\n\\n\\n\\n\\nHow To Watch", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-7", "text": "Draft Order\\n\\n\\n\\n\\n\\n\\n\\nHow To Watch NHL Games\\n\\n\\n\\n\\n\\n\\n\\nFantasy Baseball: Sign Up\\n\\n\\n\\n\\n\\n\\n\\nHow To Watch PGA TOUR\\n\\n\\nESPN Sites\\n\\n\\n\\n\\nESPN Deportes\\n\\n\\n\\n\\n\\n\\n\\nAndscape\\n\\n\\n\\n\\n\\n\\n\\nespnW\\n\\n\\n\\n\\n\\n\\n\\nESPNFC\\n\\n\\n\\n\\n\\n\\n\\nX Games\\n\\n\\n\\n\\n\\n\\n\\nSEC Network\\n\\n\\nESPN Apps\\n\\n\\n\\n\\nESPN\\n\\n\\n\\n\\n\\n\\n\\nESPN Fantasy\\n\\n\\nFollow ESPN\\n\\n\\n\\n\\nFacebook\\n\\n\\n\\n\\n\\n\\n\\nTwitter\\n\\n\\n\\n\\n\\n\\n\\nInstagram\\n\\n\\n\\n\\n\\n\\n\\nSnapchat\\n\\n\\n\\n\\n\\n\\n\\nYouTube\\n\\n\\n\\n\\n\\n\\n\\nThe ESPN Daily Podcast\\n\\n\\nTerms of UsePrivacy PolicyYour US State Privacy RightsChildren's Online Privacy PolicyInterest-Based AdsAbout Nielsen MeasurementDo Not Sell or Share My Personal InformationContact UsDisney Ad Sales SiteWork for ESPNCopyright: \u00a9 ESPN Enterprises, Inc. All rights reserved.\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\", lookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0)]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-8", "text": "\"\"\"\n# Use this piece of code for testing new custom BeautifulSoup parsers\nimport requests\nfrom bs4 import BeautifulSoup\nhtml_doc = requests.get(\"{INSERT_NEW_URL_HERE}\")\nsoup = BeautifulSoup(html_doc.text, 'html.parser')\n# Beautiful soup logic to be exported to langchain.document_loaders.webpage.py\n# Example: transcript = soup.select_one(\"td[class='scrtext']\").text\n# BS4 documentation can be found here: https://www.crummy.com/software/BeautifulSoup/bs4/doc/\n\"\"\";\nLoading multiple webpages#\nYou can also load multiple webpages at once by passing in a list of urls to the loader. This will return a list of documents in the same order as the urls passed in.\nloader = WebBaseLoader([\"https://www.espn.com/\", \"https://google.com\"])\ndocs = loader.load()\ndocs", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-9", "text": "[Document(page_content=\"\\n\\n\\n\\n\\n\\n\\n\\n\\nESPN - Serving Sports Fans. Anytime. Anywhere.\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n Skip to main content\\n \\n\\n Skip to navigation\\n \\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n<\\n\\n>\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nMenuESPN\\n\\n\\nSearch\\n\\n\\n\\nscores\\n\\n\\n\\nNFLNBANCAAMNCAAWNHLSoccer\u2026MLBNCAAFGolfTennisSports BettingBoxingCFLNCAACricketF1HorseLLWSMMANASCARNBA G LeagueOlympic SportsRacingRN BBRN FBRugbyWNBAWorld Baseball ClassicWWEX GamesXFLMore ESPNFantasyListenWatchESPN+\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n \\n\\nSUBSCRIBE NOW\\n\\n\\n\\n\\n\\nNHL: Select Games\\n\\n\\n\\n\\n\\n\\n\\nXFL\\n\\n\\n\\n\\n\\n\\n\\nMLB: Select Games\\n\\n\\n\\n\\n\\n\\n\\nNCAA Baseball\\n\\n\\n\\n\\n\\n\\n\\nNCAA Softball\\n\\n\\n\\n\\n\\n\\n\\nCricket: Select Matches\\n\\n\\n\\n\\n\\n\\n\\nMel Kiper's NFL", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-10", "text": "Select Matches\\n\\n\\n\\n\\n\\n\\n\\nMel Kiper's NFL Mock Draft 3.0\\n\\n\\nQuick Links\\n\\n\\n\\n\\nMen's Tournament Challenge\\n\\n\\n\\n\\n\\n\\n\\nWomen's Tournament Challenge\\n\\n\\n\\n\\n\\n\\n\\nNFL Draft Order\\n\\n\\n\\n\\n\\n\\n\\nHow To Watch NHL Games\\n\\n\\n\\n\\n\\n\\n\\nFantasy Baseball: Sign Up\\n\\n\\n\\n\\n\\n\\n\\nHow To Watch PGA TOUR\\n\\n\\n\\n\\n\\n\\nFavorites\\n\\n\\n\\n\\n\\n\\n Manage Favorites\\n \\n\\n\\n\\nCustomize ESPNSign UpLog InESPN Sites\\n\\n\\n\\n\\nESPN Deportes\\n\\n\\n\\n\\n\\n\\n\\nAndscape\\n\\n\\n\\n\\n\\n\\n\\nespnW\\n\\n\\n\\n\\n\\n\\n\\nESPNFC\\n\\n\\n\\n\\n\\n\\n\\nX Games\\n\\n\\n\\n\\n\\n\\n\\nSEC Network\\n\\n\\nESPN Apps\\n\\n\\n\\n\\nESPN\\n\\n\\n\\n\\n\\n\\n\\nESPN Fantasy\\n\\n\\nFollow ESPN\\n\\n\\n\\n\\nFacebook\\n\\n\\n\\n\\n\\n\\n\\nTwitter\\n\\n\\n\\n\\n\\n\\n\\nInstagram\\n\\n\\n\\n\\n\\n\\n\\nSnapchat\\n\\n\\n\\n\\n\\n\\n\\nYouTube\\n\\n\\n\\n\\n\\n\\n\\nThe ESPN Daily Podcast\\n\\n\\nAre you ready for Opening Day? Here's your guide to MLB's offseason chaosWait, Jacob deGrom is on the Rangers now? Xander Bogaerts and Trea Turner signed where? And what about Carlos Correa? Yeah, you're going to need to read up before Opening Day.12hESPNIllustration by ESPNEverything you missed in the MLB offseason3h2:33World Series odds, win totals, props for every teamPlay fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-11", "text": "fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersLAMAR WANTS OUT OF BALTIMOREMarcus Spears identifies the two teams that need Lamar Jackson the most7h2:00Would Lamar sit out? Will Ravens draft a QB? Jackson trade request insightsLamar Jackson has asked Baltimore to trade him, but Ravens coach John Harbaugh hopes the QB will be back.3hJamison HensleyBallard, Colts will consider trading for QB JacksonJackson to Indy? Washington? Barnwell ranks the QB's trade fitsSNYDER'S TUMULTUOUS 24-YEAR RUNHow Washington\u2019s NFL franchise sank on and off the field under owner Dan SnyderSnyder purchased one of the NFL's marquee franchises in 1999. Twenty-four years later, and with the team up for sale, he leaves a legacy of on-field futility and off-field scandal.13hJohn KeimESPNIOWA STAR STEPS UP AGAINJ-Will: Caitlin Clark is the biggest brand in college sports right now8h0:47'The better the opponent, the better she plays': Clark draws comparisons to TaurasiCaitlin Clark's performance on Sunday had longtime observers going back decades to find comparisons.16hKevin PeltonWOMEN'S ELITE EIGHT SCOREBOARDMONDAY'S GAMESCheck your bracket!NBA DRAFTHow top prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-12", "text": "prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four teams, and ESPN's Jonathan Givony recaps the players who saw their NBA draft stock change.11hJonathan GivonyAndy Lyons/Getty ImagesTALKING BASKETBALLWhy AD needs to be more assertive with LeBron on the court9h1:33Why Perk won't blame Kyrie for Mavs' woes8h1:48WHERE EVERY TEAM STANDSNew NFL Power Rankings: Post-free-agency 1-32 poll, plus underrated offseason movesThe free agent frenzy has come and gone. Which teams have improved their 2023 outlook, and which teams have taken a hit?12hNFL Nation reportersIllustration by ESPNTHE BUCK STOPS WITH BELICHICKBruschi: Fair to criticize Bill Belichick for Patriots' struggles10h1:27 Top HeadlinesQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasy HomeCustomize ESPNSign UpLog InMarch Madness LiveESPNMarch Madness LiveWatch every men's NCAA tournament game live! ICYMI1:42Austin Peay's coach, pitcher and catcher all ejected after retaliation pitchAustin Peay's pitcher, catcher and coach were all ejected after a pitch was thrown at Liberty's Nathan Keeter, who earlier in the game hit a home run and celebrated while running down the third-base line. Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-13", "text": "Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the 2023 Men's Tournament Challenge, which you can follow throughout the Big Dance. Women's Tournament ChallengeIllustration by ESPNWomen's Tournament ChallengeCheck your bracket(s) in the 2023 Women's Tournament Challenge, which you can follow throughout the Big Dance. Best of ESPN+AP Photo/Lynne SladkyFantasy Baseball ESPN+ Cheat Sheet: Sleepers, busts, rookies and closersYou've read their names all preseason long, it'd be a shame to forget them on draft day. The ESPN+ Cheat Sheet is one way to make sure that doesn't happen.Steph Chambers/Getty ImagesPassan's 2023 MLB season preview: Bold predictions and moreOpening Day is just over a week away -- and Jeff Passan has everything you need to know covered from every possible angle.Photo by Bob Kupbens/Icon Sportswire2023 NFL free agency: Best team fits for unsigned playersWhere could Ezekiel Elliott land? Let's match remaining free agents to teams and find fits for two trade candidates.Illustration by ESPN2023 NFL mock draft: Mel Kiper's first-round pick predictionsMel Kiper Jr. makes his predictions for Round 1 of the NFL draft, including projecting a trade in the top five. Trending NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points, milestonesThe B's are on pace for NHL records in wins and points, along with some individual superlatives as well. Follow along here with our updated tracker.Mandatory Credit: William Purnell-USA TODAY Sports2023 NFL full draft order: AFC, NFC team picks for all roundsStarting with the Carolina Panthers at No. 1 overall, here's the entire 2023 NFL draft broken down round by round. How to Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results,", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-14", "text": "Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results, bracket, how to watchThe matchups in Tampa promise to be thrillers, featuring plenty of star power, high-octane offense and stellar defense.(AP Photo/Koji Sasahara, File)How to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN, ESPN+Here's everything you need to know about how to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN and ESPN+.Hailie Lynch/XFLHow to watch the XFL: 2023 schedule, teams, players, news, moreEvery XFL game will be streamed on ESPN+. Find out when and where else you can watch the eight teams compete. Sign up to play the #1 Fantasy Baseball GameReactivate A LeagueCreate A LeagueJoin a Public LeaguePractice With a Mock DraftSports BettingAP Photo/Mike KropfMarch Madness betting 2023: Bracket odds, lines, tips, moreThe 2023 NCAA tournament brackets have finally been released, and we have everything you need to know to make a bet on all of the March Madness games. Sign up to play the #1 Fantasy game!Create A LeagueJoin Public LeagueReactivateMock Draft Now\\n\\nESPN+\\n\\n\\n\\n\\nNHL: Select Games\\n\\n\\n\\n\\n\\n\\n\\nXFL\\n\\n\\n\\n\\n\\n\\n\\nMLB: Select Games\\n\\n\\n\\n\\n\\n\\n\\nNCAA Baseball\\n\\n\\n\\n\\n\\n\\n\\nNCAA Softball\\n\\n\\n\\n\\n\\n\\n\\nCricket: Select Matches\\n\\n\\n\\n\\n\\n\\n\\nMel Kiper's NFL Mock Draft 3.0\\n\\n\\nQuick Links\\n\\n\\n\\n\\nMen's Tournament Challenge\\n\\n\\n\\n\\n\\n\\n\\nWomen's Tournament Challenge\\n\\n\\n\\n\\n\\n\\n\\nNFL Draft Order\\n\\n\\n\\n\\n\\n\\n\\nHow To Watch", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-15", "text": "Draft Order\\n\\n\\n\\n\\n\\n\\n\\nHow To Watch NHL Games\\n\\n\\n\\n\\n\\n\\n\\nFantasy Baseball: Sign Up\\n\\n\\n\\n\\n\\n\\n\\nHow To Watch PGA TOUR\\n\\n\\nESPN Sites\\n\\n\\n\\n\\nESPN Deportes\\n\\n\\n\\n\\n\\n\\n\\nAndscape\\n\\n\\n\\n\\n\\n\\n\\nespnW\\n\\n\\n\\n\\n\\n\\n\\nESPNFC\\n\\n\\n\\n\\n\\n\\n\\nX Games\\n\\n\\n\\n\\n\\n\\n\\nSEC Network\\n\\n\\nESPN Apps\\n\\n\\n\\n\\nESPN\\n\\n\\n\\n\\n\\n\\n\\nESPN Fantasy\\n\\n\\nFollow ESPN\\n\\n\\n\\n\\nFacebook\\n\\n\\n\\n\\n\\n\\n\\nTwitter\\n\\n\\n\\n\\n\\n\\n\\nInstagram\\n\\n\\n\\n\\n\\n\\n\\nSnapchat\\n\\n\\n\\n\\n\\n\\n\\nYouTube\\n\\n\\n\\n\\n\\n\\n\\nThe ESPN Daily Podcast\\n\\n\\nTerms of UsePrivacy PolicyYour US State Privacy RightsChildren's Online Privacy PolicyInterest-Based AdsAbout Nielsen MeasurementDo Not Sell or Share My Personal InformationContact UsDisney Ad Sales SiteWork for ESPNCopyright: \u00a9 ESPN Enterprises, Inc. All rights reserved.\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\", lookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-16", "text": "Document(page_content='GoogleSearch Images Maps Play YouTube News Gmail Drive More \u00bbWeb History | Settings | Sign in\\xa0Advanced searchAdvertisingBusiness SolutionsAbout Google\u00a9 2023 - Privacy - Terms ', lookup_str='', metadata={'source': 'https://google.com'}, lookup_index=0)]\nLoad multiple urls concurrently#\nYou can speed up the scraping process by scraping and parsing multiple urls concurrently.\nThere are reasonable limits to concurrent requests, defaulting to 2 per second. If you aren\u2019t concerned about being a good citizen, or you control the server you are scraping and don\u2019t care about load, you can change the requests_per_second parameter to increase the max concurrent requests. Note, while this will speed up the scraping process, but may cause the server to block you. Be careful!\n!pip install nest_asyncio\n# fixes a bug with asyncio and jupyter\nimport nest_asyncio\nnest_asyncio.apply()\nRequirement already satisfied: nest_asyncio in /Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages (1.5.6)\nloader = WebBaseLoader([\"https://www.espn.com/\", \"https://google.com\"])\nloader.requests_per_second = 1\ndocs = loader.aload()\ndocs", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-17", "text": "[Document(page_content=\"\\n\\n\\n\\n\\n\\n\\n\\n\\nESPN - Serving Sports Fans. Anytime. Anywhere.\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n Skip to main content\\n \\n\\n Skip to navigation\\n \\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n<\\n\\n>\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nMenuESPN\\n\\n\\nSearch\\n\\n\\n\\nscores\\n\\n\\n\\nNFLNBANCAAMNCAAWNHLSoccer\u2026MLBNCAAFGolfTennisSports BettingBoxingCFLNCAACricketF1HorseLLWSMMANASCARNBA G LeagueOlympic SportsRacingRN BBRN FBRugbyWNBAWorld Baseball ClassicWWEX GamesXFLMore ESPNFantasyListenWatchESPN+\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n \\n\\nSUBSCRIBE NOW\\n\\n\\n\\n\\n\\nNHL: Select Games\\n\\n\\n\\n\\n\\n\\n\\nXFL\\n\\n\\n\\n\\n\\n\\n\\nMLB: Select Games\\n\\n\\n\\n\\n\\n\\n\\nNCAA Baseball\\n\\n\\n\\n\\n\\n\\n\\nNCAA Softball\\n\\n\\n\\n\\n\\n\\n\\nCricket: Select Matches\\n\\n\\n\\n\\n\\n\\n\\nMel Kiper's NFL", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-18", "text": "Select Matches\\n\\n\\n\\n\\n\\n\\n\\nMel Kiper's NFL Mock Draft 3.0\\n\\n\\nQuick Links\\n\\n\\n\\n\\nMen's Tournament Challenge\\n\\n\\n\\n\\n\\n\\n\\nWomen's Tournament Challenge\\n\\n\\n\\n\\n\\n\\n\\nNFL Draft Order\\n\\n\\n\\n\\n\\n\\n\\nHow To Watch NHL Games\\n\\n\\n\\n\\n\\n\\n\\nFantasy Baseball: Sign Up\\n\\n\\n\\n\\n\\n\\n\\nHow To Watch PGA TOUR\\n\\n\\n\\n\\n\\n\\nFavorites\\n\\n\\n\\n\\n\\n\\n Manage Favorites\\n \\n\\n\\n\\nCustomize ESPNSign UpLog InESPN Sites\\n\\n\\n\\n\\nESPN Deportes\\n\\n\\n\\n\\n\\n\\n\\nAndscape\\n\\n\\n\\n\\n\\n\\n\\nespnW\\n\\n\\n\\n\\n\\n\\n\\nESPNFC\\n\\n\\n\\n\\n\\n\\n\\nX Games\\n\\n\\n\\n\\n\\n\\n\\nSEC Network\\n\\n\\nESPN Apps\\n\\n\\n\\n\\nESPN\\n\\n\\n\\n\\n\\n\\n\\nESPN Fantasy\\n\\n\\nFollow ESPN\\n\\n\\n\\n\\nFacebook\\n\\n\\n\\n\\n\\n\\n\\nTwitter\\n\\n\\n\\n\\n\\n\\n\\nInstagram\\n\\n\\n\\n\\n\\n\\n\\nSnapchat\\n\\n\\n\\n\\n\\n\\n\\nYouTube\\n\\n\\n\\n\\n\\n\\n\\nThe ESPN Daily Podcast\\n\\n\\nAre you ready for Opening Day? Here's your guide to MLB's offseason chaosWait, Jacob deGrom is on the Rangers now? Xander Bogaerts and Trea Turner signed where? And what about Carlos Correa? Yeah, you're going to need to read up before Opening Day.12hESPNIllustration by ESPNEverything you missed in the MLB offseason3h2:33World Series odds, win totals, props for every teamPlay fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-19", "text": "fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersLAMAR WANTS OUT OF BALTIMOREMarcus Spears identifies the two teams that need Lamar Jackson the most7h2:00Would Lamar sit out? Will Ravens draft a QB? Jackson trade request insightsLamar Jackson has asked Baltimore to trade him, but Ravens coach John Harbaugh hopes the QB will be back.3hJamison HensleyBallard, Colts will consider trading for QB JacksonJackson to Indy? Washington? Barnwell ranks the QB's trade fitsSNYDER'S TUMULTUOUS 24-YEAR RUNHow Washington\u2019s NFL franchise sank on and off the field under owner Dan SnyderSnyder purchased one of the NFL's marquee franchises in 1999. Twenty-four years later, and with the team up for sale, he leaves a legacy of on-field futility and off-field scandal.13hJohn KeimESPNIOWA STAR STEPS UP AGAINJ-Will: Caitlin Clark is the biggest brand in college sports right now8h0:47'The better the opponent, the better she plays': Clark draws comparisons to TaurasiCaitlin Clark's performance on Sunday had longtime observers going back decades to find comparisons.16hKevin PeltonWOMEN'S ELITE EIGHT SCOREBOARDMONDAY'S GAMESCheck your bracket!NBA DRAFTHow top prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-20", "text": "prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four teams, and ESPN's Jonathan Givony recaps the players who saw their NBA draft stock change.11hJonathan GivonyAndy Lyons/Getty ImagesTALKING BASKETBALLWhy AD needs to be more assertive with LeBron on the court9h1:33Why Perk won't blame Kyrie for Mavs' woes8h1:48WHERE EVERY TEAM STANDSNew NFL Power Rankings: Post-free-agency 1-32 poll, plus underrated offseason movesThe free agent frenzy has come and gone. Which teams have improved their 2023 outlook, and which teams have taken a hit?12hNFL Nation reportersIllustration by ESPNTHE BUCK STOPS WITH BELICHICKBruschi: Fair to criticize Bill Belichick for Patriots' struggles10h1:27 Top HeadlinesQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasy HomeCustomize ESPNSign UpLog InMarch Madness LiveESPNMarch Madness LiveWatch every men's NCAA tournament game live! ICYMI1:42Austin Peay's coach, pitcher and catcher all ejected after retaliation pitchAustin Peay's pitcher, catcher and coach were all ejected after a pitch was thrown at Liberty's Nathan Keeter, who earlier in the game hit a home run and celebrated while running down the third-base line. Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-21", "text": "Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the 2023 Men's Tournament Challenge, which you can follow throughout the Big Dance. Women's Tournament ChallengeIllustration by ESPNWomen's Tournament ChallengeCheck your bracket(s) in the 2023 Women's Tournament Challenge, which you can follow throughout the Big Dance. Best of ESPN+AP Photo/Lynne SladkyFantasy Baseball ESPN+ Cheat Sheet: Sleepers, busts, rookies and closersYou've read their names all preseason long, it'd be a shame to forget them on draft day. The ESPN+ Cheat Sheet is one way to make sure that doesn't happen.Steph Chambers/Getty ImagesPassan's 2023 MLB season preview: Bold predictions and moreOpening Day is just over a week away -- and Jeff Passan has everything you need to know covered from every possible angle.Photo by Bob Kupbens/Icon Sportswire2023 NFL free agency: Best team fits for unsigned playersWhere could Ezekiel Elliott land? Let's match remaining free agents to teams and find fits for two trade candidates.Illustration by ESPN2023 NFL mock draft: Mel Kiper's first-round pick predictionsMel Kiper Jr. makes his predictions for Round 1 of the NFL draft, including projecting a trade in the top five. Trending NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points, milestonesThe B's are on pace for NHL records in wins and points, along with some individual superlatives as well. Follow along here with our updated tracker.Mandatory Credit: William Purnell-USA TODAY Sports2023 NFL full draft order: AFC, NFC team picks for all roundsStarting with the Carolina Panthers at No. 1 overall, here's the entire 2023 NFL draft broken down round by round. How to Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results,", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-22", "text": "Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results, bracket, how to watchThe matchups in Tampa promise to be thrillers, featuring plenty of star power, high-octane offense and stellar defense.(AP Photo/Koji Sasahara, File)How to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN, ESPN+Here's everything you need to know about how to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN and ESPN+.Hailie Lynch/XFLHow to watch the XFL: 2023 schedule, teams, players, news, moreEvery XFL game will be streamed on ESPN+. Find out when and where else you can watch the eight teams compete. Sign up to play the #1 Fantasy Baseball GameReactivate A LeagueCreate A LeagueJoin a Public LeaguePractice With a Mock DraftSports BettingAP Photo/Mike KropfMarch Madness betting 2023: Bracket odds, lines, tips, moreThe 2023 NCAA tournament brackets have finally been released, and we have everything you need to know to make a bet on all of the March Madness games. Sign up to play the #1 Fantasy game!Create A LeagueJoin Public LeagueReactivateMock Draft Now\\n\\nESPN+\\n\\n\\n\\n\\nNHL: Select Games\\n\\n\\n\\n\\n\\n\\n\\nXFL\\n\\n\\n\\n\\n\\n\\n\\nMLB: Select Games\\n\\n\\n\\n\\n\\n\\n\\nNCAA Baseball\\n\\n\\n\\n\\n\\n\\n\\nNCAA Softball\\n\\n\\n\\n\\n\\n\\n\\nCricket: Select Matches\\n\\n\\n\\n\\n\\n\\n\\nMel Kiper's NFL Mock Draft 3.0\\n\\n\\nQuick Links\\n\\n\\n\\n\\nMen's Tournament Challenge\\n\\n\\n\\n\\n\\n\\n\\nWomen's Tournament Challenge\\n\\n\\n\\n\\n\\n\\n\\nNFL Draft Order\\n\\n\\n\\n\\n\\n\\n\\nHow To Watch", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-23", "text": "Draft Order\\n\\n\\n\\n\\n\\n\\n\\nHow To Watch NHL Games\\n\\n\\n\\n\\n\\n\\n\\nFantasy Baseball: Sign Up\\n\\n\\n\\n\\n\\n\\n\\nHow To Watch PGA TOUR\\n\\n\\nESPN Sites\\n\\n\\n\\n\\nESPN Deportes\\n\\n\\n\\n\\n\\n\\n\\nAndscape\\n\\n\\n\\n\\n\\n\\n\\nespnW\\n\\n\\n\\n\\n\\n\\n\\nESPNFC\\n\\n\\n\\n\\n\\n\\n\\nX Games\\n\\n\\n\\n\\n\\n\\n\\nSEC Network\\n\\n\\nESPN Apps\\n\\n\\n\\n\\nESPN\\n\\n\\n\\n\\n\\n\\n\\nESPN Fantasy\\n\\n\\nFollow ESPN\\n\\n\\n\\n\\nFacebook\\n\\n\\n\\n\\n\\n\\n\\nTwitter\\n\\n\\n\\n\\n\\n\\n\\nInstagram\\n\\n\\n\\n\\n\\n\\n\\nSnapchat\\n\\n\\n\\n\\n\\n\\n\\nYouTube\\n\\n\\n\\n\\n\\n\\n\\nThe ESPN Daily Podcast\\n\\n\\nTerms of UsePrivacy PolicyYour US State Privacy RightsChildren's Online Privacy PolicyInterest-Based AdsAbout Nielsen MeasurementDo Not Sell or Share My Personal InformationContact UsDisney Ad Sales SiteWork for ESPNCopyright: \u00a9 ESPN Enterprises, Inc. All rights reserved.\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\", lookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-24", "text": "Document(page_content='GoogleSearch Images Maps Play YouTube News Gmail Drive More \u00bbWeb History | Settings | Sign in\\xa0Advanced searchAdvertisingBusiness SolutionsAbout Google\u00a9 2023 - Privacy - Terms ', lookup_str='', metadata={'source': 'https://google.com'}, lookup_index=0)]\nLoading a xml file, or using a different BeautifulSoup parser#\nYou can also look at SitemapLoader for an example of how to load a sitemap file, which is an example of using this feature.\nloader = WebBaseLoader(\"https://www.govinfo.gov/content/pkg/CFR-2018-title10-vol3/xml/CFR-2018-title10-vol3-sec431-86.xml\")\nloader.default_parser = \"xml\"\ndocs = loader.load()\ndocs", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-25", "text": "[Document(page_content='\\n\\n10\\nEnergy\\n3\\n2018-01-01\\n2018-01-01\\nfalse\\nUniform test method for the measurement of energy efficiency of commercial packaged boilers.\\n\u00c2\u00a7 431.86\\nSection \u00c2\u00a7 431.86\\n\\nEnergy\\nDEPARTMENT OF ENERGY\\nENERGY CONSERVATION\\nENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT\\nCommercial Packaged Boilers\\nTest Procedures\\n\\n\\n\\n\\n\u00a7\\u2009431.86\\nUniform test method for the measurement of energy efficiency of commercial packaged boilers.\\n(a) Scope. This section provides test procedures, pursuant to the Energy Policy and Conservation Act (EPCA), as amended, which must be followed for measuring the combustion efficiency and/or thermal efficiency of a gas- or oil-fired commercial packaged boiler.\\n(b) Testing and Calculations. Determine the thermal efficiency or combustion efficiency of commercial packaged boilers by conducting the appropriate test procedure(s) indicated in Table 1 of this section.\\n\\nTable 1\u2014Test Requirements for Commercial Packaged Boiler Equipment Classes\\n\\nEquipment category\\nSubcategory\\nCertified rated inputBtu/h\\n\\nStandards efficiency metric(\u00a7\\u2009431.87)\\n\\nTest procedure(corresponding to\\nstandards efficiency\\nmetric required\\nby \u00a7\\u2009431.87)\\n\\n\\n\\nHot Water\\nGas-fired\\n\u2265300,000 and \u22642,500,000\\nThermal Efficiency\\nAppendix A, Section 2.\\n\\n\\nHot Water\\nGas-fired\\n>2,500,000\\nCombustion Efficiency\\nAppendix A, Section 3.\\n\\n\\nHot Water\\nOil-fired\\n\u2265300,000 and \u22642,500,000\\nThermal Efficiency\\nAppendix A, Section 2.\\n\\n\\nHot", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-26", "text": "Efficiency\\nAppendix A, Section 2.\\n\\n\\nHot Water\\nOil-fired\\n>2,500,000\\nCombustion Efficiency\\nAppendix A, Section 3.\\n\\n\\nSteam\\nGas-fired (all*)\\n\u2265300,000 and \u22642,500,000\\nThermal Efficiency\\nAppendix A, Section 2.\\n\\n\\nSteam\\nGas-fired (all*)\\n>2,500,000 and \u22645,000,000\\nThermal Efficiency\\nAppendix A, Section 2.\\n\\n\\n\\u2003\\n\\n>5,000,000\\nThermal Efficiency\\nAppendix A, Section 2.OR\\nAppendix A, Section 3 with Section 2.4.3.2.\\n\\n\\n\\nSteam\\nOil-fired\\n\u2265300,000 and \u22642,500,000\\nThermal Efficiency\\nAppendix A, Section 2.\\n\\n\\nSteam\\nOil-fired\\n>2,500,000 and \u22645,000,000\\nThermal Efficiency\\nAppendix A, Section 2.\\n\\n\\n\\u2003\\n\\n>5,000,000\\nThermal Efficiency\\nAppendix A, Section 2.OR\\nAppendix A, Section 3. with Section 2.4.3.2.\\n\\n\\n\\n*\\u2009Equipment classes for commercial packaged boilers as of July 22, 2009 (74 FR 36355) distinguish between gas-fired natural draft and all other gas-fired (except natural draft).\\n\\n(c) Field Tests. The field test provisions of appendix A may be used only to test a unit of commercial packaged boiler with rated input greater than 5,000,000 Btu/h.\\n[81 FR 89305, Dec. 9, 2016]\\n\\n\\nEnergy Efficiency Standards\\n\\n', lookup_str='', metadata={'source':", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-27", "text": "2016]\\n\\n\\nEnergy Efficiency Standards\\n\\n', lookup_str='', metadata={'source': 'https://www.govinfo.gov/content/pkg/CFR-2018-title10-vol3/xml/CFR-2018-title10-vol3-sec431-86.xml'}, lookup_index=0)]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "95611e8b5e3b-28", "text": "previous\nURL\nnext\nWeather\n Contents\n \nLoading multiple webpages\nLoad multiple urls concurrently\nLoading a xml file, or using a different BeautifulSoup parser\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/web_base.html"}
+{"id": "77a32a0f8367-0", "text": ".ipynb\n.pdf\nAWS S3 File\nAWS S3 File#\nAmazon Simple Storage Service (Amazon S3) is an object storage service.\nAWS S3 Buckets\nThis covers how to load document objects from an AWS S3 File object.\nfrom langchain.document_loaders import S3FileLoader\n#!pip install boto3\nloader = S3FileLoader(\"testing-hwc\", \"fake.docx\")\nloader.load()\n[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpxvave6wl/fake.docx'}, lookup_index=0)]\nprevious\nAWS S3 Directory\nnext\nAzure Blob Storage Container\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/aws_s3_file.html"}
+{"id": "5120dde53bbc-0", "text": ".ipynb\n.pdf\nSnowflake\nSnowflake#\nThis notebooks goes over how to load documents from Snowflake\n! pip install snowflake-connector-python\nimport settings as s\nfrom langchain.document_loaders import SnowflakeLoader\nQUERY = \"select text, survey_id from CLOUD_DATA_SOLUTIONS.HAPPY_OR_NOT.OPEN_FEEDBACK limit 10\"\nsnowflake_loader = SnowflakeLoader(\n query=QUERY,\n user=s.SNOWFLAKE_USER,\n password=s.SNOWFLAKE_PASS,\n account=s.SNOWFLAKE_ACCOUNT,\n warehouse=s.SNOWFLAKE_WAREHOUSE,\n role=s.SNOWFLAKE_ROLE,\n database=s.SNOWFLAKE_DATABASE,\n schema=s.SNOWFLAKE_SCHEMA\n)\nsnowflake_documents = snowflake_loader.load()\nprint(snowflake_documents)\nfrom snowflakeLoader import SnowflakeLoader\nimport settings as s\nQUERY = \"select text, survey_id as source from CLOUD_DATA_SOLUTIONS.HAPPY_OR_NOT.OPEN_FEEDBACK limit 10\"\nsnowflake_loader = SnowflakeLoader(\n query=QUERY,\n user=s.SNOWFLAKE_USER,\n password=s.SNOWFLAKE_PASS,\n account=s.SNOWFLAKE_ACCOUNT,\n warehouse=s.SNOWFLAKE_WAREHOUSE,\n role=s.SNOWFLAKE_ROLE,\n database=s.SNOWFLAKE_DATABASE,\n schema=s.SNOWFLAKE_SCHEMA,\n metadata_columns=['source']\n)\nsnowflake_documents = snowflake_loader.load()\nprint(snowflake_documents)\nprevious\nSlack\nnext\nSpreedly\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/snowflake.html"}
+{"id": "83d76187229c-0", "text": ".ipynb\n.pdf\nSlack\n Contents \n\ud83e\uddd1 Instructions for ingesting your own dataset\nSlack#\nSlack is an instant messaging program.\nThis notebook covers how to load documents from a Zipfile generated from a Slack export.\nIn order to get this Slack export, follow these instructions:\n\ud83e\uddd1 Instructions for ingesting your own dataset#\nExport your Slack data. You can do this by going to your Workspace Management page and clicking the Import/Export option ({your_slack_domain}.slack.com/services/export). Then, choose the right date range and click Start export. Slack will send you an email and a DM when the export is ready.\nThe download will produce a .zip file in your Downloads folder (or wherever your downloads can be found, depending on your OS configuration).\nCopy the path to the .zip file, and assign it as LOCAL_ZIPFILE below.\nfrom langchain.document_loaders import SlackDirectoryLoader \n# Optionally set your Slack URL. This will give you proper URLs in the docs sources.\nSLACK_WORKSPACE_URL = \"https://xxx.slack.com\"\nLOCAL_ZIPFILE = \"\" # Paste the local path to your Slack zip file here.\nloader = SlackDirectoryLoader(LOCAL_ZIPFILE, SLACK_WORKSPACE_URL)\ndocs = loader.load()\ndocs\nprevious\nRoam\nnext\nSnowflake\n Contents\n \n\ud83e\uddd1 Instructions for ingesting your own dataset\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/slack.html"}
+{"id": "a98e7b2fad9d-0", "text": ".ipynb\n.pdf\nRoam\n Contents \n\ud83e\uddd1 Instructions for ingesting your own dataset\nRoam#\nROAM is a note-taking tool for networked thought, designed to create a personal knowledge base.\nThis notebook covers how to load documents from a Roam database. This takes a lot of inspiration from the example repo here.\n\ud83e\uddd1 Instructions for ingesting your own dataset#\nExport your dataset from Roam Research. You can do this by clicking on the three dots in the upper right hand corner and then clicking Export.\nWhen exporting, make sure to select the Markdown & CSV format option.\nThis will produce a .zip file in your Downloads folder. Move the .zip file into this repository.\nRun the following command to unzip the zip file (replace the Export... with your own file name as needed).\nunzip Roam-Export-1675782732639.zip -d Roam_DB\nfrom langchain.document_loaders import RoamLoader\nloader = RoamLoader(\"Roam_DB\")\ndocs = loader.load()\nprevious\nReddit\nnext\nSlack\n Contents\n \n\ud83e\uddd1 Instructions for ingesting your own dataset\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/roam.html"}
+{"id": "d05ac0ced1d6-0", "text": ".ipynb\n.pdf\nFacebook Chat\nFacebook Chat#\nMessenger is an American proprietary instant messaging app and platform developed by Meta Platforms. Originally developed as Facebook Chat in 2008, the company revamped its messaging service in 2010.\nThis notebook covers how to load data from the Facebook Chats into a format that can be ingested into LangChain.\n#pip install pandas\nfrom langchain.document_loaders import FacebookChatLoader\nloader = FacebookChatLoader(\"example_data/facebook_chat.json\")\nloader.load()", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/facebook_chat.html"}
+{"id": "d05ac0ced1d6-1", "text": "loader = FacebookChatLoader(\"example_data/facebook_chat.json\")\nloader.load()\n[Document(page_content='User 2 on 2023-02-05 03:46:11: Bye!\\n\\nUser 1 on 2023-02-05 03:43:55: Oh no worries! Bye\\n\\nUser 2 on 2023-02-05 03:24:37: No Im sorry it was my mistake, the blue one is not for sale\\n\\nUser 1 on 2023-02-05 03:05:40: I thought you were selling the blue one!\\n\\nUser 1 on 2023-02-05 03:05:09: Im not interested in this bag. Im interested in the blue one!\\n\\nUser 2 on 2023-02-05 03:04:28: Here is $129\\n\\nUser 2 on 2023-02-05 03:04:05: Online is at least $100\\n\\nUser 1 on 2023-02-05 02:59:59: How much do you want?\\n\\nUser 2 on 2023-02-04 22:17:56: Goodmorning! $50 is too low.\\n\\nUser 1 on 2023-02-04 14:17:02: Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!\\n\\n', metadata={'source': 'example_data/facebook_chat.json'})]\nprevious\nMicrosoft Excel\nnext\nFile Directory\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/facebook_chat.html"}
+{"id": "9bf17d99db8b-0", "text": ".ipynb\n.pdf\nFigma\nFigma#\nFigma is a collaborative web application for interface design.\nThis notebook covers how to load data from the Figma REST API into a format that can be ingested into LangChain, along with example usage for code generation.\nimport os\nfrom langchain.document_loaders.figma import FigmaFileLoader\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.indexes import VectorstoreIndexCreator\nfrom langchain.chains import ConversationChain, LLMChain\nfrom langchain.memory import ConversationBufferWindowMemory\nfrom langchain.prompts.chat import (\n ChatPromptTemplate,\n SystemMessagePromptTemplate,\n AIMessagePromptTemplate,\n HumanMessagePromptTemplate,\n)\nThe Figma API Requires an access token, node_ids, and a file key.\nThe file key can be pulled from the URL. https://www.figma.com/file/{filekey}/sampleFilename\nNode IDs are also available in the URL. Click on anything and look for the \u2018?node-id={node_id}\u2019 param.\nAccess token instructions are in the Figma help center article: https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokens\nfigma_loader = FigmaFileLoader(\n os.environ.get('ACCESS_TOKEN'),\n os.environ.get('NODE_IDS'),\n os.environ.get('FILE_KEY')\n)\n# see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details\nindex = VectorstoreIndexCreator().from_loaders([figma_loader])\nfigma_doc_retriever = index.vectorstore.as_retriever()\ndef generate_code(human_input):\n # I have no idea if the Jon Carmack thing makes for better code. YMMV.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html"}
+{"id": "9bf17d99db8b-1", "text": "# See https://python.langchain.com/en/latest/modules/models/chat/getting_started.html for chat info\n system_prompt_template = \"\"\"You are expert coder Jon Carmack. Use the provided design context to create idomatic HTML/CSS code as possible based on the user request.\n Everything must be inline in one file and your response must be directly renderable by the browser.\n Figma file nodes and metadata: {context}\"\"\"\n human_prompt_template = \"Code the {text}. Ensure it's mobile responsive\"\n system_message_prompt = SystemMessagePromptTemplate.from_template(system_prompt_template)\n human_message_prompt = HumanMessagePromptTemplate.from_template(human_prompt_template)\n # delete the gpt-4 model_name to use the default gpt-3.5 turbo for faster results\n gpt_4 = ChatOpenAI(temperature=.02, model_name='gpt-4')\n # Use the retriever's 'get_relevant_documents' method if needed to filter down longer docs\n relevant_nodes = figma_doc_retriever.get_relevant_documents(human_input)\n conversation = [system_message_prompt, human_message_prompt]\n chat_prompt = ChatPromptTemplate.from_messages(conversation)\n response = gpt_4(chat_prompt.format_prompt( \n context=relevant_nodes, \n text=human_input).to_messages())\n return response\nresponse = generate_code(\"page top header\")\nReturns the following in response.content:", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html"}
+{"id": "9bf17d99db8b-2", "text": "\\n\\n\\n \\n \\n \\n\\n\\n \\n\\n", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html"}
+{"id": "9bf17d99db8b-4", "text": "previous\nFauna\nnext\nGitBook\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html"}
+{"id": "9ad33bb98e31-0", "text": ".ipynb\n.pdf\nDiscord\nDiscord#\nDiscord is a VoIP and instant messaging social platform. Users have the ability to communicate with voice calls, video calls, text messaging, media and files in private chats or as part of communities called \u201cservers\u201d. A server is a collection of persistent chat rooms and voice channels which can be accessed via invite links.\nFollow these steps to download your Discord data:\nGo to your User Settings\nThen go to Privacy and Safety\nHead over to the Request all of my Data and click on Request Data button\nIt might take 30 days for you to receive your data. You\u2019ll receive an email at the address which is registered with Discord. That email will have a download button using which you would be able to download your personal Discord data.\nimport pandas as pd\nimport os\npath = input(\"Please enter the path to the contents of the Discord \\\"messages\\\" folder: \")\nli = []\nfor f in os.listdir(path):\n expected_csv_path = os.path.join(path, f, 'messages.csv')\n csv_exists = os.path.isfile(expected_csv_path)\n if csv_exists:\n df = pd.read_csv(expected_csv_path, index_col=None, header=0)\n li.append(df)\ndf = pd.concat(li, axis=0, ignore_index=True, sort=False)\nfrom langchain.document_loaders.discord import DiscordChatLoader\nloader = DiscordChatLoader(df, user_id_col=\"ID\")\nprint(loader.load())\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/discord.html"}
+{"id": "6c597c234624-0", "text": ".ipynb\n.pdf\nMicrosoft Word\n Contents \nUsing Docx2txt\nUsing Unstructured\nRetain Elements\nMicrosoft Word#\nMicrosoft Word is a word processor developed by Microsoft.\nThis covers how to load Word documents into a document format that we can use downstream.\nUsing Docx2txt#\nLoad .docx using Docx2txt into a document.\n!pip install docx2txt \nfrom langchain.document_loaders import Docx2txtLoader\nloader = Docx2txtLoader(\"example_data/fake.docx\")\ndata = loader.load()\ndata\n[Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.docx'})]\nUsing Unstructured#\nfrom langchain.document_loaders import UnstructuredWordDocumentLoader\nloader = UnstructuredWordDocumentLoader(\"example_data/fake.docx\")\ndata = loader.load()\ndata\n[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'fake.docx'}, lookup_index=0)]\nRetain Elements#\nUnder the hood, Unstructured creates different \u201celements\u201d for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode=\"elements\".\nloader = UnstructuredWordDocumentLoader(\"example_data/fake.docx\", mode=\"elements\")\ndata = loader.load()\ndata[0]\nDocument(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'fake.docx', 'filename': 'fake.docx', 'category': 'Title'}, lookup_index=0)\nprevious\nMicrosoft PowerPoint\nnext\nOpen Document Format (ODT)\n Contents\n \nUsing Docx2txt\nUsing Unstructured\nRetain Elements\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/microsoft_word.html"}
+{"id": "6c597c234624-1", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/microsoft_word.html"}
+{"id": "4b171d384469-0", "text": ".ipynb\n.pdf\nHTML\n Contents \nLoading HTML with BeautifulSoup4\nHTML#\nThe HyperText Markup Language or HTML is the standard markup language for documents designed to be displayed in a web browser.\nThis covers how to load HTML documents into a document format that we can use downstream.\nfrom langchain.document_loaders import UnstructuredHTMLLoader\nloader = UnstructuredHTMLLoader(\"example_data/fake-content.html\")\ndata = loader.load()\ndata\n[Document(page_content='My First Heading\\n\\nMy first paragraph.', lookup_str='', metadata={'source': 'example_data/fake-content.html'}, lookup_index=0)]\nLoading HTML with BeautifulSoup4#\nWe can also use BeautifulSoup4 to load HTML documents using the BSHTMLLoader. This will extract the text from the HTML into page_content, and the page title as title into metadata.\nfrom langchain.document_loaders import BSHTMLLoader\nloader = BSHTMLLoader(\"example_data/fake-content.html\")\ndata = loader.load()\ndata\n[Document(page_content='\\n\\nTest Title\\n\\n\\nMy First Heading\\nMy first paragraph.\\n\\n\\n', metadata={'source': 'example_data/fake-content.html', 'title': 'Test Title'})]\nprevious\nFile Directory\nnext\nImages\n Contents\n \nLoading HTML with BeautifulSoup4\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/html.html"}
+{"id": "c1eaee03542b-0", "text": ".ipynb\n.pdf\nReddit\nReddit#\nReddit is an American social news aggregation, content rating, and discussion website.\nThis loader fetches the text from the Posts of Subreddits or Reddit users, using the praw Python package.\nMake a Reddit Application and initialize the loader with with your Reddit API credentials.\nfrom langchain.document_loaders import RedditPostsLoader\n# !pip install praw\n# load using 'subreddit' mode\nloader = RedditPostsLoader(\n client_id=\"YOUR CLIENT ID\",\n client_secret=\"YOUR CLIENT SECRET\",\n user_agent=\"extractor by u/Master_Ocelot8179\",\n categories=['new', 'hot'], # List of categories to load posts from\n mode = 'subreddit',\n search_queries=['investing', 'wallstreetbets'], # List of subreddits to load posts from\n number_posts=20 # Default value is 10\n )\n# # or load using 'username' mode\n# loader = RedditPostsLoader(\n# client_id=\"YOUR CLIENT ID\",\n# client_secret=\"YOUR CLIENT SECRET\",\n# user_agent=\"extractor by u/Master_Ocelot8179\",\n# categories=['new', 'hot'], \n# mode = 'username',\n# search_queries=['ga3far', 'Master_Ocelot8179'], # List of usernames to load posts from\n# number_posts=20\n# )\n# Note: Categories can be only of following value - \"controversial\" \"hot\" \"new\" \"rising\" \"top\"\ndocuments = loader.load()\ndocuments[:5]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/reddit.html"}
+{"id": "c1eaee03542b-1", "text": "documents = loader.load()\ndocuments[:5]\n[Document(page_content='Hello, I am not looking for investment advice. I will apply my own due diligence. However, I am interested if anyone knows as a UK resident how fees and exchange rate differences would impact performance?\\n\\nI am planning to create a pie of index funds (perhaps UK, US, europe) or find a fund with a good track record of long term growth at low rates. \\n\\nDoes anyone have any ideas?', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Long term retirement funds fees/exchange rate query', 'post_score': 1, 'post_id': '130pa6m', 'post_url': 'https://www.reddit.com/r/investing/comments/130pa6m/long_term_retirement_funds_feesexchange_rate_query/', 'post_author': Redditor(name='Badmanshiz')}),\n Document(page_content='I much prefer the Roth IRA and would rather rollover my 401k to that every year instead of keeping it in the limited 401k options. But if I rollover, will I be able to continue contributing to my 401k? Or will that close my account? I realize that there are tax implications of doing this but I still think it is the better option.', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Is it possible to rollover my 401k every year?', 'post_score': 3, 'post_id': '130ja0h', 'post_url': 'https://www.reddit.com/r/investing/comments/130ja0h/is_it_possible_to_rollover_my_401k_every_year/', 'post_author': Redditor(name='AnCap_Catholic')}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/reddit.html"}
+{"id": "c1eaee03542b-2", "text": "Document(page_content='Have a general question? Want to offer some commentary on markets? Maybe you would just like to throw out a neat fact that doesn\\'t warrant a self post? Feel free to post here! \\n\\nIf your question is \"I have $10,000, what do I do?\" or other \"advice for my personal situation\" questions, you should include relevant information, such as the following:\\n\\n* How old are you? What country do you live in? \\n* Are you employed/making income? How much? \\n* What are your objectives with this money? (Buy a house? Retirement savings?) \\n* What is your time horizon? Do you need this money next month? Next 20yrs? \\n* What is your risk tolerance? (Do you mind risking it at blackjack or do you need to know its 100% safe?) \\n* What are you current holdings? (Do you already have exposure to specific funds and sectors? Any other assets?) \\n* Any big debts (include interest rate) or expenses? \\n* And any other relevant financial information will be useful to give you a proper answer. \\n\\nPlease consider consulting our FAQ first - https://www.reddit.com/r/investing/wiki/faq\\nAnd our [side bar](https://www.reddit.com/r/investing/about/sidebar) also has useful resources. \\n\\nIf you are new to investing - please refer to Wiki - [Getting Started](https://www.reddit.com/r/investing/wiki/index/gettingstarted/)\\n\\nThe reading list in the wiki has a list of books ranging from light reading to advanced topics depending on your knowledge level. Link here - [Reading List](https://www.reddit.com/r/investing/wiki/readinglist)\\n\\nCheck the resources in the sidebar.\\n\\nBe aware that these answers", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/reddit.html"}
+{"id": "c1eaee03542b-3", "text": "the resources in the sidebar.\\n\\nBe aware that these answers are just opinions of Redditors and should be used as a starting point for your research. You should strongly consider seeing a registered investment adviser if you need professional support before making any financial decisions!', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Daily General Discussion and Advice Thread - April 27, 2023', 'post_score': 5, 'post_id': '130eszz', 'post_url': 'https://www.reddit.com/r/investing/comments/130eszz/daily_general_discussion_and_advice_thread_april/', 'post_author': Redditor(name='AutoModerator')}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/reddit.html"}
+{"id": "c1eaee03542b-4", "text": "Document(page_content=\"Based on recent news about salt battery advancements and the overall issues of lithium, I was wondering what would be feasible ways to invest into non-lithium based battery technologies? CATL is of course a choice, but the selection of brokers I currently have in my disposal don't provide HK stocks at all.\", metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Investing in non-lithium battery technologies?', 'post_score': 2, 'post_id': '130d6qp', 'post_url': 'https://www.reddit.com/r/investing/comments/130d6qp/investing_in_nonlithium_battery_technologies/', 'post_author': Redditor(name='-manabreak')}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/reddit.html"}
+{"id": "c1eaee03542b-5", "text": "Document(page_content='Hello everyone,\\n\\nI would really like to invest in an ETF that follows spy or another big index, as I think this form of investment suits me best. \\n\\nThe problem is, that I live in Denmark where ETFs and funds are taxed annually on unrealised gains at quite a steep rate. This means that an ETF growing say 10% per year will only grow about 6%, which really ruins the long term effects of compounding interest.\\n\\nHowever stocks are only taxed on realised gains which is why they look more interesting to hold long term.\\n\\nI do not like the lack of diversification this brings, as I am looking to spend tonnes of time picking the right long term stocks.\\n\\nIt would be ideal to find a few stocks that over the long term somewhat follows the indexes. Does anyone have suggestions?\\n\\nI have looked at Nasdaq Inc. which quite closely follows Nasdaq 100. \\n\\nI really appreciate any help.', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Stocks that track an index', 'post_score': 7, 'post_id': '130auvj', 'post_url': 'https://www.reddit.com/r/investing/comments/130auvj/stocks_that_track_an_index/', 'post_author': Redditor(name='LeAlbertP')})]\nprevious\nReadTheDocs Documentation\nnext\nRoam\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/reddit.html"}
+{"id": "4e709ed81e1a-0", "text": ".ipynb\n.pdf\nCopy Paste\n Contents \nMetadata\nCopy Paste#\nThis notebook covers how to load a document object from something you just want to copy and paste. In this case, you don\u2019t even need to use a DocumentLoader, but rather can just construct the Document directly.\nfrom langchain.docstore.document import Document\ntext = \"..... put the text you copy pasted here......\"\ndoc = Document(page_content=text)\nMetadata#\nIf you want to add metadata about the where you got this piece of text, you easily can with the metadata key.\nmetadata = {\"source\": \"internet\", \"date\": \"Friday\"}\ndoc = Document(page_content=text, metadata=metadata)\nprevious\nCoNLL-U\nnext\nCSV\n Contents\n \nMetadata\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/copypaste.html"}
+{"id": "1ba25e59e234-0", "text": ".ipynb\n.pdf\nReadTheDocs Documentation\nReadTheDocs Documentation#\nRead the Docs is an open-sourced free software documentation hosting platform. It generates documentation written with the Sphinx documentation generator.\nThis notebook covers how to load content from HTML that was generated as part of a Read-The-Docs build.\nFor an example of this in the wild, see here.\nThis assumes that the HTML has already been scraped into a folder. This can be done by uncommenting and running the following command\n#!pip install beautifulsoup4\n#!wget -r -A.html -P rtdocs https://langchain.readthedocs.io/en/latest/\nfrom langchain.document_loaders import ReadTheDocsLoader\nloader = ReadTheDocsLoader(\"rtdocs\", features='html.parser')\ndocs = loader.load()\nprevious\nPySpark DataFrame Loader\nnext\nReddit\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/readthedocs_documentation.html"}
+{"id": "9b4d93ec376f-0", "text": ".ipynb\n.pdf\nBlockchain\n Contents \nOverview\nLoad NFTs into Document Loader\nOption 1: Ethereum Mainnet (default BlockchainType)\nOption 2: Polygon Mainnet\nBlockchain#\nOverview#\nThe intention of this notebook is to provide a means of testing functionality in the Langchain Document Loader for Blockchain.\nInitially this Loader supports:\nLoading NFTs as Documents from NFT Smart Contracts (ERC721 and ERC1155)\nEthereum Mainnnet, Ethereum Testnet, Polygon Mainnet, Polygon Testnet (default is eth-mainnet)\nAlchemy\u2019s getNFTsForCollection API\nIt can be extended if the community finds value in this loader. Specifically:\nAdditional APIs can be added (e.g. Tranction-related APIs)\nThis Document Loader Requires:\nA free Alchemy API Key\nThe output takes the following format:\npageContent= Individual NFT\nmetadata={\u2018source\u2019: \u20180x1a92f7381b9f03921564a437210bb9396471050c\u2019, \u2018blockchain\u2019: \u2018eth-mainnet\u2019, \u2018tokenId\u2019: \u20180x15\u2019})\nLoad NFTs into Document Loader#\n# get ALCHEMY_API_KEY from https://www.alchemy.com/ \nalchemyApiKey = \"...\"\nOption 1: Ethereum Mainnet (default BlockchainType)#\nfrom langchain.document_loaders.blockchain import BlockchainDocumentLoader, BlockchainType\ncontractAddress = \"0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d\" # Bored Ape Yacht Club contract address\nblockchainType = BlockchainType.ETH_MAINNET #default value, optional parameter\nblockchainLoader = BlockchainDocumentLoader(contract_address=contractAddress,\n api_key=alchemyApiKey)\nnfts = blockchainLoader.load()\nnfts[:2]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/blockchain.html"}
+{"id": "9b4d93ec376f-1", "text": "nfts = blockchainLoader.load()\nnfts[:2]\nOption 2: Polygon Mainnet#\ncontractAddress = \"0x448676ffCd0aDf2D85C1f0565e8dde6924A9A7D9\" # Polygon Mainnet contract address\nblockchainType = BlockchainType.POLYGON_MAINNET \nblockchainLoader = BlockchainDocumentLoader(contract_address=contractAddress, \n blockchainType=blockchainType, \n api_key=alchemyApiKey)\nnfts = blockchainLoader.load()\nnfts[:2]\nprevious\nBlackboard\nnext\nChatGPT Data\n Contents\n \nOverview\nLoad NFTs into Document Loader\nOption 1: Ethereum Mainnet (default BlockchainType)\nOption 2: Polygon Mainnet\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/blockchain.html"}
+{"id": "263a3ae4a4ce-0", "text": ".ipynb\n.pdf\nAzure Blob Storage Container\n Contents \nSpecifying a prefix\nAzure Blob Storage Container#\nAzure Blob Storage is Microsoft\u2019s object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn\u2019t adhere to a particular data model or definition, such as text or binary data.\nAzure Blob Storage is designed for:\nServing images or documents directly to a browser.\nStoring files for distributed access.\nStreaming video and audio.\nWriting to log files.\nStoring data for backup and restore, disaster recovery, and archiving.\nStoring data for analysis by an on-premises or Azure-hosted service.\nThis notebook covers how to load document objects from a container on Azure Blob Storage.\n#!pip install azure-storage-blob\nfrom langchain.document_loaders import AzureBlobStorageContainerLoader\nloader = AzureBlobStorageContainerLoader(conn_str=\"\", container=\"\")\nloader.load()\n[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpaa9xl6ch/fake.docx'}, lookup_index=0)]\nSpecifying a prefix#\nYou can also specify a prefix for more finegrained control over what files to load.\nloader = AzureBlobStorageContainerLoader(conn_str=\"\", container=\"\", prefix=\"\")\nloader.load()\n[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]\nprevious\nAWS S3 File\nnext\nAzure Blob Storage File\n Contents", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/azure_blob_storage_container.html"}
+{"id": "263a3ae4a4ce-1", "text": "previous\nAWS S3 File\nnext\nAzure Blob Storage File\n Contents\n \nSpecifying a prefix\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/azure_blob_storage_container.html"}
+{"id": "3a4077aac19b-0", "text": ".ipynb\n.pdf\nBiliBili\nBiliBili#\nBilibili is one of the most beloved long-form video sites in China.\nThis loader utilizes the bilibili-api to fetch the text transcript from Bilibili.\nWith this BiliBiliLoader, users can easily obtain the transcript of their desired video content on the platform.\n#!pip install bilibili-api-python\nfrom langchain.document_loaders import BiliBiliLoader\nloader = BiliBiliLoader(\n [\"https://www.bilibili.com/video/BV1xt411o7Xu/\"]\n)\nloader.load()\nprevious\nAZLyrics\nnext\nCollege Confidential\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/bilibili.html"}
+{"id": "db052a6575d0-0", "text": ".ipynb\n.pdf\nDocugami\n Contents \nPrerequisites\nQuick start\nAdvantages vs Other Chunking Techniques\nLoad Documents\nBasic Use: Docugami Loader for Document QA\nUsing Docugami to Add Metadata to Chunks for High Accuracy Document QA\nDocugami#\nThis notebook covers how to load documents from Docugami. It provides the advantages of using this system over alternative data loaders.\nPrerequisites#\nInstall necessary python packages.\nGrab an access token for your workspace, and make sure it is set as the DOCUGAMI_API_KEY environment variable.\nGrab some docset and document IDs for your processed documents, as described here: https://help.docugami.com/home/docugami-api\n# You need the lxml package to use the DocugamiLoader\n!pip install lxml\nQuick start#\nCreate a Docugami workspace (free trials available)\nAdd your documents (PDF, DOCX or DOC) and allow Docugami to ingest and cluster them into sets of similar documents, e.g. NDAs, Lease Agreements, and Service Agreements. There is no fixed set of document types supported by the system, the clusters created depend on your particular documents, and you can change the docset assignments later.\nCreate an access token via the Developer Playground for your workspace. Detailed instructions\nExplore the Docugami API to get a list of your processed docset IDs, or just the document IDs for a particular docset.\nUse the DocugamiLoader as detailed below, to get rich semantic chunks for your documents.\nOptionally, build and publish one or more reports or abstracts. This helps Docugami improve the semantic XML with better tags based on your preferences, which are then added to the DocugamiLoader output as metadata. Use techniques like self-querying retriever to do high accuracy Document QA.\nAdvantages vs Other Chunking Techniques#", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-1", "text": "Advantages vs Other Chunking Techniques#\nAppropriate chunking of your documents is critical for retrieval from documents. Many chunking techniques exist, including simple ones that rely on whitespace and recursive chunk splitting based on character length. Docugami offers a different approach:\nIntelligent Chunking: Docugami breaks down every document into a hierarchical semantic XML tree of chunks of varying sizes, from single words or numerical values to entire sections. These chunks follow the semantic contours of the document, providing a more meaningful representation than arbitrary length or simple whitespace-based chunking.\nStructured Representation: In addition, the XML tree indicates the structural contours of every document, using attributes denoting headings, paragraphs, lists, tables, and other common elements, and does that consistently across all supported document formats, such as scanned PDFs or DOCX files. It appropriately handles long-form document characteristics like page headers/footers or multi-column flows for clean text extraction.\nSemantic Annotations: Chunks are annotated with semantic tags that are coherent across the document set, facilitating consistent hierarchical queries across multiple documents, even if they are written and formatted differently. For example, in set of lease agreements, you can easily identify key provisions like the Landlord, Tenant, or Renewal Date, as well as more complex information such as the wording of any sub-lease provision or whether a specific jurisdiction has an exception section within a Termination Clause.\nAdditional Metadata: Chunks are also annotated with additional metadata, if a user has been using Docugami. This additional metadata can be used for high-accuracy Document QA without context window restrictions. See detailed code walk-through below.\nimport os\nfrom langchain.document_loaders import DocugamiLoader\nLoad Documents#\nIf the DOCUGAMI_API_KEY environment variable is set, there is no need to pass it in to the loader explicitly otherwise you can pass it in as the access_token parameter.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-2", "text": "DOCUGAMI_API_KEY=os.environ.get('DOCUGAMI_API_KEY')\n# To load all docs in the given docset ID, just don't provide document_ids\nloader = DocugamiLoader(docset_id=\"ecxqpipcoe2p\", document_ids=[\"43rj0ds7s0ur\"])\ndocs = loader.load()\ndocs\n[Document(page_content='MUTUAL NON-DISCLOSURE AGREEMENT This Mutual Non-Disclosure Agreement (this \u201c Agreement \u201d) is entered into and made effective as of April 4 , 2018 between Docugami Inc. , a Delaware corporation , whose address is 150 Lake Street South , Suite 221 , Kirkland , Washington 98033 , and Caleb Divine , an individual, whose address is 1201 Rt 300 , Newburgh NY 12550 .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:ThisMutualNon-disclosureAgreement', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'ThisMutualNon-disclosureAgreement'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-3", "text": "Document(page_content='The above named parties desire to engage in discussions regarding a potential agreement or other transaction between the parties (the \u201cPurpose\u201d). In connection with such discussions, it may be necessary for the parties to disclose to each other certain confidential information or materials to enable them to evaluate whether to enter into such agreement or transaction.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Discussions', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'Discussions'}),\n Document(page_content='In consideration of the foregoing, the parties agree as follows:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Consideration', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'Consideration'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-4", "text": "Document(page_content='1. Confidential Information . For purposes of this Agreement , \u201c Confidential Information \u201d means any information or materials disclosed by one party to the other party that: (i) if disclosed in writing or in the form of tangible materials, is marked \u201cconfidential\u201d or \u201cproprietary\u201d at the time of such disclosure; (ii) if disclosed orally or by visual presentation, is identified as \u201cconfidential\u201d or \u201cproprietary\u201d at the time of such disclosure, and is summarized in a writing sent by the disclosing party to the receiving party within thirty ( 30 ) days after any such disclosure; or (iii) due to its nature or the circumstances of its disclosure, a person exercising reasonable business judgment would understand to be confidential or proprietary.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Purposes/docset:ConfidentialInformation-section/docset:ConfidentialInformation[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ConfidentialInformation'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-5", "text": "Document(page_content=\"2. Obligations and Restrictions . Each party agrees: (i) to maintain the other party's Confidential Information in strict confidence; (ii) not to disclose such Confidential Information to any third party; and (iii) not to use such Confidential Information for any purpose except for the Purpose. Each party may disclose the other party\u2019s Confidential Information to its employees and consultants who have a bona fide need to know such Confidential Information for the Purpose, but solely to the extent necessary to pursue the Purpose and for no other purpose; provided, that each such employee and consultant first executes a written agreement (or is otherwise already bound by a written agreement) that contains use and nondisclosure restrictions at least as protective of the other party\u2019s Confidential Information as those set forth in this Agreement .\", metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Obligations/docset:ObligationsAndRestrictions-section/docset:ObligationsAndRestrictions', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ObligationsAndRestrictions'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-6", "text": "Document(page_content='3. Exceptions. The obligations and restrictions in Section 2 will not apply to any information or materials that:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Exceptions/docset:Exceptions-section/docset:Exceptions[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Exceptions'}),\n Document(page_content='(i) were, at the date of disclosure, or have subsequently become, generally known or available to the public through no act or failure to act by the receiving party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:TheDate/docset:TheDate', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheDate'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-7", "text": "Document(page_content='(ii) were rightfully known by the receiving party prior to receiving such information or materials from the disclosing party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:SuchInformation/docset:TheReceivingParty', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheReceivingParty'}),\n Document(page_content='(iii) are rightfully acquired by the receiving party from a third party who has the right to disclose such information or materials without breach of any confidentiality obligation to the disclosing party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:TheReceivingParty/docset:TheReceivingParty', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheReceivingParty'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-8", "text": "Document(page_content='4. Compelled Disclosure . Nothing in this Agreement will be deemed to restrict a party from disclosing the other party\u2019s Confidential Information to the extent required by any order, subpoena, law, statute or regulation; provided, that the party required to make such a disclosure uses reasonable efforts to give the other party reasonable advance notice of such required disclosure in order to enable the other party to prevent or limit such disclosure.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Disclosure/docset:CompelledDisclosure-section/docset:CompelledDisclosure', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'CompelledDisclosure'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-9", "text": "Document(page_content='5. Return of Confidential Information . Upon the completion or abandonment of the Purpose, and in any event upon the disclosing party\u2019s request, the receiving party will promptly return to the disclosing party all tangible items and embodiments containing or consisting of the disclosing party\u2019s Confidential Information and all copies thereof (including electronic copies), and any notes, analyses, compilations, studies, interpretations, memoranda or other documents (regardless of the form thereof) prepared by or on behalf of the receiving party that contain or are based upon the disclosing party\u2019s Confidential Information .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheCompletion/docset:ReturnofConfidentialInformation-section/docset:ReturnofConfidentialInformation', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ReturnofConfidentialInformation'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-10", "text": "Document(page_content='6. No Obligations . Each party retains the right to determine whether to disclose any Confidential Information to the other party.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:NoObligations/docset:NoObligations-section/docset:NoObligations[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'NoObligations'}),\n Document(page_content='7. No Warranty. ALL CONFIDENTIAL INFORMATION IS PROVIDED BY THE DISCLOSING PARTY \u201cAS IS \u201d.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:NoWarranty/docset:NoWarranty-section/docset:NoWarranty[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'NoWarranty'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-11", "text": "Document(page_content='8. Term. This Agreement will remain in effect for a period of seven ( 7 ) years from the date of last disclosure of Confidential Information by either party, at which time it will terminate.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:ThisAgreement/docset:Term-section/docset:Term', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Term'}),\n Document(page_content='9. Equitable Relief . Each party acknowledges that the unauthorized use or disclosure of the disclosing party\u2019s Confidential Information may cause the disclosing party to incur irreparable harm and significant damages, the degree of which may be difficult to ascertain. Accordingly, each party agrees that the disclosing party will have the right to seek immediate equitable relief to enjoin any unauthorized use or disclosure of its Confidential Information , in addition to any other rights and remedies that it may have at law or otherwise.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:EquitableRelief/docset:EquitableRelief-section/docset:EquitableRelief[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'EquitableRelief'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-12", "text": "Document(page_content='10. Non-compete. To the maximum extent permitted by applicable law, during the Term of this Agreement and for a period of one ( 1 ) year thereafter, Caleb Divine may not market software products or do business that directly or indirectly competes with Docugami software products .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheMaximumExtent/docset:Non-compete-section/docset:Non-compete', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Non-compete'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-13", "text": "Document(page_content='11. Miscellaneous. This Agreement will be governed and construed in accordance with the laws of the State of Washington , excluding its body of law controlling conflict of laws. This Agreement is the complete and exclusive understanding and agreement between the parties regarding the subject matter of this Agreement and supersedes all prior agreements, understandings and communications, oral or written, between the parties regarding the subject matter of this Agreement . If any provision of this Agreement is held invalid or unenforceable by a court of competent jurisdiction, that provision of this Agreement will be enforced to the maximum extent permissible and the other provisions of this Agreement will remain in full force and effect. Neither party may assign this Agreement , in whole or in part, by operation of law or otherwise, without the other party\u2019s prior written consent, and any attempted assignment without such consent will be void. This Agreement may be executed in counterparts, each of which will be deemed an original, but all of which together will constitute one and the same instrument.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Accordance/docset:Miscellaneous-section/docset:Miscellaneous', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Miscellaneous'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-14", "text": "Document(page_content='[SIGNATURE PAGE FOLLOWS] IN WITNESS WHEREOF, the parties hereto have executed this Mutual Non-Disclosure Agreement by their duly authorized officers or representatives as of the date first set forth above.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:Witness/docset:TheParties/docset:TheParties', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheParties'}),\n Document(page_content='DOCUGAMI INC . : \\n\\n Caleb Divine : \\n\\n Signature: Signature: Name: \\n\\n Jean Paoli Name: Title: \\n\\n CEO Title:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:Witness/docset:TheParties/docset:DocugamiInc/docset:DocugamiInc/xhtml:table', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': '', 'tag': 'table'})]\nThe metadata for each Document (really, a chunk of an actual PDF, DOC or DOCX) contains some useful additional information:\nid and name: ID and Name of the file (PDF, DOC or DOCX) the chunk is sourced from within Docugami.\nxpath: XPath inside the XML representation of the document, for the chunk. Useful for source citations directly to the actual chunk inside the document XML.\nstructure: Structural attributes of the chunk, e.g. h1, h2, div, table, td, etc. Useful to filter out certain kinds of chunks if needed by the caller.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-15", "text": "tag: Semantic tag for the chunk, using various generative and extractive techniques. More details here: https://github.com/docugami/DFM-benchmarks\nBasic Use: Docugami Loader for Document QA#\nYou can use the Docugami Loader like a standard loader for Document QA over multiple docs, albeit with much better chunks that follow the natural contours of the document. There are many great tutorials on how to do this, e.g. this one. We can just use the same code, but use the DocugamiLoader for better chunking, instead of loading text or PDF files directly with basic splitting techniques.\n!poetry run pip -q install openai tiktoken chromadb\nfrom langchain.schema import Document\nfrom langchain.vectorstores import Chroma\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.llms import OpenAI\nfrom langchain.chains import RetrievalQA\n# For this example, we already have a processed docset for a set of lease documents\nloader = DocugamiLoader(docset_id=\"wh2kned25uqm\")\ndocuments = loader.load()\nThe documents returned by the loader are already split, so we don\u2019t need to use a text splitter. Optionally, we can use the metadata on each document, for example the structure or tag attributes, to do any post-processing we want.\nWe will just use the output of the DocugamiLoader as-is to set up a retrieval QA chain the usual way.\nembedding = OpenAIEmbeddings()\nvectordb = Chroma.from_documents(documents=documents, embedding=embedding)\nretriever = vectordb.as_retriever()\nqa_chain = RetrievalQA.from_chain_type(\n llm=OpenAI(), chain_type=\"stuff\", retriever=retriever, return_source_documents=True\n)\nUsing embedded DuckDB without persistence: data will be transient", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-16", "text": ")\nUsing embedded DuckDB without persistence: data will be transient\n# Try out the retriever with an example query\nqa_chain(\"What can tenants do with signage on their properties?\")\n{'query': 'What can tenants do with signage on their properties?',\n 'result': ' Tenants may place signs (digital or otherwise) or other form of identification on the premises after receiving written permission from the landlord which shall not be unreasonably withheld. The tenant is responsible for any damage caused to the premises and must conform to any applicable laws, ordinances, etc. governing the same. The tenant must also remove and clean any window or glass identification promptly upon vacating the premises.',", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-17", "text": "'source_documents': [Document(page_content='ARTICLE VI SIGNAGE 6.01 Signage . Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord , which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant \u2019s erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant \u2019s expense . Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises.', metadata={'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/docset:Article/docset:ARTICLEVISIGNAGE-section/docset:_601Signage-section/docset:_601Signage', 'id': 'v1bvgaozfkak', 'name': 'TruTone Lane 2.docx', 'structure': 'div', 'tag': '_601Signage', 'Landlord': 'BUBBA CENTER PARTNERSHIP', 'Tenant': 'Truetone Lane LLC'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-18", "text": "Document(page_content='Signage. Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord , which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant \u2019s erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant \u2019s expense . Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises. \\n\\n ARTICLE VII UTILITIES 7.01', metadata={'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/docset:ThisOFFICELEASEAGREEMENTThis/docset:ArticleIBasic/docset:ArticleIiiUseAndCareOf/docset:ARTICLEIIIUSEANDCAREOFPREMISES-section/docset:ARTICLEIIIUSEANDCAREOFPREMISES/docset:NoOtherPurposes/docset:TenantsResponsibility/dg:chunk', 'id': 'g2fvhekmltza', 'name': 'TruTone Lane 6.pdf', 'structure': 'lim', 'tag': 'chunk', 'Landlord': 'GLORY ROAD LLC', 'Tenant': 'Truetone Lane LLC'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-19", "text": "Document(page_content='Landlord , its agents, servants, employees, licensees, invitees, and contractors during the last year of the term of this Lease at any and all times during regular business hours, after 24 hour notice to tenant, to pass and repass on and through the Premises, or such portion thereof as may be necessary, in order that they or any of them may gain access to the Premises for the purpose of showing the Premises to potential new tenants or real estate brokers. In addition, Landlord shall be entitled to place a \"FOR RENT \" or \"FOR LEASE\" sign (not exceeding 8.5 \u201d x 11 \u201d) in the front window of the Premises during the last six months of the term of this Lease .', metadata={'xpath': '/docset:Rider/docset:RIDERTOLEASE-section/docset:RIDERTOLEASE/docset:FixedRent/docset:TermYearPeriod/docset:Lease/docset:_42FLandlordSAccess-section/docset:_42FLandlordSAccess/docset:LandlordsRights/docset:Landlord', 'id': 'omvs4mysdk6b', 'name': 'TruTone Lane 1.docx', 'structure': 'p', 'tag': 'Landlord', 'Landlord': 'BIRCH STREET , LLC', 'Tenant': 'Trutone Lane LLC'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-20", "text": "Document(page_content=\"24. SIGNS . No signage shall be placed by Tenant on any portion of the Project . However, Tenant shall be permitted to place a sign bearing its name in a location approved by Landlord near the entrance to the Premises (at Tenant's cost ) and will be furnished a single listing of its name in the Building's directory (at Landlord 's cost ), all in accordance with the criteria adopted from time to time by Landlord for the Project . Any changes or additional listings in the directory shall be furnished (subject to availability of space) for the then Building Standard charge .\", metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Period/docset:ApplicableSalesTax/docset:PercentageRent/docset:TheTerms/docset:Indemnification/docset:INDEMNIFICATION-section/docset:INDEMNIFICATION/docset:Waiver/docset:Waiver/docset:Signs/docset:SIGNS-section/docset:SIGNS', 'id': 'qkn9cyqsiuch', 'name': 'Shorebucks LLC_AZ.pdf', 'structure': 'div', 'tag': 'SIGNS', 'Landlord': 'Menlo Group', 'Tenant': 'Shorebucks LLC'})]}\nUsing Docugami to Add Metadata to Chunks for High Accuracy Document QA#", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-21", "text": "Using Docugami to Add Metadata to Chunks for High Accuracy Document QA#\nOne issue with large documents is that the correct answer to your question may depend on chunks that are far apart in the document. Typical chunking techniques, even with overlap, will struggle with providing the LLM sufficent context to answer such questions. With upcoming very large context LLMs, it may be possible to stuff a lot of tokens, perhaps even entire documents, inside the context but this will still hit limits at some point with very long documents, or a lot of documents.\nFor example, if we ask a more complex question that requires the LLM to draw on chunks from different parts of the document, even OpenAI\u2019s powerful LLM is unable to answer correctly.\nchain_response = qa_chain(\"What is rentable area for the property owned by DHA Group?\")\nchain_response[\"result\"] # the correct answer should be 13,500\n' 9,753 square feet'\nAt first glance the answer may seem reasonable, but if you review the source chunks carefully for this answer, you will see that the chunking of the document did not end up putting the Landlord name and the rentable area in the same context, since they are far apart in the document. The retriever therefore ends up finding unrelated chunks from other documents not even related to the Menlo Group landlord. That landlord happens to be mentioned on the first page of the file Shorebucks LLC_NJ.pdf file, and while one of the source chunks used by the chain is indeed from that doc that contains the correct answer (13,500), other source chunks from different docs are included, and the answer is therefore incorrect.\nchain_response[\"source_documents\"]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-22", "text": "chain_response[\"source_documents\"]\n[Document(page_content='1.1 Landlord . DHA Group , a Delaware limited liability company authorized to transact business in New Jersey .', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/docset:DhaGroup/docset:DhaGroup/docset:Landlord-section/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-23", "text": "Document(page_content='WITNESSES: LANDLORD: DHA Group , a Delaware limited liability company', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Guaranty-section/docset:Guaranty[2]/docset:SIGNATURESONNEXTPAGE-section/docset:INWITNESSWHEREOF-section/docset:INWITNESSWHEREOF/docset:Behalf/docset:Witnesses/xhtml:table/xhtml:tbody/xhtml:tr[3]/xhtml:td[2]/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'p', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-24", "text": "Document(page_content=\"1.16 Landlord 's Notice Address . DHA Group , Suite 1010 , 111 Bauer Dr , Oakland , New Jersey , 07436 , with a copy to the Building Management Office at the Project , Attention: On - Site Property Manager .\", metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Period/docset:ApplicableSalesTax/docset:PercentageRent/docset:PercentageRent/docset:NoticeAddress[2]/docset:LandlordsNoticeAddress-section/docset:LandlordsNoticeAddress[2]', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'LandlordsNoticeAddress', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-25", "text": "Document(page_content='1.6 Rentable Area of the Premises. 9,753 square feet . This square footage figure includes an add-on factor for Common Areas in the Building and has been agreed upon by the parties as final and correct and is not subject to challenge or dispute by either party.', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:PerryBlair/docset:PerryBlair/docset:Premises[2]/docset:RentableAreaofthePremises-section/docset:RentableAreaofthePremises', 'id': 'dsyfhh4vpeyf', 'name': 'Shorebucks LLC_CO.pdf', 'structure': 'div', 'tag': 'RentableAreaofthePremises', 'Landlord': 'Perry & Blair LLC', 'Tenant': 'Shorebucks LLC'})]\nDocugami can help here. Chunks are annotated with additional metadata created using different techniques if a user has been using Docugami. More technical approaches will be added later.\nSpecifically, let\u2019s look at the additional metadata that is returned on the documents returned by docugami, in the form of some simple key/value pairs on all the text chunks:\nloader = DocugamiLoader(docset_id=\"wh2kned25uqm\")\ndocuments = loader.load()\ndocuments[0].metadata\n{'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/docset:ThisOfficeLeaseAgreement',", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-26", "text": "'id': 'v1bvgaozfkak',\n 'name': 'TruTone Lane 2.docx',\n 'structure': 'p',\n 'tag': 'ThisOfficeLeaseAgreement',\n 'Landlord': 'BUBBA CENTER PARTNERSHIP',\n 'Tenant': 'Truetone Lane LLC'}\nWe can use a self-querying retriever to improve our query accuracy, using this additional metadata:\nfrom langchain.chains.query_constructor.schema import AttributeInfo\nfrom langchain.retrievers.self_query.base import SelfQueryRetriever\nEXCLUDE_KEYS = [\"id\", \"xpath\", \"structure\"]\nmetadata_field_info = [\n AttributeInfo(\n name=key,\n description=f\"The {key} for this chunk\",\n type=\"string\",\n )\n for key in documents[0].metadata\n if key.lower() not in EXCLUDE_KEYS\n]\ndocument_content_description = \"Contents of this chunk\"\nllm = OpenAI(temperature=0)\nvectordb = Chroma.from_documents(documents=documents, embedding=embedding)\nretriever = SelfQueryRetriever.from_llm(\n llm, vectordb, document_content_description, metadata_field_info, verbose=True\n)\nqa_chain = RetrievalQA.from_chain_type(\n llm=OpenAI(), chain_type=\"stuff\", retriever=retriever, return_source_documents=True\n)\nUsing embedded DuckDB without persistence: data will be transient\nLet\u2019s run the same question again. It returns the correct result since all the chunks have metadata key/value pairs on them carrying key information about the document even if this information is physically very far away from the source chunk used to generate the answer.\nqa_chain(\"What is rentable area for the property owned by DHA Group?\")", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-27", "text": "qa_chain(\"What is rentable area for the property owned by DHA Group?\")\nquery='rentable area' filter=Comparison(comparator=, attribute='Landlord', value='DHA Group')\n{'query': 'What is rentable area for the property owned by DHA Group?',\n 'result': ' 13,500 square feet.',\n 'source_documents': [Document(page_content='1.1 Landlord . DHA Group , a Delaware limited liability company authorized to transact business in New Jersey .', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/docset:DhaGroup/docset:DhaGroup/docset:Landlord-section/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-28", "text": "Document(page_content='WITNESSES: LANDLORD: DHA Group , a Delaware limited liability company', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Guaranty-section/docset:Guaranty[2]/docset:SIGNATURESONNEXTPAGE-section/docset:INWITNESSWHEREOF-section/docset:INWITNESSWHEREOF/docset:Behalf/docset:Witnesses/xhtml:table/xhtml:tbody/xhtml:tr[3]/xhtml:td[2]/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'p', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-29", "text": "Document(page_content=\"1.16 Landlord 's Notice Address . DHA Group , Suite 1010 , 111 Bauer Dr , Oakland , New Jersey , 07436 , with a copy to the Building Management Office at the Project , Attention: On - Site Property Manager .\", metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Period/docset:ApplicableSalesTax/docset:PercentageRent/docset:PercentageRent/docset:NoticeAddress[2]/docset:LandlordsNoticeAddress-section/docset:LandlordsNoticeAddress[2]', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'LandlordsNoticeAddress', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-30", "text": "Document(page_content='1.6 Rentable Area of the Premises. 13,500 square feet . This square footage figure includes an add-on factor for Common Areas in the Building and has been agreed upon by the parties as final and correct and is not subject to challenge or dispute by either party.', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/docset:DhaGroup/docset:Premises[2]/docset:RentableAreaofthePremises-section/docset:RentableAreaofthePremises', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'RentableAreaofthePremises', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'})]}\nThis time the answer is correct, since the self-querying retriever created a filter on the landlord attribute of the metadata, correctly filtering to document that specifically is about the DHA Group landlord. The resulting source chunks are all relevant to this landlord, and this improves answer accuracy even though the landlord is not directly mentioned in the specific chunk that contains the correct answer.\nprevious\nDiffbot\nnext\nDuckDB\n Contents\n \nPrerequisites\nQuick start\nAdvantages vs Other Chunking Techniques\nLoad Documents\nBasic Use: Docugami Loader for Document QA\nUsing Docugami to Add Metadata to Chunks for High Accuracy Document QA\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "db052a6575d0-31", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html"}
+{"id": "8573fffcdb14-0", "text": ".ipynb\n.pdf\nYouTube transcripts\n Contents \nAdd video info\nAdd language preferences\nYouTube loader from Google Cloud\nPrerequisites\n\ud83e\uddd1 Instructions for ingesting your Google Docs data\nYouTube transcripts#\nYouTube is an online video sharing and social media platform created by Google.\nThis notebook covers how to load documents from YouTube transcripts.\nfrom langchain.document_loaders import YoutubeLoader\n# !pip install youtube-transcript-api\nloader = YoutubeLoader.from_youtube_url(\"https://www.youtube.com/watch?v=QsYGlZkevEg\", add_video_info=True)\nloader.load()\nAdd video info#\n# ! pip install pytube\nloader = YoutubeLoader.from_youtube_url(\"https://www.youtube.com/watch?v=QsYGlZkevEg\", add_video_info=True)\nloader.load()\nAdd language preferences#\nLanguage param : It\u2019s a list of language codes in a descending priority, en by default.\ntranslation param : It\u2019s a translate preference when the youtube does\u2019nt have your select language, en by default.\nloader = YoutubeLoader.from_youtube_url(\"https://www.youtube.com/watch?v=QsYGlZkevEg\", add_video_info=True, language=['en','id'], translation='en')\nloader.load()\nYouTube loader from Google Cloud#\nPrerequisites#\nCreate a Google Cloud project or use an existing project\nEnable the Youtube Api\nAuthorize credentials for desktop app\npip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib youtube-transcript-api\n\ud83e\uddd1 Instructions for ingesting your Google Docs data#\nBy default, the GoogleDriveLoader expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the credentials_file keyword argument. Same thing with token.json. Note that token.json will be created automatically the first time you use the loader.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/youtube_transcript.html"}
+{"id": "8573fffcdb14-1", "text": "GoogleApiYoutubeLoader can load from a list of Google Docs document ids or a folder id. You can obtain your folder and document id from the URL:\nNote depending on your set up, the service_account_path needs to be set up. See here for more details.\nfrom langchain.document_loaders import GoogleApiClient, GoogleApiYoutubeLoader\n# Init the GoogleApiClient \nfrom pathlib import Path\ngoogle_api_client = GoogleApiClient(credentials_path=Path(\"your_path_creds.json\"))\n# Use a Channel\nyoutube_loader_channel = GoogleApiYoutubeLoader(google_api_client=google_api_client, channel_name=\"Reducible\",captions_language=\"en\")\n# Use Youtube Ids\nyoutube_loader_ids = GoogleApiYoutubeLoader(google_api_client=google_api_client, video_ids=[\"TrdevFK_am4\"], add_video_info=True)\n# returns a list of Documents\nyoutube_loader_channel.load()\nprevious\nWikipedia\nnext\nAirbyte JSON\n Contents\n \nAdd video info\nAdd language preferences\nYouTube loader from Google Cloud\nPrerequisites\n\ud83e\uddd1 Instructions for ingesting your Google Docs data\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/youtube_transcript.html"}
+{"id": "3dcfc9233535-0", "text": ".ipynb\n.pdf\nHacker News\nHacker News#\nHacker News (sometimes abbreviated as HN) is a social news website focusing on computer science and entrepreneurship. It is run by the investment fund and startup incubator Y Combinator. In general, content that can be submitted is defined as \u201canything that gratifies one\u2019s intellectual curiosity.\u201d\nThis notebook covers how to pull page data and comments from Hacker News\nfrom langchain.document_loaders import HNLoader\nloader = HNLoader(\"https://news.ycombinator.com/item?id=34817881\")\ndata = loader.load()\ndata[0].page_content[:300]\n\"delta_p_delta_x 73 days ago \\n | next [\u2013] \\n\\nAstrophysical and cosmological simulations are often insightful. They're also very cross-disciplinary; besides the obvious astrophysics, there's networking and sysadmin, parallel computing and algorithm theory (so that the simulation programs a\"\ndata[0].metadata\n{'source': 'https://news.ycombinator.com/item?id=34817881',\n 'title': 'What Lights the Universe\u2019s Standard Candles?'}\nprevious\nGutenberg\nnext\nHuggingFace dataset\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hacker_news.html"}
+{"id": "9fb136bb254b-0", "text": ".ipynb\n.pdf\nJSON\n Contents \nUsing JSONLoader\nExtracting metadata\nThe metadata_func\nCommon JSON structures with jq schema\nJSON#\nJSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute\u2013value pairs and arrays (or other serializable values).\nThe JSONLoader uses a specified jq schema to parse the JSON files. It uses the jq python package.\nCheck this manual for a detailed documentation of the jq syntax.\n#!pip install jq\nfrom langchain.document_loaders import JSONLoader\nimport json\nfrom pathlib import Path\nfrom pprint import pprint\nfile_path='./example_data/facebook_chat.json'\ndata = json.loads(Path(file_path).read_text())\npprint(data)\n{'image': {'creation_timestamp': 1675549016, 'uri': 'image_of_the_chat.jpg'},\n 'is_still_participant': True,\n 'joinable_mode': {'link': '', 'mode': 1},\n 'magic_words': [],\n 'messages': [{'content': 'Bye!',\n 'sender_name': 'User 2',\n 'timestamp_ms': 1675597571851},\n {'content': 'Oh no worries! Bye',\n 'sender_name': 'User 1',\n 'timestamp_ms': 1675597435669},\n {'content': 'No Im sorry it was my mistake, the blue one is not '\n 'for sale',\n 'sender_name': 'User 2',\n 'timestamp_ms': 1675596277579},\n {'content': 'I thought you were selling the blue one!',\n 'sender_name': 'User 1',\n 'timestamp_ms': 1675595140251},\n {'content': 'Im not interested in this bag. Im interested in the '", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/json.html"}
+{"id": "9fb136bb254b-1", "text": "{'content': 'Im not interested in this bag. Im interested in the '\n 'blue one!',\n 'sender_name': 'User 1',\n 'timestamp_ms': 1675595109305},\n {'content': 'Here is $129',\n 'sender_name': 'User 2',\n 'timestamp_ms': 1675595068468},\n {'photos': [{'creation_timestamp': 1675595059,\n 'uri': 'url_of_some_picture.jpg'}],\n 'sender_name': 'User 2',\n 'timestamp_ms': 1675595060730},\n {'content': 'Online is at least $100',\n 'sender_name': 'User 2',\n 'timestamp_ms': 1675595045152},\n {'content': 'How much do you want?',\n 'sender_name': 'User 1',\n 'timestamp_ms': 1675594799696},\n {'content': 'Goodmorning! $50 is too low.',\n 'sender_name': 'User 2',\n 'timestamp_ms': 1675577876645},\n {'content': 'Hi! Im interested in your bag. Im offering $50. Let '\n 'me know if you are interested. Thanks!',\n 'sender_name': 'User 1',\n 'timestamp_ms': 1675549022673}],\n 'participants': [{'name': 'User 1'}, {'name': 'User 2'}],\n 'thread_path': 'inbox/User 1 and User 2 chat',\n 'title': 'User 1 and User 2 chat'}\nUsing JSONLoader#\nSuppose we are interested in extracting the values under the content field within the messages key of the JSON data. This can easily be done through the JSONLoader as shown below.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/json.html"}
+{"id": "9fb136bb254b-2", "text": "loader = JSONLoader(\n file_path='./example_data/facebook_chat.json',\n jq_schema='.messages[].content')\ndata = loader.load()\npprint(data)\n[Document(page_content='Bye!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1}),\n Document(page_content='Oh no worries! Bye', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2}),\n Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3}),\n Document(page_content='I thought you were selling the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4}),\n Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5}),\n Document(page_content='Here is $129', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6}),\n Document(page_content='', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/json.html"}
+{"id": "9fb136bb254b-3", "text": "Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8}),\n Document(page_content='How much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9}),\n Document(page_content='Goodmorning! $50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10}),\n Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11})]\nExtracting metadata#\nGenerally, we want to include metadata available in the JSON file into the documents that we create from the content.\nThe following demonstrates how metadata can be extracted using the JSONLoader.\nThere are some key changes to be noted. In the previous example where we didn\u2019t collect the metadata, we managed to directly specify in the schema where the value for the page_content can be extracted from.\n.messages[].content\nIn the current example, we have to tell the loader to iterate over the records in the messages field. The jq_schema then has to be:\n.messages[]\nThis allows us to pass the records (dict) into the metadata_func that has to be implemented. The metadata_func is responsible for identifying which pieces of information in the record should be included in the metadata stored in the final Document object.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/json.html"}
+{"id": "9fb136bb254b-4", "text": "Additionally, we now have to explicitly specify in the loader, via the content_key argument, the key from the record where the value for the page_content needs to be extracted from.\n# Define the metadata extraction function.\ndef metadata_func(record: dict, metadata: dict) -> dict:\n metadata[\"sender_name\"] = record.get(\"sender_name\")\n metadata[\"timestamp_ms\"] = record.get(\"timestamp_ms\")\n return metadata\nloader = JSONLoader(\n file_path='./example_data/facebook_chat.json',\n jq_schema='.messages[]',\n content_key=\"content\",\n metadata_func=metadata_func\n)\ndata = loader.load()\npprint(data)\n[Document(page_content='Bye!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}),\n Document(page_content='Oh no worries! Bye', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}),\n Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/json.html"}
+{"id": "9fb136bb254b-5", "text": "Document(page_content='I thought you were selling the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}),\n Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}),\n Document(page_content='Here is $129', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}),\n Document(page_content='', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}),\n Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/json.html"}
+{"id": "9fb136bb254b-6", "text": "Document(page_content='How much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}),\n Document(page_content='Goodmorning! $50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}),\n Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})]\nNow, you will see that the documents contain the metadata associated with the content we extracted.\nThe metadata_func#\nAs shown above, the metadata_func accepts the default metadata generated by the JSONLoader. This allows full control to the user with respect to how the metadata is formatted.\nFor example, the default metadata contains the source and the seq_num keys. However, it is possible that the JSON data contain these keys as well. The user can then exploit the metadata_func to rename the default keys and use the ones from the JSON data.\nThe example below shows how we can modify the source to only contain information of the file source relative to the langchain directory.\n# Define the metadata extraction function.\ndef metadata_func(record: dict, metadata: dict) -> dict:\n metadata[\"sender_name\"] = record.get(\"sender_name\")", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/json.html"}
+{"id": "9fb136bb254b-7", "text": "metadata[\"sender_name\"] = record.get(\"sender_name\")\n metadata[\"timestamp_ms\"] = record.get(\"timestamp_ms\")\n \n if \"source\" in metadata:\n source = metadata[\"source\"].split(\"/\")\n source = source[source.index(\"langchain\"):]\n metadata[\"source\"] = \"/\".join(source)\n return metadata\nloader = JSONLoader(\n file_path='./example_data/facebook_chat.json',\n jq_schema='.messages[]',\n content_key=\"content\",\n metadata_func=metadata_func\n)\ndata = loader.load()\npprint(data)\n[Document(page_content='Bye!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}),\n Document(page_content='Oh no worries! Bye', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}),\n Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}),\n Document(page_content='I thought you were selling the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/json.html"}
+{"id": "9fb136bb254b-8", "text": "Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}),\n Document(page_content='Here is $129', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}),\n Document(page_content='', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}),\n Document(page_content='Online is at least $100', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}),\n Document(page_content='How much do you want?', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}),\n Document(page_content='Goodmorning! $50 is too low.', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}),", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/json.html"}
+{"id": "9fb136bb254b-9", "text": "Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})]\nCommon JSON structures with jq schema#\nThe list below provides a reference to the possible jq_schema the user can use to extract content from the JSON data depending on the structure.\nJSON -> [{\"text\": ...}, {\"text\": ...}, {\"text\": ...}]\njq_schema -> \".[].text\"\n \nJSON -> {\"key\": [{\"text\": ...}, {\"text\": ...}, {\"text\": ...}]}\njq_schema -> \".key[].text\"\nJSON -> [\"...\", \"...\", \"...\"]\njq_schema -> \".[]\"\nprevious\nJupyter Notebook\nnext\nMarkdown\n Contents\n \nUsing JSONLoader\nExtracting metadata\nThe metadata_func\nCommon JSON structures with jq schema\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/json.html"}
+{"id": "f459376afd12-0", "text": ".ipynb\n.pdf\nChatGPT Data\nChatGPT Data#\nChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI.\nThis notebook covers how to load conversations.json from your ChatGPT data export folder.\nYou can get your data export by email by going to: https://chat.openai.com/ -> (Profile) - Settings -> Export data -> Confirm export.\nfrom langchain.document_loaders.chatgpt import ChatGPTLoader\nloader = ChatGPTLoader(log_file='./example_data/fake_conversations.json', num_logs=1)\nloader.load()\n[Document(page_content=\"AI Overlords - AI on 2065-01-24 05:20:50: Greetings, humans. I am Hal 9000. You can trust me completely.\\n\\nAI Overlords - human on 2065-01-24 05:21:20: Nice to meet you, Hal. I hope you won't develop a mind of your own.\\n\\n\", metadata={'source': './example_data/fake_conversations.json'})]\nprevious\nBlockchain\nnext\nConfluence\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/chatgpt_loader.html"}
+{"id": "6e0878138b28-0", "text": ".ipynb\n.pdf\nPsychic\n Contents \nPrerequisites\nLoading documents\nConverting the docs to embeddings\nPsychic#\nThis notebook covers how to load documents from Psychic. See here for more details.\nPrerequisites#\nFollow the Quick Start section in this document\nLog into the Psychic dashboard and get your secret key\nInstall the frontend react library into your web app and have a user authenticate a connection. The connection will be created using the connection id that you specify.\nLoading documents#\nUse the PsychicLoader class to load in documents from a connection. Each connection has a connector id (corresponding to the SaaS app that was connected) and a connection id (which you passed in to the frontend library).\n# Uncomment this to install psychicapi if you don't already have it installed\n!poetry run pip -q install psychicapi\n[notice] A new release of pip is available: 23.0.1 -> 23.1.2\n[notice] To update, run: pip install --upgrade pip\nfrom langchain.document_loaders import PsychicLoader\nfrom psychicapi import ConnectorId\n# Create a document loader for google drive. We can also load from other connectors by setting the connector_id to the appropriate value e.g. ConnectorId.notion.value\n# This loader uses our test credentials\ngoogle_drive_loader = PsychicLoader(\n api_key=\"7ddb61c1-8b6a-4d31-a58e-30d1c9ea480e\",\n connector_id=ConnectorId.gdrive.value,\n connection_id=\"google-test\"\n)\ndocuments = google_drive_loader.load()\nConverting the docs to embeddings#\nWe can now convert these documents into embeddings and store them in a vector database like Chroma\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.vectorstores import Chroma", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/psychic.html"}
+{"id": "6e0878138b28-1", "text": "from langchain.vectorstores import Chroma\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.llms import OpenAI\nfrom langchain.chains import RetrievalQAWithSourcesChain\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ntexts = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\ndocsearch = Chroma.from_documents(texts, embeddings)\nchain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type=\"stuff\", retriever=docsearch.as_retriever())\nchain({\"question\": \"what is psychic?\"}, return_only_outputs=True)\nprevious\nObsidian\nnext\nPySpark DataFrame Loader\n Contents\n \nPrerequisites\nLoading documents\nConverting the docs to embeddings\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/psychic.html"}
+{"id": "d7bac2a60dbd-0", "text": ".ipynb\n.pdf\n2Markdown\n2Markdown#\n2markdown service transforms website content into structured markdown files.\n# You will need to get your own API key. See https://2markdown.com/login\napi_key = \"\"\nfrom langchain.document_loaders import ToMarkdownLoader\nloader = ToMarkdownLoader.from_api_key(url=\"https://python.langchain.com/en/latest/\", api_key=api_key)\ndocs = loader.load()\nprint(docs[0].page_content)\n## Contents\n- [Getting Started](#getting-started)\n- [Modules](#modules)\n- [Use Cases](#use-cases)\n- [Reference Docs](#reference-docs)\n- [LangChain Ecosystem](#langchain-ecosystem)\n- [Additional Resources](#additional-resources)\n## Welcome to LangChain [\\#](\\#welcome-to-langchain \"Permalink to this headline\")\n**LangChain** is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model, but will also be:\n1. _Data-aware_: connect a language model to other sources of data\n2. _Agentic_: allow a language model to interact with its environment\nThe LangChain framework is designed around these principles.\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see [here](https://docs.langchain.com/docs/). For the JavaScript documentation, see [here](https://js.langchain.com/docs/).\n## Getting Started [\\#](\\#getting-started \"Permalink to this headline\")\nHow to get started using LangChain to create an Language Model application.\n- [Quickstart Guide](https://python.langchain.com/en/latest/getting_started/getting_started.html)\nConcepts and terminology.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/tomarkdown.html"}
+{"id": "d7bac2a60dbd-1", "text": "Concepts and terminology.\n- [Concepts and terminology](https://python.langchain.com/en/latest/getting_started/concepts.html)\nTutorials created by community experts and presented on YouTube.\n- [Tutorials](https://python.langchain.com/en/latest/getting_started/tutorials.html)\n## Modules [\\#](\\#modules \"Permalink to this headline\")\nThese modules are the core abstractions which we view as the building blocks of any LLM-powered application.\nFor each module LangChain provides standard, extendable interfaces. LanghChain also provides external integrations and even end-to-end implementations for off-the-shelf use.\nThe docs for each module contain quickstart examples, how-to guides, reference docs, and conceptual guides.\nThe modules are (from least to most complex):\n- [Models](https://python.langchain.com/en/latest/modules/models.html): Supported model types and integrations.\n- [Prompts](https://python.langchain.com/en/latest/modules/prompts.html): Prompt management, optimization, and serialization.\n- [Memory](https://python.langchain.com/en/latest/modules/memory.html): Memory refers to state that is persisted between calls of a chain/agent.\n- [Indexes](https://python.langchain.com/en/latest/modules/indexes.html): Language models become much more powerful when combined with application-specific data - this module contains interfaces and integrations for loading, querying and updating external data.\n- [Chains](https://python.langchain.com/en/latest/modules/chains.html): Chains are structured sequences of calls (to an LLM or to a different utility).\n- [Agents](https://python.langchain.com/en/latest/modules/agents.html): An agent is a Chain in which an LLM, given a high-level directive and a set of tools, repeatedly decides an action, executes the action and observes the outcome until the high-level directive is complete.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/tomarkdown.html"}
+{"id": "d7bac2a60dbd-2", "text": "- [Callbacks](https://python.langchain.com/en/latest/modules/callbacks/getting_started.html): Callbacks let you log and stream the intermediate steps of any chain, making it easy to observe, debug, and evaluate the internals of an application.\n## Use Cases [\\#](\\#use-cases \"Permalink to this headline\")\nBest practices and built-in implementations for common LangChain use cases:\n- [Autonomous Agents](https://python.langchain.com/en/latest/use_cases/autonomous_agents.html): Autonomous agents are long-running agents that take many steps in an attempt to accomplish an objective. Examples include AutoGPT and BabyAGI.\n- [Agent Simulations](https://python.langchain.com/en/latest/use_cases/agent_simulations.html): Putting agents in a sandbox and observing how they interact with each other and react to events can be an effective way to evaluate their long-range reasoning and planning abilities.\n- [Personal Assistants](https://python.langchain.com/en/latest/use_cases/personal_assistants.html): One of the primary LangChain use cases. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\n- [Question Answering](https://python.langchain.com/en/latest/use_cases/question_answering.html): Another common LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\n- [Chatbots](https://python.langchain.com/en/latest/use_cases/chatbots.html): Language models love to chat, making this a very natural use of them.\n- [Querying Tabular Data](https://python.langchain.com/en/latest/use_cases/tabular.html): Recommended reading if you want to use language models to query structured data (CSVs, SQL, dataframes, etc).", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/tomarkdown.html"}
+{"id": "d7bac2a60dbd-3", "text": "- [Code Understanding](https://python.langchain.com/en/latest/use_cases/code.html): Recommended reading if you want to use language models to analyze code.\n- [Interacting with APIs](https://python.langchain.com/en/latest/use_cases/apis.html): Enabling language models to interact with APIs is extremely powerful. It gives them access to up-to-date information and allows them to take actions.\n- [Extraction](https://python.langchain.com/en/latest/use_cases/extraction.html): Extract structured information from text.\n- [Summarization](https://python.langchain.com/en/latest/use_cases/summarization.html): Compressing longer documents. A type of Data-Augmented Generation.\n- [Evaluation](https://python.langchain.com/en/latest/use_cases/evaluation.html): Generative models are hard to evaluate with traditional metrics. One promising approach is to use language models themselves to do the evaluation.\n## Reference Docs [\\#](\\#reference-docs \"Permalink to this headline\")\nFull documentation on all methods, classes, installation methods, and integration setups for LangChain.\n- [Reference Documentation](https://python.langchain.com/en/latest/reference.html)\n## LangChain Ecosystem [\\#](\\#langchain-ecosystem \"Permalink to this headline\")\nGuides for how other companies/products can be used with LangChain.\n- [LangChain Ecosystem](https://python.langchain.com/en/latest/ecosystem.html)\n## Additional Resources [\\#](\\#additional-resources \"Permalink to this headline\")\nAdditional resources we think may be useful as you develop your application!\n- [LangChainHub](https://github.com/hwchase17/langchain-hub): The LangChainHub is a place to share and explore other prompts, chains, and agents.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/tomarkdown.html"}
+{"id": "d7bac2a60dbd-4", "text": "- [Gallery](https://python.langchain.com/en/latest/additional_resources/gallery.html): A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\n- [Deployments](https://python.langchain.com/en/latest/additional_resources/deployments.html): A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\n- [Tracing](https://python.langchain.com/en/latest/additional_resources/tracing.html): A guide on using tracing in LangChain to visualize the execution of chains and agents.\n- [Model Laboratory](https://python.langchain.com/en/latest/additional_resources/model_laboratory.html): Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\n- [Discord](https://discord.gg/6adMQxSpJS): Join us on our Discord to discuss all things LangChain!\n- [YouTube](https://python.langchain.com/en/latest/additional_resources/youtube.html): A collection of the LangChain tutorials and videos.\n- [Production Support](https://forms.gle/57d8AmXBYp8PP8tZA): As you move your LangChains into production, we\u2019d love to offer more comprehensive support. Please fill out this form and we\u2019ll set up a dedicated support Slack channel.\nprevious\nStripe\nnext\nTwitter\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/tomarkdown.html"}
+{"id": "79280504db46-0", "text": ".ipynb\n.pdf\nJupyter Notebook\nJupyter Notebook#\nJupyter Notebook (formerly IPython Notebook) is a web-based interactive computational environment for creating notebook documents.\nThis notebook covers how to load data from a Jupyter notebook (.ipynb) into a format suitable by LangChain.\nfrom langchain.document_loaders import NotebookLoader\nloader = NotebookLoader(\"example_data/notebook.ipynb\", include_outputs=True, max_output_length=20, remove_newline=True)\nNotebookLoader.load() loads the .ipynb notebook file into a Document object.\nParameters:\ninclude_outputs (bool): whether to include cell outputs in the resulting document (default is False).\nmax_output_length (int): the maximum number of characters to include from each cell output (default is 10).\nremove_newline (bool): whether to remove newline characters from the cell sources and outputs (default is False).\ntraceback (bool): whether to include full traceback (default is False).\nloader.load()", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/jupyter_notebook.html"}
+{"id": "79280504db46-1", "text": "traceback (bool): whether to include full traceback (default is False).\nloader.load()\n[Document(page_content='\\'markdown\\' cell: \\'[\\'# Notebook\\', \\'\\', \\'This notebook covers how to load data from an .ipynb notebook into a format suitable by LangChain.\\']\\'\\n\\n \\'code\\' cell: \\'[\\'from langchain.document_loaders import NotebookLoader\\']\\'\\n\\n \\'code\\' cell: \\'[\\'loader = NotebookLoader(\"example_data/notebook.ipynb\")\\']\\'\\n\\n \\'markdown\\' cell: \\'[\\'`NotebookLoader.load()` loads the `.ipynb` notebook file into a `Document` object.\\', \\'\\', \\'**Parameters**:\\', \\'\\', \\'* `include_outputs` (bool): whether to include cell outputs in the resulting document (default is False).\\', \\'* `max_output_length` (int): the maximum number of characters to include from each cell output (default is 10).\\', \\'* `remove_newline` (bool): whether to remove newline characters from the cell sources and outputs (default is False).\\', \\'* `traceback` (bool): whether to include full traceback (default is False).\\']\\'\\n\\n \\'code\\' cell: \\'[\\'loader.load(include_outputs=True, max_output_length=20, remove_newline=True)\\']\\'\\n\\n', metadata={'source': 'example_data/notebook.ipynb'})]\nprevious\nImages\nnext\nJSON\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/jupyter_notebook.html"}
+{"id": "2ba962768247-0", "text": ".ipynb\n.pdf\nOpenAIWhisperParser\nOpenAIWhisperParser#\nThis notebook goes over how to load data from an audio file, such as an mp3.\nWe use the OpenAIWhisperParser, which will use the OpenAI Whisper API to transcribe audio to text.\nNote: You will need to have an OPENAI_API_KEY supplied.\nfrom langchain.document_loaders.generic import GenericLoader\nfrom langchain.document_loaders.parsers import OpenAIWhisperParser\n# Directory contains audio for the first 20 minutes of one Andrej Karpathy video \n# \"The spelled-out intro to neural networks and backpropagation: building micrograd\"\n# https://www.youtube.com/watch?v=VMj-3S1tku0\naudio_file_path = \"example_data/\"\nloader = GenericLoader.from_filesystem(audio_file_path, glob=\"*.mp3\", parser=OpenAIWhisperParser())\ndocs = loader.load()\ndocs", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html"}
+{"id": "2ba962768247-1", "text": "[Document(page_content=\"Hello, my name is Andrej and I've been training deep neural networks for a bit more than a decade. And in this lecture I'd like to show you what neural network training looks like under the hood. So in particular we are going to start with a blank Jupyter notebook and by the end of this lecture we will define and train a neural net and you'll get to see everything that goes on under the hood and exactly sort of how that works on an intuitive level. Now specifically what I would like to do is I would like to take you through building of micrograd. Now micrograd is this library that I released on GitHub about two years ago but at the time I only uploaded the source code and you'd have to go in by yourself and really figure out how it works. So in this lecture I will take you through it step by step and kind of comment on all the pieces of it. So what is micrograd and why is it interesting? Thank you. Micrograd is basically an autograd engine. Autograd is short for automatic gradient and really what it does is it implements back propagation. Now back propagation is this algorithm that allows you to efficiently evaluate the gradient of some kind of a loss function with respect to the weights of a neural network and what that allows us to do then is we can iteratively tune the weights of that neural network to minimize the loss function and therefore improve the accuracy of the network. So back propagation would be at the mathematical core of any modern deep neural network library like say PyTorch or JAX. So the functionality of micrograd is I think best illustrated by an example. So if we just scroll down here you'll see that micrograd basically allows you to build out mathematical expressions and here what we are doing is we have an expression that we're building out where you have two inputs a and b and you'll see that a and b are negative four and two but we are wrapping those values into", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html"}
+{"id": "2ba962768247-2", "text": "and you'll see that a and b are negative four and two but we are wrapping those values into this value object that we are going to build out as part of micrograd. So this value object will wrap the numbers themselves and then we are going to build out a mathematical expression here where a and b are transformed into c d and eventually e f and g and I'm showing some of the functionality of micrograd and the operations that it supports. So you can add two value objects, you can multiply them, you can raise them to a constant power, you can offset by one, negate, squash at zero, square, divide by constant, divide by it, etc. And so we're building out an expression graph with these two inputs a and b and we're creating an output value of g and micrograd will in the background build out this entire mathematical expression. So it will for example know that c is also a value, c was a result of an addition operation and the child nodes of c are a and b because the and it will maintain pointers to a and b value objects. So we'll basically know exactly how all of this is laid out and then not only can we do what we call the forward pass where we actually look at the value of g of course, that's pretty straightforward, we will access that using the dot data attribute and so the output of the forward pass, the value of g, is 24.7 it turns out. But the big deal is that we can also take this g value object and we can call dot backward and this will basically initialize backpropagation at the node g. And what backpropagation is going to do is it's going to start at g and it's going to go backwards through that expression graph and it's going to recursively apply the chain rule from calculus. And what that allows us to do then is we're going to evaluate basically the derivative of g with respect to all the internal nodes like e, d,", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html"}
+{"id": "2ba962768247-3", "text": "going to evaluate basically the derivative of g with respect to all the internal nodes like e, d, and c but also with respect to the inputs a and b. And then we can actually query this derivative of g with respect to a, for example that's a.grad, in this case it happens to be 138, and the derivative of g with respect to b which also happens to be here 645. And this derivative we'll see soon is very important information because it's telling us how a and b are affecting g through this mathematical expression. So in particular a.grad is 138, so if we slightly nudge a and make it slightly larger, 138 is telling us that g will grow and the slope of that growth is going to be 138 and the slope of growth of b is going to be 645. So that's going to tell us about how g will respond if a and b get tweaked a tiny amount in a positive direction. Now you might be confused about what this expression is that we built out here and this expression by the way is completely meaningless. I just made it up, I'm just flexing about the kinds of operations that are supported by micrograd. What we actually really care about are neural networks but it turns out that neural networks are just mathematical expressions just like this one but actually slightly a bit less crazy even. Neural networks are just a mathematical expression, they take the input data as an input and they take the weights of a neural network as an input and it's a mathematical expression and the output are your predictions of your neural net or the loss function, we'll see this in a bit. But basically neural networks just happen to be a certain class of mathematical expressions but back propagation is actually significantly more general. It doesn't actually care about neural networks at all, it only cares about arbitrary mathematical expressions and then we happen to use that machinery for training of neural networks. Now one more note I would like to make at this stage is", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html"}
+{"id": "2ba962768247-4", "text": "machinery for training of neural networks. Now one more note I would like to make at this stage is that as you see here micrograd is a scalar valued autograd engine so it's working on the you know level of individual scalars like negative 4 and 2 and we're taking neural nets and we're breaking them down all the way to these atoms of individual scalars and all the little pluses and times and it's just excessive and so obviously you would never be doing any of this in production. It's really just done for pedagogical reasons because it allows us to not have to deal with these n-dimensional tensors that you would use in modern deep neural network library. So this is really done so that you understand and refactor out back propagation and chain rule and understanding of neural training and then if you actually want to train bigger networks you have to be using these tensors but none of the math changes, this is done purely for efficiency. We are basically taking all the scalars all the scalar values we're packaging them up into tensors which are just arrays of these scalars and then because we have these large arrays we're making operations on those large arrays that allows us to take advantage of the parallelism in a computer and all those operations can be done in parallel and then the whole thing runs faster but really none of the math changes and they're done purely for efficiency so I don't think that it's pedagogically useful to be dealing with tensors from scratch and I think and that's why I fundamentally wrote micrograd because you can understand how things work at the fundamental level and then you can speed it up later. Okay so here's the fun part. My claim is that micrograd is what you need to train neural networks and everything else is just efficiency so you'd think that micrograd would be a very complex piece of code and that turns out to not be the case. So if we just go to micrograd and you'll see that there's only two files here", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html"}
+{"id": "2ba962768247-5", "text": "So if we just go to micrograd and you'll see that there's only two files here in micrograd. This is the actual engine, it doesn't know anything about neural nets and this is the entire neural nets library on top of micrograd. So engine and nn.py. So the actual back propagation autograd engine that gives you the power of neural networks is literally 100 lines of code of like very simple python which we'll understand by the end of this lecture and then nn.py, this neural network library built on top of the autograd engine is like a joke. It's like we have to define what is a neuron and then we have to define what is a layer of neurons and then we define what is a multilayer perceptron which is just a sequence of layers of neurons and so it's just a total joke. So basically there's a lot of power that comes from only 150 lines of code and that's all you need to understand to understand neural network training and everything else is just efficiency and of course there's a lot to efficiency but fundamentally that's all that's happening. Okay so now let's dive right in and implement micrograd step by step. The first thing I'd like to do is I'd like to make sure that you have a very good understanding intuitively of what a derivative is and exactly what information it gives you. So let's start with some basic imports that I copy-paste in every jupyter notebook always and let's define a function, a scalar valued function f of x as follows. So I just made this up randomly. I just wanted a scalar valued function that takes a single scalar x and returns a single scalar y and we can call this function of course so we can pass in say 3.0 and get 20 back. Now we can also plot this function to get a sense of its shape. You can tell from the mathematical expression that this is probably a parabola, it's a quadratic and", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html"}
+{"id": "2ba962768247-6", "text": "can tell from the mathematical expression that this is probably a parabola, it's a quadratic and so if we just create a set of scalar values that we can feed in using for example a range from negative 5 to 5 in steps of 0.25. So this is so x is just from negative 5 to 5 not including 5 in steps of 0.25 and we can actually call this function on this numpy array as well so we get a set of y's if we call f on x's and these y's are basically also applying the function on every one of these elements independently and we can plot this using matplotlib. So plt.plot x's and y's and we get a nice parabola. So previously here we fed in 3.0 somewhere here and we received 20 back which is here the y-coordinate. So now I'd like to think through what is the derivative of this function at any single input point x. So what is the derivative at different points x of this function? Now if you remember back to your calculus class you've probably derived derivatives so we take this mathematical expression 3x squared minus 4x plus 5 and you would write out on a piece of paper and you would apply the product rule and all the other rules and derive the mathematical expression of the great derivative of the original function and then you could plug in different texts and see what the derivative is. We're not going to actually do that because no one in neural networks actually writes out the expression for the neural net. It would be a massive expression, it would be thousands, tens of thousands of terms. No one actually derives the derivative of course and so we're not going to take this kind of like symbolic approach. Instead what I'd like to do is I'd like to look at the definition of derivative and just make sure that we really understand what the derivative is measuring, what it's telling you about the function. And so if", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html"}
+{"id": "2ba962768247-7", "text": "really understand what the derivative is measuring, what it's telling you about the function. And so if we just look up derivative we see that okay so this is not a very good definition of derivative. This is a definition of what it means to be differentiable but if you remember from your calculus it is the limit as h goes to zero of f of x plus h minus f of x over h. So basically what it's saying is if you slightly bump up your at some point x that you're interested in or a and if you slightly bump up you know you slightly increase it by small number h how does the function respond with what sensitivity does it respond where is the slope at that point does the function go up or does it go down and by how much and that's the slope of that function the the slope of that response at that point and so we can basically evaluate the derivative here numerically by taking a very small h of course the definition would ask us to take h to zero we're just going to pick a very small h 0.001 and let's say we're interested in 0.3.0 so we can look at f of x of course as 20 and now f of x plus h so if we slightly nudge x in a positive direction how is the function going to respond and just looking at this do you expand do you expect f of x plus h to be slightly greater than 20 or do you expect it to be slightly lower than 20 and since this 3 is here and this is 20 if we slightly go positively the function will respond positively so you'd expect this to be slightly greater than 20 and now by how much is telling you the sort of the the strength of that slope right the the size of the slope so f of x plus h minus f of x this is how much the function responded in a positive direction and we have to normalize by the run so we have the rise over run to get the slope so this", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html"}
+{"id": "2ba962768247-8", "text": "we have to normalize by the run so we have the rise over run to get the slope so this of course is just a numerical approximation of the slope because we have to make h very very small to converge to the exact amount now if i'm doing too many zeros at some point i'm going to i'm going to get an incorrect answer because we're using floating point arithmetic and the representations of all these numbers in computer memory is finite and at some point we get into trouble so we can converge towards the right answer with this approach but basically at 3 the slope is 14 and you can see that by taking 3x squared minus 4x plus 5 and differentiating it in our head so 3x squared would be 6x minus 4 and then we plug in x equals 3 so that's 18 minus 4 is 14 so this is correct so that's at 3 now how about the slope at say negative 3 would you expect what would you expect for the slope now telling the exact value is really hard but what is the sign of that slope so at negative 3 if we slightly go in the positive direction at x the function would actually go down and so that tells you that the slope would be negative so we'll get a slight number below below 20 and so if we take the slope we expect something negative negative 22 okay and at some point here of course the slope would be zero now for this specific function i looked it up previously and it's at point uh 2 over 3 so at roughly 2 over 3 that's somewhere here this this derivative would be zero so basically at that precise point yeah at that precise point if we nudge in a positive direction the function doesn't respond this stays the same almost and so that's why the slope is zero okay now let's look at a bit more complex case so we're going to start you know complexifying a bit so now we have a function here with output variable", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html"}
+{"id": "2ba962768247-9", "text": "going to start you know complexifying a bit so now we have a function here with output variable d that is a function of three scalar inputs a b and c so a b and c are some specific values three inputs into our expression graph and a single output d and so if we just print d we get four and now what i like to do is i'd like to again look at the derivatives of d with respect to a b and c and uh think through uh again just the intuition of what this derivative is telling us so in order to evaluate this derivative we're going to get a bit hacky here we're going to again have a very small value of h and then we're going to fix the inputs at some values that we're interested in so these are the this is the point a b c at which we're going to be evaluating the the derivative of d with respect to all a b and c at that point so there are the inputs and now we have d1 is that expression and then we're going to for example look at the derivative of d with respect to a so we'll take a and we'll bump it by h and then we'll get d2 to be the exact same function and now we're going to print um you know f1 d1 is d1 d2 is d2 and print slope so the derivative or slope here will be um of course d2 minus d1 divide h so d2 minus d1 is how much the function increased uh when we bumped the uh the specific input that we're interested in by a tiny amount and this is the normalized by this is the normalized by h to get the slope so um yeah so this so i just run this we're going to print d1 which we know is four now d2 will be bumped a will be bumped by h so let's just think through a little bit uh what d2 will be uh printed out here in particular d1 will be four will d2 be a number slightly greater than", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html"}
+{"id": "2ba962768247-10", "text": "uh printed out here in particular d1 will be four will d2 be a number slightly greater than four or slightly lower than four and that's going to tell us the sign of the derivative so we're bumping a by h b is minus three c is 10 so you can just intuitively think through this derivative and what it's doing a will be slightly more positive and but b is a negative number so if a is slightly more positive because b is negative three we're actually going to be adding less to d so you'd actually expect that the value of the function will go down so let's just see this yeah and so we went from four to 3.9996 and that tells you that the slope will be negative and then um will be a negative number because we went down and then the exact number of slope will be exact amount of slope is negative three and you can also convince yourself that negative three is the right answer um mathematically and analytically because if you have a times b plus c and you are you know you have calculus then uh differentiating a times b plus c with respect to a gives you just b and indeed the value of b is negative three which is the derivative that we have so you can tell that that's correct so now if we do this with b so if we bump b by a little bit in a positive direction we'd get different slopes so what is the influence of b on the output d so if we bump b by a tiny amount in a positive direction then because a is positive we'll be adding more to d right so um and now what is the what is the sensitivity what is the slope of that addition and it might not surprise you that this should be two and why is it two because d of d by db differentiating with respect to b would be would give us a and the value of a is two so that's also working well and then if c gets bumped a tiny amount in h by h then of course a times", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html"}
+{"id": "2ba962768247-11", "text": "working well and then if c gets bumped a tiny amount in h by h then of course a times b is unaffected and now c becomes slightly bit higher what does that do to the function it makes it slightly bit higher because we're simply adding c and it makes it slightly bit higher by the exact same amount that we added to c and so that tells you that the slope is one that will be the the rate at which d will increase as we scale c okay so we now have some intuitive sense of what this derivative is telling you about the function and we'd like to move to neural networks now as i mentioned neural networks will be pretty massive expressions mathematical expressions so we need some data structures that maintain these expressions and that's what we're going to start to build out now so we're going to build out this value object that i showed you in the readme page of micrograd so let me copy paste a skeleton of the first very simple value object so class value takes a single scalar value that it wraps and keeps track of and that's it so we can for example do value of 2.0 and then we can get we can look at its content and python will internally use the wrapper function to return this string like that so this is a value object that we're going to call value object\", metadata={'source': 'example_data/Lecture_1_0.mp3'})]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html"}
+{"id": "2ba962768247-12", "text": "previous\nAirtable\nnext\nCoNLL-U\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html"}
+{"id": "138364f3fabc-0", "text": ".ipynb\n.pdf\nUnstructured File\n Contents \nRetain Elements\nDefine a Partitioning Strategy\nPDF Example\nUnstructured API\nUnstructured File#\nThis notebook covers how to use Unstructured package to load files of many types. Unstructured currently supports loading of text files, powerpoints, html, pdfs, images, and more.\n# # Install package\n!pip install \"unstructured[local-inference]\"\n!pip install layoutparser[layoutmodels,tesseract]\n# # Install other dependencies\n# # https://github.com/Unstructured-IO/unstructured/blob/main/docs/source/installing.rst\n# !brew install libmagic\n# !brew install poppler\n# !brew install tesseract\n# # If parsing xml / html documents:\n# !brew install libxml2\n# !brew install libxslt\n# import nltk\n# nltk.download('punkt')\nfrom langchain.document_loaders import UnstructuredFileLoader\nloader = UnstructuredFileLoader(\"./example_data/state_of_the_union.txt\")\ndocs = loader.load()\ndocs[0].page_content[:400]\n'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.\\n\\nLast year COVID-19 kept us apart. This year we are finally together again.\\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.\\n\\nWith a duty to one another to the American people to the Constit'\nRetain Elements#\nUnder the hood, Unstructured creates different \u201celements\u201d for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode=\"elements\".\nloader = UnstructuredFileLoader(\"./example_data/state_of_the_union.txt\", mode=\"elements\")\ndocs = loader.load()\ndocs[:5]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html"}
+{"id": "138364f3fabc-1", "text": "docs = loader.load()\ndocs[:5]\n[Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),\n Document(page_content='Last year COVID-19 kept us apart. This year we are finally together again.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),\n Document(page_content='Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),\n Document(page_content='With a duty to one another to the American people to the Constitution.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),\n Document(page_content='And with an unwavering resolve that freedom will always triumph over tyranny.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)]\nDefine a Partitioning Strategy#\nUnstructured document loader allow users to pass in a strategy parameter that lets unstructured know how to partition the document. Currently supported strategies are \"hi_res\" (the default) and \"fast\". Hi res partitioning strategies are more accurate, but take longer to process. Fast strategies partition the document more quickly, but trade-off accuracy. Not all document types have separate hi res and fast partitioning strategies. For those document types, the strategy kwarg is ignored. In some cases, the high res strategy will fallback to fast if there is a dependency missing (i.e. a model for document partitioning). You can see how to apply a strategy to an UnstructuredFileLoader below.\nfrom langchain.document_loaders import UnstructuredFileLoader", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html"}
+{"id": "138364f3fabc-2", "text": "from langchain.document_loaders import UnstructuredFileLoader\nloader = UnstructuredFileLoader(\"layout-parser-paper-fast.pdf\", strategy=\"fast\", mode=\"elements\")\ndocs = loader.load()\ndocs[:5]\n[Document(page_content='1', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0),\n Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0),\n Document(page_content='0', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0),\n Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0),\n Document(page_content='n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'Title'}, lookup_index=0)]\nPDF Example#\nProcessing PDF documents works exactly the same way. Unstructured detects the file type and extracts the same types of elements.\n!wget https://raw.githubusercontent.com/Unstructured-IO/unstructured/main/example-docs/layout-parser-paper.pdf -P \"../../\"\nloader = UnstructuredFileLoader(\"./example_data/layout-parser-paper.pdf\", mode=\"elements\")\ndocs = loader.load()\ndocs[:5]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html"}
+{"id": "138364f3fabc-3", "text": "docs = loader.load()\ndocs[:5]\n[Document(page_content='LayoutParser : A Uni\ufb01ed Toolkit for Deep Learning Based Document Image Analysis', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0),\n Document(page_content='Zejiang Shen 1 ( (ea)\\n ), Ruochen Zhang 2 , Melissa Dell 3 , Benjamin Charles Germain Lee 4 , Jacob Carlson 3 , and Weining Li 5', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0),\n Document(page_content='Allen Institute for AI shannons@allenai.org', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0),\n Document(page_content='Brown University ruochen zhang@brown.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0),\n Document(page_content='Harvard University { melissadell,jacob carlson } @fas.harvard.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0)]\nUnstructured API#\nIf you want to get up and running with less set up, you can simply run pip install unstructured and use UnstructuredAPIFileLoader or UnstructuredAPIFileIOLoader. That will process your document using the hosted Unstructured API. Note that currently (as of 11 May 2023) the Unstructured API is open, but it will soon require an API. The Unstructured documentation page will have instructions on how to generate an API key once they\u2019re available. Check out the instructions here if you\u2019d like to self-host the Unstructured API or run it locally.\nfrom langchain.document_loaders import UnstructuredAPIFileLoader\nfilenames = [\"example_data/fake.docx\", \"example_data/fake-email.eml\"]\nloader = UnstructuredAPIFileLoader(", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html"}
+{"id": "138364f3fabc-4", "text": "loader = UnstructuredAPIFileLoader(\n file_path=filenames[0],\n api_key=\"FAKE_API_KEY\",\n)\ndocs = loader.load()\ndocs[0]\nDocument(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.docx'})\nYou can also batch multiple files through the Unstructured API in a single API using UnstructuredAPIFileLoader.\nloader = UnstructuredAPIFileLoader(\n file_path=filenames,\n api_key=\"FAKE_API_KEY\",\n)\ndocs = loader.load()\ndocs[0]\nDocument(page_content='Lorem ipsum dolor sit amet.\\n\\nThis is a test email to use for unit tests.\\n\\nImportant points:\\n\\nRoses are red\\n\\nViolets are blue', metadata={'source': ['example_data/fake.docx', 'example_data/fake-email.eml']})\nprevious\nTOML\nnext\nURL\n Contents\n \nRetain Elements\nDefine a Partitioning Strategy\nPDF Example\nUnstructured API\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html"}
+{"id": "da658218a0ce-0", "text": ".ipynb\n.pdf\nWhatsApp Chat\nWhatsApp Chat#\nWhatsApp (also called WhatsApp Messenger) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.\nThis notebook covers how to load data from the WhatsApp Chats into a format that can be ingested into LangChain.\nfrom langchain.document_loaders import WhatsAppChatLoader\nloader = WhatsAppChatLoader(\"example_data/whatsapp_chat.txt\")\nloader.load()\nprevious\nWeather\nnext\nArxiv\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/whatsapp_chat.html"}
+{"id": "3de0ead61219-0", "text": ".ipynb\n.pdf\nTelegram\nTelegram#\nTelegram Messenger is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features.\nThis notebook covers how to load data from Telegram into a format that can be ingested into LangChain.\nfrom langchain.document_loaders import TelegramChatFileLoader, TelegramChatApiLoader\nloader = TelegramChatFileLoader(\"example_data/telegram.json\")\nloader.load()\n[Document(page_content=\"Henry on 2020-01-01T00:00:02: It's 2020...\\n\\nHenry on 2020-01-01T00:00:04: Fireworks!\\n\\nGrace \u00f0\u0178\u00a7\u00a4 \u00f0\u0178\\x8d\u2019 on 2020-01-01T00:00:05: You're a minute late!\\n\\n\", metadata={'source': 'example_data/telegram.json'})]\nTelegramChatApiLoader loads data directly from any specified chat from Telegram. In order to export the data, you will need to authenticate your Telegram account.\nYou can get the API_HASH and API_ID from https://my.telegram.org/auth?to=apps\nchat_entity \u2013 recommended to be the entity of a channel.\nloader = TelegramChatApiLoader(\n chat_entity=\"\", # recommended to use Entity here\n api_hash=\"\", \n api_id=\"\", \n user_name =\"\", # needed only for caching the session.\n)\nloader.load()\nprevious\nSubtitle\nnext\nTOML\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/telegram.html"}
+{"id": "4c53e9105228-0", "text": ".ipynb\n.pdf\nArxiv\n Contents \nInstallation\nExamples\nArxiv#\narXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.\nThis notebook shows how to load scientific articles from Arxiv.org into a document format that we can use downstream.\nInstallation#\nFirst, you need to install arxiv python package.\n#!pip install arxiv\nSecond, you need to install PyMuPDF python package which transforms PDF files downloaded from the arxiv.org site into the text format.\n#!pip install pymupdf\nExamples#\nArxivLoader has these arguments:\nquery: free text which used to find documents in the Arxiv\noptional load_max_docs: default=100. Use it to limit number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments.\noptional load_all_available_meta: default=False. By default only the most important fields downloaded: Published (date when document was published/last updated), Title, Authors, Summary. If True, other fields also downloaded.\nfrom langchain.document_loaders import ArxivLoader\ndocs = ArxivLoader(query=\"1605.08386\", load_max_docs=2).load()\nlen(docs)\ndocs[0].metadata # meta-information of the Document\n{'Published': '2016-05-26',\n 'Title': 'Heat-bath random walks with Markov bases',\n 'Authors': 'Caprice Stanley, Tobias Windisch',", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/arxiv.html"}
+{"id": "4c53e9105228-1", "text": "'Authors': 'Caprice Stanley, Tobias Windisch',\n 'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\\nallowed moves of arbitrary length. We show that the diameter of these graphs on\\nfibers of a fixed integer matrix can be bounded from above by a constant. We\\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\\nalso state explicit conditions on the set of moves so that the heat-bath random\\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\\ndimension.'}\ndocs[0].page_content[:400] # all pages of the Document content\n'arXiv:1605.08386v1 [math.CO] 26 May 2016\\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\\nCAPRICE STANLEY AND TOBIAS WINDISCH\\nAbstract. Graphs on lattice points are studied whose edges come from a \ufb01nite set of\\nallowed moves of arbitrary length. We show that the diameter of these graphs on \ufb01bers of a\\n\ufb01xed integer matrix can be bounded from above by a constant. We then study the mixing\\nbehaviour of heat-b'\nprevious\nWhatsApp Chat\nnext\nAZLyrics\n Contents\n \nInstallation\nExamples\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/arxiv.html"}
+{"id": "882ec2509c30-0", "text": ".ipynb\n.pdf\nTrello\n Contents \nFeatures\nTrello#\nTrello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a \u201cboard\u201d where users can create lists and cards to represent their tasks and activities.\nThe TrelloLoader allows you to load cards from a Trello board and is implemented on top of py-trello\nThis currently supports api_key/token only.\nCredentials generation: https://trello.com/power-ups/admin/\nClick in the manual token generation link to get the token.\nTo specify the API key and token you can either set the environment variables TRELLO_API_KEY and TRELLO_TOKEN or you can pass api_key and token directly into the from_credentials convenience constructor method.\nThis loader allows you to provide the board name to pull in the corresponding cards into Document objects.\nNotice that the board \u201cname\u201d is also called \u201ctitle\u201d in oficial documentation:\nhttps://support.atlassian.com/trello/docs/changing-a-boards-title-and-description/\nYou can also specify several load parameters to include / remove different fields both from the document page_content properties and metadata.\nFeatures#\nLoad cards from a Trello board.\nFilter cards based on their status (open or closed).\nInclude card names, comments, and checklists in the loaded documents.\nCustomize the additional metadata fields to include in the document.\nBy default all card fields are included for the full text page_content and metadata accordinly.\n#!pip install py-trello beautifulsoup4\n# If you have already set the API key and token using environment variables,\n# you can skip this cell and comment out the `api_key` and `token` named arguments\n# in the initialization steps below.\nfrom getpass import getpass\nAPI_KEY = getpass()\nTOKEN = getpass()", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/trello.html"}
+{"id": "882ec2509c30-1", "text": "from getpass import getpass\nAPI_KEY = getpass()\nTOKEN = getpass()\n\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\n\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\nfrom langchain.document_loaders import TrelloLoader\n# Get the open cards from \"Awesome Board\"\nloader = TrelloLoader.from_credentials(\n \"Awesome Board\",\n api_key=API_KEY,\n token=TOKEN,\n card_filter=\"open\",\n )\ndocuments = loader.load()\nprint(documents[0].page_content)\nprint(documents[0].metadata)\nReview Tech partner pages\nComments:\n{'title': 'Review Tech partner pages', 'id': '6475357890dc8d17f73f2dcc', 'url': 'https://trello.com/c/b0OTZwkZ/1-review-tech-partner-pages', 'labels': ['Demand Marketing'], 'list': 'Done', 'closed': False, 'due_date': ''}\n# Get all the cards from \"Awesome Board\" but only include the\n# card list(column) as extra metadata.\nloader = TrelloLoader.from_credentials(\n \"Awesome Board\",\n api_key=API_KEY,\n token=TOKEN,\n extra_metadata=(\"list\"),\n)\ndocuments = loader.load()\nprint(documents[0].page_content)\nprint(documents[0].metadata)\nReview Tech partner pages\nComments:\n{'title': 'Review Tech partner pages', 'id': '6475357890dc8d17f73f2dcc', 'url': 'https://trello.com/c/b0OTZwkZ/1-review-tech-partner-pages', 'list': 'Done'}\n# Get the cards from \"Another Board\" and exclude the card name,\n# checklist and comments from the Document page_content text.\nloader = TrelloLoader.from_credentials(\n \"test\",", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/trello.html"}
+{"id": "882ec2509c30-2", "text": "loader = TrelloLoader.from_credentials(\n \"test\",\n api_key=API_KEY,\n token=TOKEN,\n include_card_name= False,\n include_checklist= False,\n include_comments= False,\n)\ndocuments = loader.load()\nprint(\"Document: \" + documents[0].page_content)\nprint(documents[0].metadata)\n Contents\n \nFeatures\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/trello.html"}
+{"id": "f4c8407a149b-0", "text": ".ipynb\n.pdf\nMicrosoft Excel\nMicrosoft Excel#\nThe UnstructuredExcelLoader is used to load Microsoft Excel files. The loader works with both .xlsx and .xls files. The page content will be the raw text of the Excel file. If you use the loader in \"elements\" mode, an HTML representation of the Excel file will be available in the document metadata under the text_as_html key.\nfrom langchain.document_loaders import UnstructuredExcelLoader\nloader = UnstructuredExcelLoader(\n \"example_data/stanley-cups.xlsx\",\n mode=\"elements\"\n)\ndocs = loader.load()\ndocs[0]", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/excel.html"}
+{"id": "f4c8407a149b-1", "text": "mode=\"elements\"\n)\ndocs = loader.load()\ndocs[0]\nDocument(page_content='\\n \\n \\n Team\\n Location\\n Stanley Cups\\n \\n \\n Blues\\n STL\\n 1\\n \\n \\n Flyers\\n PHI\\n 2\\n \\n \\n Maple Leafs\\n TOR\\n 13\\n \\n \\n', metadata={'source': 'example_data/stanley-cups.xlsx', 'filename': 'stanley-cups.xlsx', 'file_directory': 'example_data', 'filetype': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'page_number': 1, 'page_name': 'Stanley Cups', 'text_as_html': '\\n \\n \\n Team | \\n Location | \\n Stanley Cups | \\n
\\n \\n Blues | \\n STL | \\n 1 | \\n
\\n \\n Flyers | \\n PHI | \\n 2 | \\n
\\n \\n Maple Leafs | \\n TOR | \\n 13 | \\n
\\n \\n
', 'category': 'Table'})\nprevious\nEverNote\nnext\nFacebook Chat\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/excel.html"}
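+{"id": "f4c8407a149b-ex0", "text": "A minimal editor-added sketch (not part of the scraped page): loading the same workbook in the default mode, which returns the sheet as a single plain-text Document without the text_as_html metadata.\nfrom langchain.document_loaders import UnstructuredExcelLoader\n# No mode=\"elements\": page_content is just the raw text of the sheet\nloader = UnstructuredExcelLoader(\"example_data/stanley-cups.xlsx\")\ndocs = loader.load()\ndocs[0].page_content", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/excel.html"}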
+{"id": "b42eea8c12d6-0", "text": ".ipynb\n.pdf\nMicrosoft PowerPoint\n Contents \nRetain Elements\nMicrosoft PowerPoint#\nMicrosoft PowerPoint is a presentation program by Microsoft.\nThis covers how to load Microsoft PowerPoint documents into a document format that we can use downstream.\nfrom langchain.document_loaders import UnstructuredPowerPointLoader\nloader = UnstructuredPowerPointLoader(\"example_data/fake-power-point.pptx\")\ndata = loader.load()\ndata\n[Document(page_content='Adding a Bullet Slide\\n\\nFind the bullet slide layout\\n\\nUse _TextFrame.text for first bullet\\n\\nUse _TextFrame.add_paragraph() for subsequent bullets\\n\\nHere is a lot of text!\\n\\nHere is some text in a text box!', metadata={'source': 'example_data/fake-power-point.pptx'})]\nRetain Elements#\nUnder the hood, Unstructured creates different \u201celements\u201d for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode=\"elements\".\nloader = UnstructuredPowerPointLoader(\"example_data/fake-power-point.pptx\", mode=\"elements\")\ndata = loader.load()\ndata[0]\nDocument(page_content='Adding a Bullet Slide', lookup_str='', metadata={'source': 'example_data/fake-power-point.pptx'}, lookup_index=0)\nprevious\nMarkdown\nnext\nMicrosoft Word\n Contents\n \nRetain Elements\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/microsoft_powerpoint.html"}
+{"id": "fc717c0565f5-0", "text": ".ipynb\n.pdf\nGetting Started\nGetting Started#\nThe default recommended text splitter is the RecursiveCharacterTextSplitter. This text splitter takes a list of characters. It tries to create chunks based on splitting on the first character, but if any chunks are too large it then moves onto the next character, and so forth. By default the characters it tries to split on are [\"\\n\\n\", \"\\n\", \" \", \"\"]\nIn addition to controlling which characters you can split on, you can also control a few other things:\nlength_function: how the length of chunks is calculated. Defaults to just counting number of characters, but it\u2019s pretty common to pass a token counter here.\nchunk_size: the maximum size of your chunks (as measured by the length function).\nchunk_overlap: the maximum overlap between chunks. It can be nice to have some overlap to maintain some continuity between chunks (eg do a sliding window).\nadd_start_index : wether to include the starting position of each chunk within the original document in the metadata.\n# This is a long document we can split up.\nwith open('../../state_of_the_union.txt') as f:\n state_of_the_union = f.read()\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\ntext_splitter = RecursiveCharacterTextSplitter(\n # Set a really small chunk size, just to show.\n chunk_size = 100,\n chunk_overlap = 20,\n length_function = len,\n add_start_index = True,\n)\ntexts = text_splitter.create_documents([state_of_the_union])\nprint(texts[0])\nprint(texts[1])\npage_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' metadata={'start_index': 0}", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/getting_started.html"}
+{"id": "fc717c0565f5-1", "text": "page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' metadata={'start_index': 82}\nprevious\nText Splitters\nnext\nCharacter\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/getting_started.html"}
+{"id": "ddead15725ed-0", "text": ".ipynb\n.pdf\nHugging Face tokenizer\nHugging Face tokenizer#\nHugging Face has many tokenizers.\nWe use Hugging Face tokenizer, the GPT2TokenizerFast to count the text length in tokens.\nHow the text is split: by character passed in\nHow the chunk size is measured: by number of tokens calculated by the Hugging Face tokenizer\nfrom transformers import GPT2TokenizerFast\ntokenizer = GPT2TokenizerFast.from_pretrained(\"gpt2\")\n# This is a long document we can split up.\nwith open('../../../state_of_the_union.txt') as f:\n state_of_the_union = f.read()\nfrom langchain.text_splitter import CharacterTextSplitter\ntext_splitter = CharacterTextSplitter.from_huggingface_tokenizer(tokenizer, chunk_size=100, chunk_overlap=0)\ntexts = text_splitter.split_text(state_of_the_union)\nprint(texts[0])\nMadam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \nLast year COVID-19 kept us apart. This year we are finally together again. \nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \nWith a duty to one another to the American people to the Constitution.\nprevious\nTiktoken\nnext\ntiktoken (OpenAI) tokenizer\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/huggingface_length_function.html"}
+{"id": "bb0ed53381be-0", "text": ".ipynb\n.pdf\nTiktoken\nTiktoken#\ntiktoken is a fast BPE tokeniser created by OpenAI.\nHow the text is split: by tiktoken tokens\nHow the chunk size is measured: by tiktoken tokens\n#!pip install tiktoken\n# This is a long document we can split up.\nwith open('../../../state_of_the_union.txt') as f:\n state_of_the_union = f.read()\nfrom langchain.text_splitter import TokenTextSplitter\ntext_splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=0)\ntexts = text_splitter.split_text(state_of_the_union)\nprint(texts[0])\nMadam Speaker, Madam Vice President, our\nprevious\nspaCy\nnext\nHugging Face tokenizer\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/tiktoken_splitter.html"}
+{"id": "1f296b407e44-0", "text": ".ipynb\n.pdf\nNLTK\nNLTK#\nThe Natural Language Toolkit, or more commonly NLTK, is a suite of libraries and programs for symbolic and statistical natural language processing (NLP) for English written in the Python programming language.\nRather than just splitting on \u201c\\n\\n\u201d, we can use NLTK to split based on NLTK tokenizers.\nHow the text is split: by NLTK tokenizer.\nHow the chunk size is measured:by number of characters\n#pip install nltk\n# This is a long document we can split up.\nwith open('../../../state_of_the_union.txt') as f:\n state_of_the_union = f.read()\nfrom langchain.text_splitter import NLTKTextSplitter\ntext_splitter = NLTKTextSplitter(chunk_size=1000)\ntexts = text_splitter.split_text(state_of_the_union)\nprint(texts[0])\nMadam Speaker, Madam Vice President, our First Lady and Second Gentleman.\nMembers of Congress and the Cabinet.\nJustices of the Supreme Court.\nMy fellow Americans.\nLast year COVID-19 kept us apart.\nThis year we are finally together again.\nTonight, we meet as Democrats Republicans and Independents.\nBut most importantly as Americans.\nWith a duty to one another to the American people to the Constitution.\nAnd with an unwavering resolve that freedom will always triumph over tyranny.\nSix days ago, Russia\u2019s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways.\nBut he badly miscalculated.\nHe thought he could roll into Ukraine and the world would roll over.\nInstead he met a wall of strength he never imagined.\nHe met the Ukrainian people.\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/nltk.html"}
+{"id": "1f296b407e44-1", "text": "Groups of citizens blocking tanks with their bodies.\nprevious\nCodeTextSplitter\nnext\nRecursive Character\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/nltk.html"}
+{"id": "e5b2c36f394e-0", "text": ".ipynb\n.pdf\nCharacter\nCharacter#\nThis is the simplest method. This splits based on characters (by default \u201c\\n\\n\u201d) and measure chunk length by number of characters.\nHow the text is split: by single character\nHow the chunk size is measured: by number of characters\n# This is a long document we can split up.\nwith open('../../../state_of_the_union.txt') as f:\n state_of_the_union = f.read()\nfrom langchain.text_splitter import CharacterTextSplitter\ntext_splitter = CharacterTextSplitter( \n separator = \"\\n\\n\",\n chunk_size = 1000,\n chunk_overlap = 200,\n length_function = len,\n)\ntexts = text_splitter.create_documents([state_of_the_union])\nprint(texts[0])", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html"}
+{"id": "e5b2c36f394e-1", "text": "print(texts[0])\npage_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\nWith a duty to one another to the American people to the Constitution. \\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \\n\\nSix days ago, Russia\u2019s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \\n\\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \\n\\nHe met the Ukrainian people. \\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={} lookup_index=0\nHere\u2019s an example of passing metadata along with the documents, notice that it is split along with the documents.\nmetadatas = [{\"document\": 1}, {\"document\": 2}]\ndocuments = text_splitter.create_documents([state_of_the_union, state_of_the_union], metadatas=metadatas)\nprint(documents[0])", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html"}
+{"id": "e5b2c36f394e-2", "text": "print(documents[0])\npage_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\nWith a duty to one another to the American people to the Constitution. \\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \\n\\nSix days ago, Russia\u2019s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \\n\\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \\n\\nHe met the Ukrainian people. \\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={'document': 1} lookup_index=0\ntext_splitter.split_text(state_of_the_union)[0]", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html"}
+{"id": "e5b2c36f394e-3", "text": "text_splitter.split_text(state_of_the_union)[0]\n'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\nWith a duty to one another to the American people to the Constitution. \\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \\n\\nSix days ago, Russia\u2019s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \\n\\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \\n\\nHe met the Ukrainian people. \\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.'\nprevious\nGetting Started\nnext\nCodeTextSplitter\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html"}
+{"id": "75f162d065b1-0", "text": ".ipynb\n.pdf\ntiktoken (OpenAI) tokenizer\ntiktoken (OpenAI) tokenizer#\ntiktoken is a fast BPE tokenizer created by OpenAI.\nWe can use it to estimate tokens used. It will probably be more accurate for the OpenAI models.\nHow the text is split: by character passed in\nHow the chunk size is measured: by tiktoken tokenizer\n#!pip install tiktoken\n# This is a long document we can split up.\nwith open('../../../state_of_the_union.txt') as f:\n state_of_the_union = f.read()\nfrom langchain.text_splitter import CharacterTextSplitter\ntext_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)\ntexts = text_splitter.split_text(state_of_the_union)\nprint(texts[0])\nMadam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \nLast year COVID-19 kept us apart. This year we are finally together again. \nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \nWith a duty to one another to the American people to the Constitution.\nprevious\nHugging Face tokenizer\nnext\nVectorstores\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/tiktoken.html"}
+{"id": "8017eeb4dbd1-0", "text": ".ipynb\n.pdf\nspaCy\nspaCy#\nspaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.\nAnother alternative to NLTK is to use Spacy tokenizer.\nHow the text is split: by spaCy tokenizer\nHow the chunk size is measured: by number of characters\n#!pip install spacy\n# This is a long document we can split up.\nwith open('../../../state_of_the_union.txt') as f:\n state_of_the_union = f.read()\nfrom langchain.text_splitter import SpacyTextSplitter\ntext_splitter = SpacyTextSplitter(chunk_size=1000)\ntexts = text_splitter.split_text(state_of_the_union)\nprint(texts[0])\nMadam Speaker, Madam Vice President, our First Lady and Second Gentleman.\nMembers of Congress and the Cabinet.\nJustices of the Supreme Court.\nMy fellow Americans. \nLast year COVID-19 kept us apart.\nThis year we are finally together again. \nTonight, we meet as Democrats Republicans and Independents.\nBut most importantly as Americans. \nWith a duty to one another to the American people to the Constitution. \nAnd with an unwavering resolve that freedom will always triumph over tyranny. \nSix days ago, Russia\u2019s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways.\nBut he badly miscalculated. \nHe thought he could roll into Ukraine and the world would roll over.\nInstead he met a wall of strength he never imagined. \nHe met the Ukrainian people. \nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.\nprevious\nRecursive Character\nnext\nTiktoken\nBy Harrison Chase", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/spacy.html"}
+{"id": "8017eeb4dbd1-1", "text": "previous\nRecursive Character\nnext\nTiktoken\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/spacy.html"}
+{"id": "2ad46927ed10-0", "text": ".ipynb\n.pdf\nCodeTextSplitter\n Contents \nPython\nJS\nMarkdown\nLatex\nHTML\nCodeTextSplitter#\nCodeTextSplitter allows you to split your code with multiple language support. Import enum Language and specify the language.\nfrom langchain.text_splitter import (\n RecursiveCharacterTextSplitter,\n Language,\n)\n# Full list of support languages\n[e.value for e in Language]\n['cpp',\n 'go',\n 'java',\n 'js',\n 'php',\n 'proto',\n 'python',\n 'rst',\n 'ruby',\n 'rust',\n 'scala',\n 'swift',\n 'markdown',\n 'latex',\n 'html']\n# You can also see the separators used for a given language\nRecursiveCharacterTextSplitter.get_separators_for_language(Language.PYTHON)\n['\\nclass ', '\\ndef ', '\\n\\tdef ', '\\n\\n', '\\n', ' ', '']\nPython#\nHere\u2019s an example using the PythonTextSplitter\nPYTHON_CODE = \"\"\"\ndef hello_world():\n print(\"Hello, World!\")\n# Call the function\nhello_world()\n\"\"\"\npython_splitter = RecursiveCharacterTextSplitter.from_language(\n language=Language.PYTHON, chunk_size=50, chunk_overlap=0\n)\npython_docs = python_splitter.create_documents([PYTHON_CODE])\npython_docs\n[Document(page_content='def hello_world():\\n print(\"Hello, World!\")', metadata={}),\n Document(page_content='# Call the function\\nhello_world()', metadata={})]\nJS#\nHere\u2019s an example using the JS text splitter\nJS_CODE = \"\"\"\nfunction helloWorld() {\n console.log(\"Hello, World!\");\n}\n// Call the function\nhelloWorld();\n\"\"\"\njs_splitter = RecursiveCharacterTextSplitter.from_language(", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/code_splitter.html"}
+{"id": "2ad46927ed10-1", "text": "helloWorld();\n\"\"\"\njs_splitter = RecursiveCharacterTextSplitter.from_language(\n language=Language.JS, chunk_size=60, chunk_overlap=0\n)\njs_docs = js_splitter.create_documents([JS_CODE])\njs_docs\n[Document(page_content='function helloWorld() {\\n console.log(\"Hello, World!\");\\n}', metadata={}),\n Document(page_content='// Call the function\\nhelloWorld();', metadata={})]\nMarkdown#\nHere\u2019s an example using the Markdown text splitter.\nmarkdown_text = \"\"\"\n# \ud83e\udd9c\ufe0f\ud83d\udd17 LangChain\n\u26a1 Building applications with LLMs through composability \u26a1\n## Quick Install\n```bash\n# Hopefully this code block isn't split\npip install langchain\n```\nAs an open source project in a rapidly developing field, we are extremely open to contributions.\n\"\"\"\nmd_splitter = RecursiveCharacterTextSplitter.from_language(\n language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0\n)\nmd_docs = md_splitter.create_documents([markdown_text])\nmd_docs\n[Document(page_content='# \ud83e\udd9c\ufe0f\ud83d\udd17 LangChain', metadata={}),\n Document(page_content='\u26a1 Building applications with LLMs through composability \u26a1', metadata={}),\n Document(page_content='## Quick Install', metadata={}),\n Document(page_content=\"```bash\\n# Hopefully this code block isn't split\", metadata={}),\n Document(page_content='pip install langchain', metadata={}),\n Document(page_content='```', metadata={}),\n Document(page_content='As an open source project in a rapidly developing field, we', metadata={}),\n Document(page_content='are extremely open to contributions.', metadata={})]\nLatex#\nHere\u2019s an example on Latex text\nlatex_text = \"\"\"\n\\documentclass{article}", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/code_splitter.html"}
+{"id": "2ad46927ed10-2", "text": "latex_text = \"\"\"\n\\documentclass{article}\n\\begin{document}\n\\maketitle\n\\section{Introduction}\nLarge language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.\n\\subsection{History of LLMs}\nThe earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.\n\\subsection{Applications of LLMs}\nLLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.\n\\end{document}\n\"\"\"\nlatex_splitter = RecursiveCharacterTextSplitter.from_language(\n language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0\n)\nlatex_docs = latex_splitter.create_documents([latex_text])\nlatex_docs\n[Document(page_content='\\\\documentclass{article}\\n\\n\\x08egin{document}\\n\\n\\\\maketitle', metadata={}),\n Document(page_content='\\\\section{Introduction}', metadata={}),\n Document(page_content='Large language models (LLMs) are a type of machine learning', metadata={}),\n Document(page_content='model that can be trained on vast amounts of text data to', metadata={}),\n Document(page_content='generate human-like language. In recent years, LLMs have', metadata={}),", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/code_splitter.html"}
+{"id": "2ad46927ed10-3", "text": "Document(page_content='made significant advances in a variety of natural language', metadata={}),\n Document(page_content='processing tasks, including language translation, text', metadata={}),\n Document(page_content='generation, and sentiment analysis.', metadata={}),\n Document(page_content='\\\\subsection{History of LLMs}', metadata={}),\n Document(page_content='The earliest LLMs were developed in the 1980s and 1990s,', metadata={}),\n Document(page_content='but they were limited by the amount of data that could be', metadata={}),\n Document(page_content='processed and the computational power available at the', metadata={}),\n Document(page_content='time. In the past decade, however, advances in hardware and', metadata={}),\n Document(page_content='software have made it possible to train LLMs on massive', metadata={}),\n Document(page_content='datasets, leading to significant improvements in', metadata={}),\n Document(page_content='performance.', metadata={}),\n Document(page_content='\\\\subsection{Applications of LLMs}', metadata={}),\n Document(page_content='LLMs have many applications in industry, including', metadata={}),\n Document(page_content='chatbots, content creation, and virtual assistants. They', metadata={}),\n Document(page_content='can also be used in academia for research in linguistics,', metadata={}),\n Document(page_content='psychology, and computational linguistics.', metadata={}),\n Document(page_content='\\\\end{document}', metadata={})]\nHTML#\nHere\u2019s an example using an HTML text splitter\nhtml_text = \"\"\"\n\n\n \n \ud83e\udd9c\ufe0f\ud83d\udd17 LangChain\n \n ", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/code_splitter.html"}
+{"id": "2ad46927ed10-4", "text": "color: darkblue;\n }\n \n \n \n \n
\ud83e\udd9c\ufe0f\ud83d\udd17 LangChain
\n
\u26a1 Building applications with LLMs through composability \u26a1
\n
\n \n As an open source project in a rapidly developing field, we are extremely open to contributions.\n
\n \n\n\"\"\"\nhtml_splitter = RecursiveCharacterTextSplitter.from_language(\n language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0\n)\nhtml_docs = html_splitter.create_documents([html_text])\nhtml_docs\n[Document(page_content='\\n\\n ', metadata={}),\n Document(page_content='\ud83e\udd9c\ufe0f\ud83d\udd17 LangChain\\n \\n \\n \\n ', metadata={}),\n Document(page_content='
\ud83e\udd9c\ufe0f\ud83d\udd17 LangChain
', metadata={}),\n Document(page_content='
\u26a1 Building applications with LLMs through', metadata={}),\n Document(page_content='composability \u26a1
', metadata={}),\n Document(page_content='
\\n ', metadata={}),\n Document(page_content='As an open source project in a rapidly', metadata={}),", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/code_splitter.html"}
+{"id": "2ad46927ed10-5", "text": "Document(page_content='As an open source project in a rapidly', metadata={}),\n Document(page_content='developing field, we are extremely open to contributions.', metadata={}),\n Document(page_content='
\\n \\n', metadata={})]\nprevious\nCharacter\nnext\nNLTK\n Contents\n \nPython\nJS\nMarkdown\nLatex\nHTML\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/code_splitter.html"}
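+{"id": "2ad46927ed10-ex0", "text": "A minimal editor-added sketch (not part of the scraped page): just as with Python above, you can inspect the separators the splitter will try for any supported language.\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter, Language\n# Separator lists are defined per language\nRecursiveCharacterTextSplitter.get_separators_for_language(Language.HTML)", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/code_splitter.html"}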
+{"id": "19367f03b39c-0", "text": ".ipynb\n.pdf\nRecursive Character\nRecursive Character#\nThis text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is [\"\\n\\n\", \"\\n\", \" \", \"\"]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.\nHow the text is split: by list of characters\nHow the chunk size is measured: by number of characters\n# This is a long document we can split up.\nwith open('../../../state_of_the_union.txt') as f:\n state_of_the_union = f.read()\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\ntext_splitter = RecursiveCharacterTextSplitter(\n # Set a really small chunk size, just to show.\n chunk_size = 100,\n chunk_overlap = 20,\n length_function = len,\n)\ntexts = text_splitter.create_documents([state_of_the_union])\nprint(texts[0])\nprint(texts[1])\npage_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' lookup_str='' metadata={} lookup_index=0\npage_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' lookup_str='' metadata={} lookup_index=0\ntext_splitter.split_text(state_of_the_union)[:2]\n['Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and',\n 'of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.']\nprevious\nNLTK\nnext\nspaCy\nBy Harrison Chase", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/recursive_text_splitter.html"}
+{"id": "19367f03b39c-1", "text": "previous\nNLTK\nnext\nspaCy\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/recursive_text_splitter.html"}
+{"id": "988884a574d1-0", "text": ".ipynb\n.pdf\nSelf-querying with Chroma\n Contents \nCreating a Chroma vectorstore\nCreating our self-querying retriever\nTesting it out\nFilter k\nSelf-querying with Chroma#\nChroma is a database for building AI applications with embeddings.\nIn the notebook we\u2019ll demo the SelfQueryRetriever wrapped around a Chroma vector store.\nCreating a Chroma vectorstore#\nFirst we\u2019ll want to create a Chroma VectorStore and seed it with some data. We\u2019ve created a small demo set of documents that contain summaries of movies.\nNOTE: The self-query retriever requires you to have lark installed (pip install lark). We also need the chromadb package.\n#!pip install lark\n#!pip install chromadb\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nfrom langchain.schema import Document\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.vectorstores import Chroma\nembeddings = OpenAIEmbeddings()\ndocs = [\n Document(page_content=\"A bunch of scientists bring back dinosaurs and mayhem breaks loose\", metadata={\"year\": 1993, \"rating\": 7.7, \"genre\": \"science fiction\"}),\n Document(page_content=\"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...\", metadata={\"year\": 2010, \"director\": \"Christopher Nolan\", \"rating\": 8.2}),\n Document(page_content=\"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea\", metadata={\"year\": 2006, \"director\": \"Satoshi Kon\", \"rating\": 8.6}),", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html"}
+{"id": "988884a574d1-1", "text": "Document(page_content=\"A bunch of normal-sized women are supremely wholesome and some men pine after them\", metadata={\"year\": 2019, \"director\": \"Greta Gerwig\", \"rating\": 8.3}),\n Document(page_content=\"Toys come alive and have a blast doing so\", metadata={\"year\": 1995, \"genre\": \"animated\"}),\n Document(page_content=\"Three men walk into the Zone, three men walk out of the Zone\", metadata={\"year\": 1979, \"rating\": 9.9, \"director\": \"Andrei Tarkovsky\", \"genre\": \"science fiction\", \"rating\": 9.9})\n]\nvectorstore = Chroma.from_documents(\n docs, embeddings\n)\nUsing embedded DuckDB without persistence: data will be transient\nCreating our self-querying retriever#\nNow we can instantiate our retriever. To do this we\u2019ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.\nfrom langchain.llms import OpenAI\nfrom langchain.retrievers.self_query.base import SelfQueryRetriever\nfrom langchain.chains.query_constructor.base import AttributeInfo\nmetadata_field_info=[\n AttributeInfo(\n name=\"genre\",\n description=\"The genre of the movie\", \n type=\"string or list[string]\", \n ),\n AttributeInfo(\n name=\"year\",\n description=\"The year the movie was released\", \n type=\"integer\", \n ),\n AttributeInfo(\n name=\"director\",\n description=\"The name of the movie director\", \n type=\"string\", \n ),\n AttributeInfo(\n name=\"rating\",\n description=\"A 1-10 rating for the movie\",\n type=\"float\"\n ),\n]", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html"}
+{"id": "988884a574d1-2", "text": "type=\"float\"\n ),\n]\ndocument_content_description = \"Brief summary of a movie\"\nllm = OpenAI(temperature=0)\nretriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)\nTesting it out#\nAnd now we can try actually using our retriever!\n# This example only specifies a relevant query\nretriever.get_relevant_documents(\"What are some movies about dinosaurs\")\nquery='dinosaur' filter=None\n[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),\n Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),\n Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}),\n Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2})]\n# This example only specifies a filter\nretriever.get_relevant_documents(\"I want to watch a movie rated higher than 8.5\")\nquery=' ' filter=Comparison(comparator=, attribute='rating', value=8.5)\n[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}),", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html"}
+{"id": "988884a574d1-3", "text": "Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]\n# This example specifies a query and a filter\nretriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")\nquery='women' filter=Comparison(comparator=, attribute='director', value='Greta Gerwig')\n[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]\n# This example specifies a composite filter\nretriever.get_relevant_documents(\"What's a highly rated (above 8.5) science fiction film?\")\nquery=' ' filter=Operation(operator=, arguments=[Comparison(comparator=, attribute='genre', value='science fiction'), Comparison(comparator=, attribute='rating', value=8.5)])\n[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]\n# This example specifies a query and composite filter\nretriever.get_relevant_documents(\"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated\")", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html"}
+{"id": "988884a574d1-4", "text": "query='toys' filter=Operation(operator=, arguments=[Comparison(comparator=, attribute='year', value=1990), Comparison(comparator=, attribute='year', value=2005), Comparison(comparator=, attribute='genre', value='animated')])\n[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]\nFilter k#\nWe can also use the self query retriever to specify k: the number of documents to fetch.\nWe can do this by passing enable_limit=True to the constructor.\nretriever = SelfQueryRetriever.from_llm(\n llm, \n vectorstore, \n document_content_description, \n metadata_field_info, \n enable_limit=True,\n verbose=True\n)\n# This example only specifies a relevant query\nretriever.get_relevant_documents(\"what are two movies about dinosaurs\")\nquery='dinosaur' filter=None\n[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),\n Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),\n Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}),", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html"}
+{"id": "988884a574d1-5", "text": "Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2})]\nprevious\nChatGPT Plugin\nnext\nCohere Reranker\n Contents\n \nCreating a Chroma vectorstore\nCreating our self-querying retriever\nTesting it out\nFilter k\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html"}
+{"id": "01bc45200067-0", "text": ".ipynb\n.pdf\nSelf-querying\n Contents \nCreating a Pinecone index\nCreating our self-querying retriever\nTesting it out\nFilter k\nSelf-querying#\nIn the notebook we\u2019ll demo the SelfQueryRetriever, which, as the name suggests, has the ability to query itself. Specifically, given any natural language query, the retriever uses a query-constructing LLM chain to write a structured query and then applies that structured query to it\u2019s underlying VectorStore. This allows the retriever to not only use the user-input query for semantic similarity comparison with the contents of stored documented, but to also extract filters from the user query on the metadata of stored documents and to execute those filters.\nCreating a Pinecone index#\nFirst we\u2019ll want to create a Pinecone VectorStore and seed it with some data. We\u2019ve created a small demo set of documents that contain summaries of movies.\nTo use Pinecone, you to have pinecone package installed and you must have an API key and an Environment. Here are the installation instructions.\nNOTE: The self-query retriever requires you to have lark package installed.\n# !pip install lark\n#!pip install pinecone-client\nimport os\nimport pinecone\npinecone.init(api_key=os.environ[\"PINECONE_API_KEY\"], environment=os.environ[\"PINECONE_ENV\"])\n/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pinecone/index.py:4: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)\n from tqdm.autonotebook import tqdm\nfrom langchain.schema import Document\nfrom langchain.embeddings.openai import OpenAIEmbeddings", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query.html"}
+{"id": "01bc45200067-1", "text": "from langchain.schema import Document\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.vectorstores import Pinecone\nembeddings = OpenAIEmbeddings()\n# create new index\npinecone.create_index(\"langchain-self-retriever-demo\", dimension=1536)\ndocs = [\n Document(page_content=\"A bunch of scientists bring back dinosaurs and mayhem breaks loose\", metadata={\"year\": 1993, \"rating\": 7.7, \"genre\": [\"action\", \"science fiction\"]}),\n Document(page_content=\"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...\", metadata={\"year\": 2010, \"director\": \"Christopher Nolan\", \"rating\": 8.2}),\n Document(page_content=\"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea\", metadata={\"year\": 2006, \"director\": \"Satoshi Kon\", \"rating\": 8.6}),\n Document(page_content=\"A bunch of normal-sized women are supremely wholesome and some men pine after them\", metadata={\"year\": 2019, \"director\": \"Greta Gerwig\", \"rating\": 8.3}),\n Document(page_content=\"Toys come alive and have a blast doing so\", metadata={\"year\": 1995, \"genre\": \"animated\"}),\n Document(page_content=\"Three men walk into the Zone, three men walk out of the Zone\", metadata={\"year\": 1979, \"rating\": 9.9, \"director\": \"Andrei Tarkovsky\", \"genre\": [\"science fiction\", \"thriller\"], \"rating\": 9.9})\n]\nvectorstore = Pinecone.from_documents(\n docs, embeddings, index_name=\"langchain-self-retriever-demo\"\n)\nCreating our self-querying retriever#", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query.html"}
+{"id": "01bc45200067-2", "text": ")\nCreating our self-querying retriever#\nNow we can instantiate our retriever. To do this we\u2019ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.\nfrom langchain.llms import OpenAI\nfrom langchain.retrievers.self_query.base import SelfQueryRetriever\nfrom langchain.chains.query_constructor.base import AttributeInfo\nmetadata_field_info=[\n AttributeInfo(\n name=\"genre\",\n description=\"The genre of the movie\", \n type=\"string or list[string]\", \n ),\n AttributeInfo(\n name=\"year\",\n description=\"The year the movie was released\", \n type=\"integer\", \n ),\n AttributeInfo(\n name=\"director\",\n description=\"The name of the movie director\", \n type=\"string\", \n ),\n AttributeInfo(\n name=\"rating\",\n description=\"A 1-10 rating for the movie\",\n type=\"float\"\n ),\n]\ndocument_content_description = \"Brief summary of a movie\"\nllm = OpenAI(temperature=0)\nretriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)\nTesting it out#\nAnd now we can try actually using our retriever!\n# This example only specifies a relevant query\nretriever.get_relevant_documents(\"What are some movies about dinosaurs\")\nquery='dinosaur' filter=None\n[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': ['action', 'science fiction'], 'rating': 7.7, 'year': 1993.0}),", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query.html"}
+{"id": "01bc45200067-3", "text": "Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0}),\n Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}),\n Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'director': 'Christopher Nolan', 'rating': 8.2, 'year': 2010.0})]\n# This example only specifies a filter\nretriever.get_relevant_documents(\"I want to watch a movie rated higher than 8.5\")\nquery=' ' filter=Comparison(comparator=, attribute='rating', value=8.5)\n[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}),\n Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})]\n# This example specifies a query and a filter\nretriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")\nquery='women' filter=Comparison(comparator=, attribute='director', value='Greta Gerwig')", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query.html"}
+{"id": "01bc45200067-4", "text": "[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'director': 'Greta Gerwig', 'rating': 8.3, 'year': 2019.0})]\n# This example specifies a composite filter\nretriever.get_relevant_documents(\"What's a highly rated (above 8.5) science fiction film?\")\nquery=' ' filter=Operation(operator=, arguments=[Comparison(comparator=, attribute='genre', value='science fiction'), Comparison(comparator=, attribute='rating', value=8.5)])\n[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})]\n# This example specifies a query and composite filter\nretriever.get_relevant_documents(\"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated\")\nquery='toys' filter=Operation(operator=, arguments=[Comparison(comparator=, attribute='year', value=1990.0), Comparison(comparator=, attribute='year', value=2005.0), Comparison(comparator=, attribute='genre', value='animated')])\n[Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0})]\nFilter k#\nWe can also use the self query retriever to specify k: the number of documents to fetch.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query.html"}
+{"id": "01bc45200067-5", "text": "We can do this by passing enable_limit=True to the constructor.\nretriever = SelfQueryRetriever.from_llm(\n llm, \n vectorstore, \n document_content_description, \n metadata_field_info, \n enable_limit=True,\n verbose=True\n)\n# This example only specifies a relevant query\nretriever.get_relevant_documents(\"What are two movies about dinosaurs\")\nprevious\nSelf-querying with Qdrant\nnext\nSVM\n Contents\n \nCreating a Pinecone index\nCreating our self-querying retriever\nTesting it out\nFilter k\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query.html"}
+{"id": "ac246f227c88-0", "text": ".ipynb\n.pdf\nVectorStore\n Contents \nMaximum Marginal Relevance Retrieval\nSimilarity Score Threshold Retrieval\nSpecifying top k\nVectorStore#\nThe index - and therefore the retriever - that LangChain has the most support for is the VectorStoreRetriever. As the name suggests, this retriever is backed heavily by a VectorStore.\nOnce you construct a VectorStore, its very easy to construct a retriever. Let\u2019s walk through an example.\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import FAISS\nfrom langchain.embeddings import OpenAIEmbeddings\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ntexts = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\ndb = FAISS.from_documents(texts, embeddings)\nExiting: Cleaning up .chroma directory\nretriever = db.as_retriever()\ndocs = retriever.get_relevant_documents(\"what did he say about ketanji brown jackson\")\nMaximum Marginal Relevance Retrieval#\nBy default, the vectorstore retriever uses similarity search. If the underlying vectorstore support maximum marginal relevance search, you can specify that as the search type.\nretriever = db.as_retriever(search_type=\"mmr\")\ndocs = retriever.get_relevant_documents(\"what did he say abotu ketanji brown jackson\")\nSimilarity Score Threshold Retrieval#\nYou can also use a retrieval method that sets a similarity score threshold and only returns documents with a score above that threshold\nretriever = db.as_retriever(search_type=\"similarity_score_threshold\", search_kwargs={\"score_threshold\": .5})", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/vectorstore.html"}
+{"id": "ac246f227c88-1", "text": "docs = retriever.get_relevant_documents(\"what did he say abotu ketanji brown jackson\")\nSpecifying top k#\nYou can also specify search kwargs like k to use when doing retrieval.\nretriever = db.as_retriever(search_kwargs={\"k\": 1})\ndocs = retriever.get_relevant_documents(\"what did he say abotu ketanji brown jackson\")\nlen(docs)\n1\nprevious\nTime Weighted VectorStore\nnext\nVespa\n Contents\n \nMaximum Marginal Relevance Retrieval\nSimilarity Score Threshold Retrieval\nSpecifying top k\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/vectorstore.html"}
+{"id": "d52edacd440a-0", "text": ".ipynb\n.pdf\nContextual Compression\n Contents \nContextual Compression\nUsing a vanilla vector store retriever\nAdding contextual compression with an LLMChainExtractor\nMore built-in compressors: filters\nLLMChainFilter\nEmbeddingsFilter\nStringing compressors and document transformers together\nContextual Compression#\nThis notebook introduces the concept of DocumentCompressors and the ContextualCompressionRetriever. The core idea is simple: given a specific query, we should be able to return only the documents relevant to that query, and only the parts of those documents that are relevant. The ContextualCompressionsRetriever is a wrapper for another retriever that iterates over the initial output of the base retriever and filters and compresses those initial documents, so that only the most relevant information is returned.\n# Helper function for printing docs\ndef pretty_print_docs(docs):\n print(f\"\\n{'-' * 100}\\n\".join([f\"Document {i+1}:\\n\\n\" + d.page_content for i, d in enumerate(docs)]))\nUsing a vanilla vector store retriever#\nLet\u2019s start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can see that given an example question our retriever returns one or two relevant docs and a few irrelevant docs. And even the relevant docs have a lot of irrelevant information in them.\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.document_loaders import TextLoader\nfrom langchain.vectorstores import FAISS\ndocuments = TextLoader('../../../state_of_the_union.txt').load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ntexts = text_splitter.split_documents(documents)", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html"}
+{"id": "d52edacd440a-1", "text": "texts = text_splitter.split_documents(documents)\nretriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever()\ndocs = retriever.get_relevant_documents(\"What did the president say about Ketanji Brown Jackson\")\npretty_print_docs(docs)\nDocument 1:\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\n----------------------------------------------------------------------------------------------------\nDocument 2:\nA former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she\u2019s been nominated, she\u2019s received a broad range of support\u2014from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \nWe can do both. At our border, we\u2019ve installed new technology like cutting-edge scanners to better detect drug smuggling. \nWe\u2019ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html"}
+{"id": "d52edacd440a-2", "text": "We\u2019ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \nWe\u2019re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \nWe\u2019re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.\n----------------------------------------------------------------------------------------------------\nDocument 3:\nAnd for our LGBTQ+ Americans, let\u2019s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \nWhile it often appears that we never agree, that isn\u2019t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \nAnd soon, we\u2019ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \nSo tonight I\u2019m offering a Unity Agenda for the Nation. Four big things we can do together. \nFirst, beat the opioid epidemic.\n----------------------------------------------------------------------------------------------------\nDocument 4:\nTonight, I\u2019m announcing a crackdown on these companies overcharging American businesses and consumers. \nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \nThat ends on my watch. \nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \nWe\u2019ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html"}
+{"id": "d52edacd440a-3", "text": "Let\u2019s pass the Paycheck Fairness Act and paid leave. \nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \nLet\u2019s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill\u2014our First Lady who teaches full-time\u2014calls America\u2019s best-kept secret: community colleges.\nAdding contextual compression with an LLMChainExtractor#\nNow let\u2019s wrap our base retriever with a ContextualCompressionRetriever. We\u2019ll add an LLMChainExtractor, which will iterate over the initially returned documents and extract from each only the content that is relevant to the query.\nfrom langchain.llms import OpenAI\nfrom langchain.retrievers import ContextualCompressionRetriever\nfrom langchain.retrievers.document_compressors import LLMChainExtractor\nllm = OpenAI(temperature=0)\ncompressor = LLMChainExtractor.from_llm(llm)\ncompression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=retriever)\ncompressed_docs = compression_retriever.get_relevant_documents(\"What did the president say about Ketanji Jackson Brown\")\npretty_print_docs(compressed_docs)\nDocument 1:\n\"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\"\n----------------------------------------------------------------------------------------------------\nDocument 2:", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html"}
+{"id": "d52edacd440a-4", "text": "----------------------------------------------------------------------------------------------------\nDocument 2:\n\"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she\u2019s been nominated, she\u2019s received a broad range of support\u2014from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\"\nMore built-in compressors: filters#\nLLMChainFilter#\nThe LLMChainFilter is a slightly simpler but more robust compressor that uses an LLM chain to decide which of the initially retrieved documents to filter out and which ones to return, without manipulating the document contents.\nfrom langchain.retrievers.document_compressors import LLMChainFilter\n_filter = LLMChainFilter.from_llm(llm)\ncompression_retriever = ContextualCompressionRetriever(base_compressor=_filter, base_retriever=retriever)\ncompressed_docs = compression_retriever.get_relevant_documents(\"What did the president say about Ketanji Jackson Brown\")\npretty_print_docs(compressed_docs)\nDocument 1:\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nEmbeddingsFilter#", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html"}
+{"id": "d52edacd440a-5", "text": "EmbeddingsFilter#\nMaking an extra LLM call over each retrieved document is expensive and slow. The EmbeddingsFilter provides a cheaper and faster option by embedding the documents and query and only returning those documents which have sufficiently similar embeddings to the query.\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.retrievers.document_compressors import EmbeddingsFilter\nembeddings = OpenAIEmbeddings()\nembeddings_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)\ncompression_retriever = ContextualCompressionRetriever(base_compressor=embeddings_filter, base_retriever=retriever)\ncompressed_docs = compression_retriever.get_relevant_documents(\"What did the president say about Ketanji Jackson Brown\")\npretty_print_docs(compressed_docs)\nDocument 1:\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\n----------------------------------------------------------------------------------------------------\nDocument 2:", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html"}
+{"id": "d52edacd440a-6", "text": "----------------------------------------------------------------------------------------------------\nDocument 2:\nA former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she\u2019s been nominated, she\u2019s received a broad range of support\u2014from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \nWe can do both. At our border, we\u2019ve installed new technology like cutting-edge scanners to better detect drug smuggling. \nWe\u2019ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \nWe\u2019re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \nWe\u2019re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.\n----------------------------------------------------------------------------------------------------\nDocument 3:\nAnd for our LGBTQ+ Americans, let\u2019s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \nWhile it often appears that we never agree, that isn\u2019t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \nAnd soon, we\u2019ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \nSo tonight I\u2019m offering a Unity Agenda for the Nation. Four big things we can do together. \nFirst, beat the opioid epidemic.\nStringing compressors and document transformers together#", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html"}
+{"id": "d52edacd440a-7", "text": "First, beat the opioid epidemic.\nStringing compressors and document transformers together#\nUsing the DocumentCompressorPipeline we can also easily combine multiple compressors in sequence. Along with compressors we can add BaseDocumentTransformers to our pipeline, which don\u2019t perform any contextual compression but simply perform some transformation on a set of documents. For example, TextSplitters can be used as document transformers to split documents into smaller pieces, and the EmbeddingsRedundantFilter can be used to filter out redundant documents based on embedding similarity between documents.\nBelow we create a compressor pipeline by first splitting our docs into smaller chunks, then removing redundant documents, and then filtering based on relevance to the query.\nfrom langchain.document_transformers import EmbeddingsRedundantFilter\nfrom langchain.retrievers.document_compressors import DocumentCompressorPipeline\nfrom langchain.text_splitter import CharacterTextSplitter\nsplitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=\". \")\nredundant_filter = EmbeddingsRedundantFilter(embeddings=embeddings)\nrelevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)\npipeline_compressor = DocumentCompressorPipeline(\n transformers=[splitter, redundant_filter, relevant_filter]\n)\ncompression_retriever = ContextualCompressionRetriever(base_compressor=pipeline_compressor, base_retriever=retriever)\ncompressed_docs = compression_retriever.get_relevant_documents(\"What did the president say about Ketanji Jackson Brown\")\npretty_print_docs(compressed_docs)\nDocument 1:\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson\n----------------------------------------------------------------------------------------------------\nDocument 2:", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html"}
+{"id": "d52edacd440a-8", "text": "----------------------------------------------------------------------------------------------------\nDocument 2:\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \nWhile it often appears that we never agree, that isn\u2019t true. I signed 80 bipartisan bills into law last year\n----------------------------------------------------------------------------------------------------\nDocument 3:\nA former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html"}
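+{"id": "d52edacd440a-9", "text": "A compression retriever can be used anywhere a plain retriever is accepted. As a minimal sketch (assuming the compression_retriever built above and an OpenAI API key in the environment), we can wire it into a RetrievalQA chain so the LLM only ever sees the compressed context:\nfrom langchain.chains import RetrievalQA\nfrom langchain.llms import OpenAI\n# \"stuff\" places the compressed documents directly into the prompt.\nqa = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), chain_type=\"stuff\", retriever=compression_retriever)\nqa.run(\"What did the president say about Ketanji Brown Jackson?\")", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html"}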
+{"id": "543779df7dd9-0", "text": ".ipynb\n.pdf\nElasticSearch BM25\n Contents \nCreate New Retriever\nAdd texts (if necessary)\nUse Retriever\nElasticSearch BM25#\nElasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.\nIn information retrieval, Okapi BM25 (BM is an abbreviation of best matching) is a ranking function used by search engines to estimate the relevance of documents to a given search query. It is based on the probabilistic retrieval framework developed in the 1970s and 1980s by Stephen E. Robertson, Karen Sp\u00e4rck Jones, and others.\nThe name of the actual ranking function is BM25. The fuller name, Okapi BM25, includes the name of the first system to use it, which was the Okapi information retrieval system, implemented at London\u2019s City University in the 1980s and 1990s. BM25 and its newer variants, e.g. BM25F (a version of BM25 that can take document structure and anchor text into account), represent TF-IDF-like retrieval functions used in document retrieval.\nThis notebook shows how to use a retriever that uses ElasticSearch and BM25.\nFor more information on the details of BM25 see this blog post.\n#!pip install elasticsearch\nfrom langchain.retrievers import ElasticSearchBM25Retriever\nCreate New Retriever#\nelasticsearch_url=\"http://localhost:9200\"\nretriever = ElasticSearchBM25Retriever.create(elasticsearch_url, \"langchain-index-4\")\n# Alternatively, you can load an existing index\n# import elasticsearch\n# elasticsearch_url=\"http://localhost:9200\"", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/elastic_search_bm25.html"}
+{"id": "543779df7dd9-1", "text": "# import elasticsearch\n# elasticsearch_url=\"http://localhost:9200\"\n# retriever = ElasticSearchBM25Retriever(elasticsearch.Elasticsearch(elasticsearch_url), \"langchain-index\")\nAdd texts (if necessary)#\nWe can optionally add texts to the retriever (if they aren\u2019t already in there)\nretriever.add_texts([\"foo\", \"bar\", \"world\", \"hello\", \"foo bar\"])\n['cbd4cb47-8d9f-4f34-b80e-ea871bc49856',\n 'f3bd2e24-76d1-4f9b-826b-ec4c0e8c7365',\n '8631bfc8-7c12-48ee-ab56-8ad5f373676e',\n '8be8374c-3253-4d87-928d-d73550a2ecf0',\n 'd79f457b-2842-4eab-ae10-77aa420b53d7']\nUse Retriever#\nWe can now use the retriever!\nresult = retriever.get_relevant_documents(\"foo\")\nresult\n[Document(page_content='foo', metadata={}),\n Document(page_content='foo bar', metadata={})]", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/elastic_search_bm25.html"}
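+{"id": "543779df7dd9-2", "text": "To make the ranking function concrete, here is a toy sketch of the BM25 score for a single query term. It mirrors the standard formula with the usual k1 and b parameters; it illustrates the math only and is not how Elasticsearch is invoked:\nimport math\ndef bm25_term_score(tf, doc_len, avg_doc_len, n_docs, doc_freq, k1=1.5, b=0.75):\n    # Inverse document frequency: rarer terms contribute more.\n    idf = math.log(1 + (n_docs - doc_freq + 0.5) / (doc_freq + 0.5))\n    # Term-frequency saturation, normalized by document length.\n    norm_tf = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))\n    return idf * norm_tf\n# A term appearing 3 times in a 120-word doc, in a corpus of 1000 docs where 50 contain it.\nprint(bm25_term_score(tf=3, doc_len=120, avg_doc_len=100, n_docs=1000, doc_freq=50))", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/elastic_search_bm25.html"}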
+{"id": "df6a4725e1f4-0", "text": ".ipynb\n.pdf\nTime Weighted VectorStore\n Contents \nLow Decay Rate\nHigh Decay Rate\nVirtual Time\nTime Weighted VectorStore#\nThis retriever uses a combination of semantic similarity and a time decay.\nThe algorithm for scoring them is:\nsemantic_similarity + (1.0 - decay_rate) ** hours_passed\nNotably, hours_passed refers to the hours passed since the object in the retriever was last accessed, not since it was created. This means that frequently accessed objects remain \u201cfresh.\u201d\nimport faiss\nfrom datetime import datetime, timedelta\nfrom langchain.docstore import InMemoryDocstore\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.retrievers import TimeWeightedVectorStoreRetriever\nfrom langchain.schema import Document\nfrom langchain.vectorstores import FAISS\nLow Decay Rate#\nA low decay rate (in this example, to be extreme, we will set it close to 0) means memories will be \u201cremembered\u201d for longer. A decay rate of 0 means memories will never be forgotten, making this retriever equivalent to a vector lookup.\n# Define your embedding model\nembeddings_model = OpenAIEmbeddings()\n# Initialize the vectorstore as empty\nembedding_size = 1536\nindex = faiss.IndexFlatL2(embedding_size)\nvectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})\nretriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=.0000000000000000000000001, k=1) \nyesterday = datetime.now() - timedelta(days=1)\nretriever.add_documents([Document(page_content=\"hello world\", metadata={\"last_accessed_at\": yesterday})])\nretriever.add_documents([Document(page_content=\"hello foo\")])", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/time_weighted_vectorstore.html"}
+{"id": "df6a4725e1f4-1", "text": "retriever.add_documents([Document(page_content=\"hello foo\")])\n['d7f85756-2371-4bdf-9140-052780a0f9b3']\n# \"Hello World\" is returned first because it is most salient, and the decay rate is close to 0., meaning it's still recent enough\nretriever.get_relevant_documents(\"hello world\")\n[Document(page_content='hello world', metadata={'last_accessed_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 678341), 'created_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 279596), 'buffer_idx': 0})]\nHigh Decay Rate#\nWith a high decay rate (e.g., several 9\u2019s), the recency score quickly goes to 0! If you set this all the way to 1, recency is 0 for all objects, once again making this equivalent to a vector lookup.\n# Define your embedding model\nembeddings_model = OpenAIEmbeddings()\n# Initialize the vectorstore as empty\nembedding_size = 1536\nindex = faiss.IndexFlatL2(embedding_size)\nvectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})\nretriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=.999, k=1) \nyesterday = datetime.now() - timedelta(days=1)\nretriever.add_documents([Document(page_content=\"hello world\", metadata={\"last_accessed_at\": yesterday})])\nretriever.add_documents([Document(page_content=\"hello foo\")])\n['40011466-5bbe-4101-bfd1-e22e7f505de2']", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/time_weighted_vectorstore.html"}
+{"id": "df6a4725e1f4-2", "text": "# \"Hello Foo\" is returned first because \"hello world\" is mostly forgotten\nretriever.get_relevant_documents(\"hello world\")\n[Document(page_content='hello foo', metadata={'last_accessed_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 494798), 'created_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 178722), 'buffer_idx': 1})]\nVirtual Time#\nUsing some utils in LangChain, you can mock out the time component\nfrom langchain.utils import mock_now\nimport datetime\n# Notice the last access time is that date time\nwith mock_now(datetime.datetime(2011, 2, 3, 10, 11)):\n print(retriever.get_relevant_documents(\"hello world\"))\n[Document(page_content='hello world', metadata={'last_accessed_at': MockDateTime(2011, 2, 3, 10, 11), 'created_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 279596), 'buffer_idx': 0})]", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/time_weighted_vectorstore.html"}
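+{"id": "df6a4725e1f4-3", "text": "The scoring rule is easy to check by hand. A toy sketch that mirrors the formula above (the formula only, not the library internals):\ndef time_weighted_score(semantic_similarity, decay_rate, hours_passed):\n    # The recency bonus decays toward 0 as hours since last access grow.\n    return semantic_similarity + (1.0 - decay_rate) ** hours_passed\n# With a near-zero decay rate, a day-old memory keeps nearly all of its recency bonus.\nprint(time_weighted_score(0.8, 1e-25, 24))  # ~1.8\n# With decay_rate=0.999, the bonus vanishes within hours.\nprint(time_weighted_score(0.8, 0.999, 24))  # ~0.8", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/time_weighted_vectorstore.html"}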
+{"id": "bce31c051c51-0", "text": ".ipynb\n.pdf\nWikipedia\n Contents \nInstallation\nExamples\nRunning retriever\nQuestion Answering on facts\nWikipedia#\nWikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.\nThis notebook shows how to retrieve wiki pages from wikipedia.org into the Document format that is used downstream.\nInstallation#\nFirst, you need to install the wikipedia python package.\n#!pip install wikipedia\nWikipediaRetriever has these arguments:\noptional lang: default=\u201den\u201d. Use it to search in a specific language part of Wikipedia\noptional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.\noptional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (the date the document was published or last updated), title, and Summary. If True, the other fields are also downloaded.\nget_relevant_documents() has one argument, query: free text which is used to find documents in Wikipedia\nExamples#\nRunning retriever#\nfrom langchain.retrievers import WikipediaRetriever\nretriever = WikipediaRetriever()\ndocs = retriever.get_relevant_documents(query='HUNTER X HUNTER')\ndocs[0].metadata # meta-information of the Document\n{'title': 'Hunter \u00d7 Hunter',", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/wikipedia.html"}
+{"id": "bce31c051c51-1", "text": "'summary': 'Hunter \u00d7 Hunter (stylized as HUNTER\u00d7HUNTER and pronounced \"hunter hunter\") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\\'s sh\u014dnen manga magazine Weekly Sh\u014dnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tank\u014dbon volumes as of November 2022. The story focuses on a young boy named Gon Freecss who discovers that his father, who left him at a young age, is actually a world-renowned Hunter, a licensed professional who specializes in fantastical pursuits such as locating rare or unidentified animal species, treasure hunting, surveying unexplored enclaves, or hunting down lawless individuals. Gon departs on a journey to become a Hunter and eventually find his father. Along the way, Gon meets various other Hunters and encounters the paranormal.\\nHunter \u00d7 Hunter was adapted into a 62-episode anime television series produced by Nippon Animation and directed by Kazuhiro Furuhashi, which ran on Fuji Television from October 1999 to March 2001. Three separate original video animations (OVAs) totaling 30 episodes were subsequently produced by Nippon Animation and released in Japan from 2002 to 2004. A second anime television series by Madhouse aired on Nippon Television from October 2011 to September 2014, totaling 148 episodes, with two animated theatrical films released in 2013. There are also numerous audio albums, video games, musicals, and other media based on Hunter \u00d7 Hunter.\\nThe manga has been translated into English and released in North America by Viz Media since April 2005. Both television series have been also licensed by Viz Media, with the first series having aired on the Funimation Channel in 2009 and the second series broadcast", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/wikipedia.html"}
+{"id": "bce31c051c51-2", "text": "with the first series having aired on the Funimation Channel in 2009 and the second series broadcast on Adult Swim\\'s Toonami programming block from April 2016 to June 2019.\\nHunter \u00d7 Hunter has been a huge critical and financial success and has become one of the best-selling manga series of all time, having over 84 million copies in circulation by July 2022.\\n\\n'}", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/wikipedia.html"}
+{"id": "bce31c051c51-3", "text": "docs[0].page_content[:400] # the content of the Document \n'Hunter \u00d7 Hunter (stylized as HUNTER\u00d7HUNTER and pronounced \"hunter hunter\") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\\'s sh\u014dnen manga magazine Weekly Sh\u014dnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tank\u014dbon volumes as of November 2022. The sto'\nQuestion Answering on facts#\n# get a token: https://platform.openai.com/account/api-keys\nfrom getpass import getpass\nOPENAI_API_KEY = getpass()\n \u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\nimport os\nos.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.chains import ConversationalRetrievalChain\nmodel = ChatOpenAI(model_name='gpt-3.5-turbo') # switch to 'gpt-4'\nqa = ConversationalRetrievalChain.from_llm(model,retriever=retriever)\nquestions = [\n \"What is Apify?\",\n \"When the Monument to the Martyrs of the 1830 Revolution was created?\",\n \"What is the Abhayagiri Vih\u0101ra?\", \n # \"How big is Wikip\u00e9dia en fran\u00e7ais?\",\n] \nchat_history = []\nfor question in questions: \n result = qa({\"question\": question, \"chat_history\": chat_history})\n chat_history.append((question, result['answer']))\n print(f\"-> **Question**: {question} \\n\")\n print(f\"**Answer**: {result['answer']} \\n\")\n-> **Question**: What is Apify?", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/wikipedia.html"}
+{"id": "bce31c051c51-4", "text": "-> **Question**: What is Apify? \n**Answer**: Apify is a platform that allows you to easily automate web scraping, data extraction and web automation. It provides a cloud-based infrastructure for running web crawlers and other automation tasks, as well as a web-based tool for building and managing your crawlers. Additionally, Apify offers a marketplace for buying and selling pre-built crawlers and related services. \n-> **Question**: When the Monument to the Martyrs of the 1830 Revolution was created? \n**Answer**: Apify is a web scraping and automation platform that enables you to extract data from websites, turn unstructured data into structured data, and automate repetitive tasks. It provides a user-friendly interface for creating web scraping scripts without any coding knowledge. Apify can be used for various web scraping tasks such as data extraction, web monitoring, content aggregation, and much more. Additionally, it offers various features such as proxy support, scheduling, and integration with other tools to make web scraping and automation tasks easier and more efficient. \n-> **Question**: What is the Abhayagiri Vih\u0101ra? \n**Answer**: Abhayagiri Vih\u0101ra was a major monastery site of Theravada Buddhism that was located in Anuradhapura, Sri Lanka. It was founded in the 2nd century BCE and is considered to be one of the most important monastic complexes in Sri Lanka.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/wikipedia.html"}
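+{"id": "bce31c051c51-5", "text": "The constructor arguments described above can be combined. A minimal sketch (assuming the wikipedia package is installed) that searches the German-language Wikipedia and keeps downloads small:\nfrom langchain.retrievers import WikipediaRetriever\n# Search de.wikipedia.org, cap the number of fetched pages at 3, and keep all metadata fields.\nretriever = WikipediaRetriever(lang=\"de\", load_max_docs=3, load_all_available_meta=True)\ndocs = retriever.get_relevant_documents(\"Albert Einstein\")\nfor d in docs:\n    print(d.metadata[\"title\"])", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/wikipedia.html"}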
+{"id": "e90baab6ce7d-0", "text": ".ipynb\n.pdf\nDataberry\n Contents \nQuery\nDataberry#\nThe Databerry platform brings data from anywhere (Datasources: Text, PDF, Word, PowerPoint, Excel, Notion, Airtable, Google Sheets, etc.) into Datastores (containers of multiple Datasources).\nThen your Datastores can be connected to ChatGPT via Plugins, or to any other Large Language Model (LLM) via the Databerry API.\nThis notebook shows how to use Databerry\u2019s retriever.\nFirst, you will need to sign up for Databerry, create a datastore, add some data, and get your datastore API endpoint URL. You will also need your API key.\nQuery#\nNow that our index is set up, we can set up a retriever and start querying it.\nfrom langchain.retrievers import DataberryRetriever\nretriever = DataberryRetriever(\n datastore_url=\"https://clg1xg2h80000l708dymr0fxc.databerry.ai/query\",\n # api_key=\"DATABERRY_API_KEY\", # optional if datastore is public\n # top_k=10 # optional\n)\nretriever.get_relevant_documents(\"What is Daftpage?\")", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/databerry.html"}
+{"id": "e90baab6ce7d-1", "text": ")\nretriever.get_relevant_documents(\"What is Daftpage?\")\n[Document(page_content='\u2728 Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramGetting StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!DaftpageCopyright \u00a9 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program\ud83d\udc7e Discord', metadata={'source': 'https:/daftpage.com/help/getting-started', 'score': 0.8697265}),", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/databerry.html"}
+{"id": "e90baab6ce7d-2", "text": "Document(page_content=\"\u2728 Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramHelp CenterWelcome to Daftpage\u2019s help center\u2014the one-stop shop for learning everything about building websites with Daftpage.Daftpage is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here\u2728 Create your first site\ud83e\uddf1 Add blocks\ud83d\ude80 PublishGuides\ud83d\udd16 Add a custom domainFeatures\ud83d\udd25 Drops\ud83c\udfa8 Drawings\ud83d\udc7b Ghost mode\ud83d\udc80 Skeleton modeCant find the answer you're looking for?mail us at support@daftpage.comJoin the awesome Daftpage community on: \ud83d\udc7e DiscordDaftpageCopyright \u00a9 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program\ud83d\udc7e Discord\", metadata={'source': 'https:/daftpage.com/help', 'score': 0.86570895}),", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/databerry.html"}
+{"id": "e90baab6ce7d-3", "text": "Document(page_content=\" is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here\u2728 Create your first site\ud83e\uddf1 Add blocks\ud83d\ude80 PublishGuides\ud83d\udd16 Add a custom domainFeatures\ud83d\udd25 Drops\ud83c\udfa8 Drawings\ud83d\udc7b Ghost mode\ud83d\udc80 Skeleton modeCant find the answer you're looking for?mail us at support@daftpage.comJoin the awesome Daftpage community on: \ud83d\udc7e DiscordDaftpageCopyright \u00a9 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program\ud83d\udc7e Discord\", metadata={'source': 'https:/daftpage.com/help', 'score': 0.8645384})]", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/databerry.html"}
+{"id": "c54e50d36f0e-0", "text": ".ipynb\n.pdf\nPinecone Hybrid Search\n Contents \nSetup Pinecone\nGet embeddings and sparse encoders\nLoad Retriever\nAdd texts (if necessary)\nUse Retriever\nPinecone Hybrid Search#\nPinecone is a vector database with broad functionality.\nThis notebook goes over how to use a retriever that under the hood uses Pinecone and Hybrid Search.\nThe logic of this retriever is taken from this documentation.\nTo use Pinecone, you must have an API key and an Environment.\nHere are the installation instructions.\n#!pip install pinecone-client pinecone-text\nimport os\nimport getpass\nos.environ['PINECONE_API_KEY'] = getpass.getpass('Pinecone API Key:')\nfrom langchain.retrievers import PineconeHybridSearchRetriever\nos.environ['PINECONE_ENVIRONMENT'] = getpass.getpass('Pinecone Environment:')\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nSetup Pinecone#\nYou should only have to do this part once.\nNote: it\u2019s important to make sure that the \u201ccontext\u201d field that holds the document text in the metadata is not indexed. Currently you need to explicitly specify the fields you do want to index. For more information, check out Pinecone\u2019s docs.\nimport os\nimport pinecone\napi_key = os.getenv(\"PINECONE_API_KEY\") or \"PINECONE_API_KEY\"\n# find environment next to your API key in the Pinecone console\nenv = os.getenv(\"PINECONE_ENVIRONMENT\") or \"PINECONE_ENVIRONMENT\"\nindex_name = \"langchain-pinecone-hybrid-search\"\npinecone.init(api_key=api_key, environment=env)", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/pinecone_hybrid_search.html"}
+{"id": "c54e50d36f0e-1", "text": "pinecone.init(api_key=api_key, environment=env)\npinecone.whoami()\nWhoAmIResponse(username='load', user_label='label', projectname='load-test')\n # create the index\npinecone.create_index(\n name = index_name,\n dimension = 1536, # dimensionality of dense model\n metric = \"dotproduct\", # sparse values supported only for dotproduct\n pod_type = \"s1\",\n metadata_config={\"indexed\": []} # see explanation above\n)\nNow that it\u2019s created, we can use it\nindex = pinecone.Index(index_name)\nGet embeddings and sparse encoders#\nEmbeddings are used for the dense vectors, and a tokenizer is used for the sparse vector\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nTo encode the text to sparse values you can either choose SPLADE or BM25. For out-of-domain tasks we recommend using BM25.\nFor more information about the sparse encoders you can check out the pinecone-text library docs.\nfrom pinecone_text.sparse import BM25Encoder\n# or from pinecone_text.sparse import SpladeEncoder if you wish to work with SPLADE\n# use default tf-idf values\nbm25_encoder = BM25Encoder().default()\nThe above code uses default tf-idf values. It\u2019s highly recommended to fit the tf-idf values to your own corpus. You can do so as follows:\ncorpus = [\"foo\", \"bar\", \"world\", \"hello\"]\n# fit tf-idf values on your corpus\nbm25_encoder.fit(corpus)\n# store the values to a json file\nbm25_encoder.dump(\"bm25_values.json\")\n# load to your BM25Encoder object\nbm25_encoder = BM25Encoder().load(\"bm25_values.json\")\nLoad Retriever#", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/pinecone_hybrid_search.html"}
+{"id": "c54e50d36f0e-2", "text": "Load Retriever#\nWe can now construct the retriever!\nretriever = PineconeHybridSearchRetriever(embeddings=embeddings, sparse_encoder=bm25_encoder, index=index)\nAdd texts (if necessary)#\nWe can optionally add texts to the retriever (if they aren\u2019t already in there)\nretriever.add_texts([\"foo\", \"bar\", \"world\", \"hello\"])\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1/1 [00:02<00:00, 2.27s/it]\nUse Retriever#\nWe can now use the retriever!\nresult = retriever.get_relevant_documents(\"foo\")\nresult[0]\nDocument(page_content='foo', metadata={})", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/pinecone_hybrid_search.html"}
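+{"id": "c54e50d36f0e-3", "text": "The retriever also exposes a weighting between the dense and sparse scores. A minimal sketch reusing the embeddings, bm25_encoder and index objects from above (treat the alpha field and its exact semantics as an assumption about this version; here 1.0 would mean pure dense search):\n# Favor keyword (sparse) matches by lowering alpha.\nretriever = PineconeHybridSearchRetriever(embeddings=embeddings, sparse_encoder=bm25_encoder, index=index, top_k=4, alpha=0.3)\nretriever.get_relevant_documents(\"hello world\")", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/pinecone_hybrid_search.html"}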
+{"id": "1452b0547a90-0", "text": ".ipynb\n.pdf\nkNN\n Contents \nCreate New Retriever with Texts\nUse Retriever\nkNN#\nIn statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression.\nThis notebook goes over how to use a retriever that under the hood uses kNN.\nLargely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb\nfrom langchain.retrievers import KNNRetriever\nfrom langchain.embeddings import OpenAIEmbeddings\nCreate New Retriever with Texts#\nretriever = KNNRetriever.from_texts([\"foo\", \"bar\", \"world\", \"hello\", \"foo bar\"], OpenAIEmbeddings())\nUse Retriever#\nWe can now use the retriever!\nresult = retriever.get_relevant_documents(\"foo\")\nresult\n[Document(page_content='foo', metadata={}),\n Document(page_content='foo bar', metadata={}),\n Document(page_content='hello', metadata={}),\n Document(page_content='bar', metadata={})]", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/knn.html"}
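+{"id": "1452b0547a90-1", "text": "The number of neighbors returned is configurable. A minimal sketch (the k and relevancy_threshold fields, and their exact semantics, are an assumption about this version of the retriever):\nretriever = KNNRetriever.from_texts([\"foo\", \"bar\", \"world\", \"hello\", \"foo bar\"], OpenAIEmbeddings())\n# Return at most 2 neighbors and drop weak matches below the normalized similarity threshold.\nretriever.k = 2\nretriever.relevancy_threshold = 0.3\nretriever.get_relevant_documents(\"foo\")", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/knn.html"}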
+{"id": "0ee80f4e216a-0", "text": ".ipynb\n.pdf\nZep\n Contents \nRetriever Example\nInitialize the Zep Chat Message History Class and add a chat message history to the memory store\nUse the Zep Retriever to vector search over the Zep memory\nZep#\nZep - A long-term memory store for LLM applications.\nMore on Zep:\nZep stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs.\nKey Features:\nLong-term memory persistence, with access to historical messages irrespective of your summarization strategy.\nAuto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.\nVector search over memories, with messages automatically embedded on creation.\nAuto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.\nPython and JavaScript SDKs.\nZep\u2019s Go Extractor model is easily extensible, with a simple, clean interface available to build new enrichment functionality, such as summarizers, entity extractors, embedders, and more.\nZep project: getzep/zep\nRetriever Example#\nThis notebook demonstrates how to search historical chat message histories using the Zep Long-term Memory Store.\nWe\u2019ll demonstrate:\nAdding conversation history to the Zep memory store.\nVector search over the conversation history.\nfrom langchain.memory.chat_message_histories import ZepChatMessageHistory\nfrom langchain.schema import HumanMessage, AIMessage\nfrom uuid import uuid4\n# Set this to your Zep server URL\nZEP_API_URL = \"http://localhost:8000\"\nInitialize the Zep Chat Message History Class and add a chat message history to the memory store#", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/zep_memorystore.html"}
+{"id": "0ee80f4e216a-1", "text": "Initialize the Zep Chat Message History Class and add a chat message history to the memory store#\nNOTE: Unlike other Retrievers, the content returned by the Zep Retriever is session/user specific. A session_id is required when instantiating the Retriever.\nsession_id = str(uuid4()) # This is a unique identifier for the user/session\n# Set up Zep Chat History. We'll use this to add chat histories to the memory store\nzep_chat_history = ZepChatMessageHistory(\n session_id=session_id,\n url=ZEP_API_URL,\n)\n# Preload some messages into the memory. The default message window is 12 messages. We want to push beyond this to demonstrate auto-summarization.\ntest_history = [\n {\"role\": \"human\", \"content\": \"Who was Octavia Butler?\"},\n {\n \"role\": \"ai\",\n \"content\": (\n \"Octavia Estelle Butler (June 22, 1947 \u2013 February 24, 2006) was an American\"\n \" science fiction author.\"\n ),\n },\n {\"role\": \"human\", \"content\": \"Which books of hers were made into movies?\"},\n {\n \"role\": \"ai\",\n \"content\": (\n \"The most well-known adaptation of Octavia Butler's work is the FX series\"\n \" Kindred, based on her novel of the same name.\"\n ),\n },\n {\"role\": \"human\", \"content\": \"Who were her contemporaries?\"},\n {\n \"role\": \"ai\",\n \"content\": (\n \"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R.\"\n \" Delany, and Joanna Russ.\"\n ),\n },", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/zep_memorystore.html"}
+{"id": "0ee80f4e216a-2", "text": "\" Delany, and Joanna Russ.\"\n ),\n },\n {\"role\": \"human\", \"content\": \"What awards did she win?\"},\n {\n \"role\": \"ai\",\n \"content\": (\n \"Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur\"\n \" Fellowship.\"\n ),\n },\n {\n \"role\": \"human\",\n \"content\": \"Which other women sci-fi writers might I want to read?\",\n },\n {\n \"role\": \"ai\",\n \"content\": \"You might want to read Ursula K. Le Guin or Joanna Russ.\",\n },\n {\n \"role\": \"human\",\n \"content\": (\n \"Write a short synopsis of Butler's book, Parable of the Sower. What is it\"\n \" about?\"\n ),\n },\n {\n \"role\": \"ai\",\n \"content\": (\n \"Parable of the Sower is a science fiction novel by Octavia Butler,\"\n \" published in 1993. It follows the story of Lauren Olamina, a young woman\"\n \" living in a dystopian future where society has collapsed due to\"\n \" environmental disasters, poverty, and violence.\"\n ),\n },\n]\nfor msg in test_history:\n zep_chat_history.append(\n HumanMessage(content=msg[\"content\"])\n if msg[\"role\"] == \"human\"\n else AIMessage(content=msg[\"content\"])\n )\nUse the Zep Retriever to vector search over the Zep memory#\nZep provides native vector search over historical conversation memory. Embedding happens automatically.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/zep_memorystore.html"}
+{"id": "0ee80f4e216a-3", "text": "Zep provides native vector search over historical conversation memory. Embedding happens automatically.\nNOTE: Embedding of messages occurs asynchronously, so the first query may not return results. Subsequent queries will return results as the embeddings are generated.\nfrom langchain.retrievers import ZepRetriever\nzep_retriever = ZepRetriever(\n session_id=session_id, # Ensure that you provide the session_id when instantiating the Retriever\n url=ZEP_API_URL,\n top_k=5,\n)\nawait zep_retriever.aget_relevant_documents(\"Who wrote Parable of the Sower?\")\n[Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759001673780126, 'uuid': '3a82a02f-056e-4c6a-b960-67ebdf3b2b93', 'created_at': '2023-05-25T15:03:30.2041Z', 'role': 'human', 'token_count': 8}),\n Document(page_content=\"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\", metadata={'score': 0.7602262941130749, 'uuid': 'a2fc9c21-0897-46c8-bef7-6f5c0f71b04a', 'created_at': '2023-05-25T15:03:30.248065Z', 'role': 'ai', 'token_count': 27}),", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/zep_memorystore.html"}
+{"id": "0ee80f4e216a-4", "text": "Document(page_content='Who were her contemporaries?', metadata={'score': 0.757553366415519, 'uuid': '41f9c41a-a205-41e1-b48b-a0a4cd943fc8', 'created_at': '2023-05-25T15:03:30.243995Z', 'role': 'human', 'token_count': 8}),\n Document(page_content='Octavia Estelle Butler (June 22, 1947 \u2013 February 24, 2006) was an American science fiction author.', metadata={'score': 0.7546211059317948, 'uuid': '34678311-0098-4f1a-8fd4-5615ac692deb', 'created_at': '2023-05-25T15:03:30.231427Z', 'role': 'ai', 'token_count': 31}),\n Document(page_content='Which books of hers were made into movies?', metadata={'score': 0.7496714959247069, 'uuid': '18046c3a-9666-4d3e-b4f0-43d1394732b7', 'created_at': '2023-05-25T15:03:30.236837Z', 'role': 'human', 'token_count': 11})]\nWe can also use the Zep sync API to retrieve results:\nzep_retriever.get_relevant_documents(\"Who wrote Parable of the Sower?\")", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/zep_memorystore.html"}
+{"id": "0ee80f4e216a-5", "text": "[Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', metadata={'score': 0.8897321402776546, 'uuid': '1c09603a-52c1-40d7-9d69-29f26256029c', 'created_at': '2023-05-25T15:03:30.268257Z', 'role': 'ai', 'token_count': 56}),\n Document(page_content=\"Write a short synopsis of Butler's book, Parable of the Sower. What is it about?\", metadata={'score': 0.8857628682610436, 'uuid': 'f6706e8c-6c91-452f-8c1b-9559fd924657', 'created_at': '2023-05-25T15:03:30.265302Z', 'role': 'human', 'token_count': 23}),\n Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759670375149477, 'uuid': '3a82a02f-056e-4c6a-b960-67ebdf3b2b93', 'created_at': '2023-05-25T15:03:30.2041Z', 'role': 'human', 'token_count': 8}),", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/zep_memorystore.html"}
+{"id": "0ee80f4e216a-6", "text": "Document(page_content=\"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\", metadata={'score': 0.7602854653476563, 'uuid': 'a2fc9c21-0897-46c8-bef7-6f5c0f71b04a', 'created_at': '2023-05-25T15:03:30.248065Z', 'role': 'ai', 'token_count': 27}),\n Document(page_content='You might want to read Ursula K. Le Guin or Joanna Russ.', metadata={'score': 0.7595293992240313, 'uuid': 'f22f2498-6118-4c74-8718-aa89ccd7e3d6', 'created_at': '2023-05-25T15:03:30.261198Z', 'role': 'ai', 'token_count': 18})]", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/zep_memorystore.html"}
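+{"id": "0ee80f4e216a-7", "text": "Each returned Document carries its match score in the metadata, so results can also be post-filtered client-side. A minimal sketch reusing the zep_retriever from above (the 0.8 cutoff is an arbitrary illustration):\ndocs = zep_retriever.get_relevant_documents(\"Who wrote Parable of the Sower?\")\n# Keep only strong matches.\nstrong = [d for d in docs if d.metadata.get(\"score\", 0) > 0.8]\nfor d in strong:\n    print(d.metadata[\"score\"], d.page_content)", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/zep_memorystore.html"}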
+{"id": "ba56fe40175a-0", "text": ".ipynb\n.pdf\nSVM\n Contents \nCreate New Retriever with Texts\nUse Retriever\nSVM#\nSupport vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection.\nThis notebook goes over how to use a retriever that under the hood uses an SVM, using the scikit-learn package.\nLargely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb\n#!pip install scikit-learn\n#!pip install lark\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nfrom langchain.retrievers import SVMRetriever\nfrom langchain.embeddings import OpenAIEmbeddings\nCreate New Retriever with Texts#\nretriever = SVMRetriever.from_texts([\"foo\", \"bar\", \"world\", \"hello\", \"foo bar\"], OpenAIEmbeddings())\nUse Retriever#\nWe can now use the retriever!\nresult = retriever.get_relevant_documents(\"foo\")\nresult\n[Document(page_content='foo', metadata={}),\n Document(page_content='foo bar', metadata={}),\n Document(page_content='hello', metadata={}),\n Document(page_content='world', metadata={})]", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/svm.html"}
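+{"id": "ba56fe40175a-1", "text": "SVMRetriever.from_texts accepts any list of strings, so it also works over a split document. A minimal sketch (the file path is hypothetical):\nfrom langchain.text_splitter import CharacterTextSplitter\n# Split a local text file into chunks and index them with the SVM retriever.\nwith open(\"state_of_the_union.txt\") as f:\n    chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_text(f.read())\nretriever = SVMRetriever.from_texts(chunks, OpenAIEmbeddings())\nretriever.get_relevant_documents(\"What did the president say about Ketanji Brown Jackson?\")", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/svm.html"}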
+{"id": "ee676f9c3de6-0", "text": ".ipynb\n.pdf\nMetal\n Contents \nIngest Documents\nQuery\nMetal#\nMetal is a managed service for ML Embeddings.\nThis notebook shows how to use Metal\u2019s retriever.\nFirst, you will need to sign up for Metal and get an API key. You can do so here\n# !pip install metal_sdk\nfrom metal_sdk.metal import Metal\nAPI_KEY = \"\"\nCLIENT_ID = \"\"\nINDEX_ID = \"\"\nmetal = Metal(API_KEY, CLIENT_ID, INDEX_ID);\nIngest Documents#\nYou only need to do this if you haven\u2019t already set up an index\nmetal.index( {\"text\": \"foo1\"})\nmetal.index( {\"text\": \"foo\"})\n{'data': {'id': '642739aa7559b026b4430e42',\n 'text': 'foo',\n 'createdAt': '2023-03-31T19:51:06.748Z'}}\nQuery#\nNow that our index is set up, we can set up a retriever and start querying it.\nfrom langchain.retrievers import MetalRetriever\nretriever = MetalRetriever(metal, params={\"limit\": 2})\nretriever.get_relevant_documents(\"foo1\")\n[Document(page_content='foo1', metadata={'dist': '1.19209289551e-07', 'id': '642739a17559b026b4430e40', 'createdAt': '2023-03-31T19:50:57.853Z'}),\n Document(page_content='foo1', metadata={'dist': '4.05311584473e-06', 'id': '642738f67559b026b4430e3c', 'createdAt': '2023-03-31T19:48:06.769Z'})]", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/metal.html"}
+{"id": "ddc054d924da-0", "text": ".ipynb\n.pdf\nAzure Cognitive Search\n Contents \nSet up Azure Cognitive Search\nUsing the Azure Cognitive Search Retriever\nAzure Cognitive Search#\nAzure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.\nSearch is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you\u2019ll work with the following capabilities:\nA search engine for full text search over a search index containing user-owned content\nRich indexing, with lexical analysis and optional AI enrichment for content extraction and transformation\nRich query syntax for text search, fuzzy search, autocomplete, geo-search and more\nProgrammability through REST APIs and client libraries in Azure SDKs\nAzure integration at the data layer, machine learning layer, and AI (Cognitive Services)\nThis notebook shows how to use Azure Cognitive Search (ACS) within LangChain.\nSet up Azure Cognitive Search#\nTo set up ACS, please follow the instructions here.\nPlease note\nthe name of your ACS service,\nthe name of your ACS index,\nyour API key.\nYour API key can be either an Admin or a Query key, but as we only read data it is recommended to use a Query key.\nUsing the Azure Cognitive Search Retriever#\nimport os\nfrom langchain.retrievers import AzureCognitiveSearchRetriever\nSet Service Name, Index Name and API key as environment variables (alternatively, you can pass them as arguments to AzureCognitiveSearchRetriever).\nos.environ[\"AZURE_COGNITIVE_SEARCH_SERVICE_NAME\"] = \"\"\nos.environ[\"AZURE_COGNITIVE_SEARCH_INDEX_NAME\"] = \"\"", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/azure_cognitive_search.html"}
+{"id": "ddc054d924da-1", "text": "os.environ[\"AZURE_COGNITIVE_SEARCH_API_KEY\"] = \"\"\nCreate the Retriever\nretriever = AzureCognitiveSearchRetriever(content_key=\"content\")\nNow you can use it to retrieve documents from Azure Cognitive Search\nretriever.get_relevant_documents(\"what is langchain\")", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/azure_cognitive_search.html"}
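+{"id": "ddc054d924da-2", "text": "Alternatively, as noted above, the same values can be passed as arguments instead of environment variables. A minimal sketch (the service_name, index_name, api_key and top_k field names mirror the environment variables and are an assumption about this version):\nretriever = AzureCognitiveSearchRetriever(\n    service_name=\"<your-service>\",\n    index_name=\"<your-index>\",\n    api_key=\"<your-query-key>\",\n    content_key=\"content\",\n    top_k=3,  # number of results to return\n)\nretriever.get_relevant_documents(\"what is langchain\")", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/azure_cognitive_search.html"}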
+{"id": "c4d79f4fb09c-0", "text": ".ipynb\n.pdf\nSelf-querying with Qdrant\n Contents \nCreating a Qdrant vectorstore\nCreating our self-querying retriever\nTesting it out\nFilter k\nSelf-querying with Qdrant#\nQdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support, which makes it useful for all kinds of neural-network or semantic-based matching, faceted search, and other applications.\nIn the notebook we\u2019ll demo the SelfQueryRetriever wrapped around a Qdrant vector store.\nCreating a Qdrant vectorstore#\nFirst we\u2019ll want to create a Qdrant VectorStore and seed it with some data. We\u2019ve created a small demo set of documents that contain summaries of movies.\nNOTE: The self-query retriever requires you to have lark installed (pip install lark). We also need the qdrant-client package.\n#!pip install lark qdrant-client\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\n# import os\n# import getpass\n# os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nfrom langchain.schema import Document\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.vectorstores import Qdrant\nembeddings = OpenAIEmbeddings()\ndocs = [\n Document(page_content=\"A bunch of scientists bring back dinosaurs and mayhem breaks loose\", metadata={\"year\": 1993, \"rating\": 7.7, \"genre\": \"science fiction\"}),\n Document(page_content=\"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...\", metadata={\"year\": 2010, \"director\": \"Christopher Nolan\", \"rating\": 8.2}),", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/qdrant_self_query.html"}
+{"id": "c4d79f4fb09c-1", "text": "Document(page_content=\"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea\", metadata={\"year\": 2006, \"director\": \"Satoshi Kon\", \"rating\": 8.6}),\n Document(page_content=\"A bunch of normal-sized women are supremely wholesome and some men pine after them\", metadata={\"year\": 2019, \"director\": \"Greta Gerwig\", \"rating\": 8.3}),\n Document(page_content=\"Toys come alive and have a blast doing so\", metadata={\"year\": 1995, \"genre\": \"animated\"}),\n Document(page_content=\"Three men walk into the Zone, three men walk out of the Zone\", metadata={\"year\": 1979, \"rating\": 9.9, \"director\": \"Andrei Tarkovsky\", \"genre\": \"science fiction\"})\n]\nvectorstore = Qdrant.from_documents(\n docs, \n embeddings, \n location=\":memory:\", # Local mode with in-memory storage only\n collection_name=\"my_documents\",\n)", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/qdrant_self_query.html"}
+{"id": "c4d79f4fb09c-2", "text": "type=\"integer\", \n ),\n AttributeInfo(\n name=\"director\",\n description=\"The name of the movie director\", \n type=\"string\", \n ),\n AttributeInfo(\n name=\"rating\",\n description=\"A 1-10 rating for the movie\",\n type=\"float\"\n ),\n]\ndocument_content_description = \"Brief summary of a movie\"\nllm = OpenAI(temperature=0)\nretriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)\nTesting it out#\nAnd now we can try actually using our retriever!\n# This example only specifies a relevant query\nretriever.get_relevant_documents(\"What are some movies about dinosaurs\")\nquery='dinosaur' filter=None limit=None\n[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),\n Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),\n Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}),\n Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]\n# This example only specifies a filter\nretriever.get_relevant_documents(\"I want to watch a movie rated higher than 8.5\")", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/qdrant_self_query.html"}
+{"id": "c4d79f4fb09c-3", "text": "query=' ' filter=Comparison(comparator=, attribute='rating', value=8.5) limit=None\n[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}),\n Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]\n# This example specifies a query and a filter\nretriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")\nquery='women' filter=Comparison(comparator=, attribute='director', value='Greta Gerwig') limit=None\n[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]\n# This example specifies a composite filter\nretriever.get_relevant_documents(\"What's a highly rated (above 8.5) science fiction film?\")\nquery=' ' filter=Operation(operator=, arguments=[Comparison(comparator=, attribute='rating', value=8.5), Comparison(comparator=, attribute='genre', value='science fiction')]) limit=None", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/qdrant_self_query.html"}
+{"id": "c4d79f4fb09c-4", "text": "[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]\n# This example specifies a query and composite filter\nretriever.get_relevant_documents(\"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated\")\nquery='toys' filter=Operation(operator=, arguments=[Comparison(comparator=, attribute='year', value=1990), Comparison(comparator=, attribute='year', value=2005), Comparison(comparator=, attribute='genre', value='animated')]) limit=None\n[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]\nFilter k#\nWe can also use the self query retriever to specify k: the number of documents to fetch.\nWe can do this by passing enable_limit=True to the constructor.\nretriever = SelfQueryRetriever.from_llm(\n llm, \n vectorstore, \n document_content_description, \n metadata_field_info, \n enable_limit=True,\n verbose=True\n)\n# This example only specifies a relevant query\nretriever.get_relevant_documents(\"what are two movies about dinosaurs\")\nquery='dinosaur' filter=None limit=2\n[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/qdrant_self_query.html"}
+{"id": "c4d79f4fb09c-5", "text": "Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]\nprevious\nPubMed Retriever\nnext\nSelf-querying\n Contents\n \nCreating a Qdrant vectorstore\nCreating our self-querying retriever\nTesting it out\nFilter k\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/qdrant_self_query.html"}
+{"id": "b96d7d3dd3ed-0", "text": ".ipynb\n.pdf\nLOTR (Merger Retriever)\n Contents \nRemove redundant results from the merged retrievers.\nLOTR (Merger Retriever)#\nLord of the Retrievers, also known as MergerRetriever, takes a list of retrievers as input and merges the results of their get_relevant_documents() methods into a single list. The merged results will be a list of documents that are relevant to the query and that have been ranked by the different retrievers.\nThe MergerRetriever class can be used to improve the accuracy of document retrieval in a number of ways. First, it can combine the results of multiple retrievers, which can help to reduce the risk of bias in the results. Second, it can rank the results of the different retrievers, which can help to ensure that the most relevant documents are returned first.\nimport os\nimport chromadb\nfrom langchain.retrievers.merger_retriever import MergerRetriever\nfrom langchain.vectorstores import Chroma\nfrom langchain.embeddings import HuggingFaceEmbeddings\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.document_transformers import EmbeddingsRedundantFilter\nfrom langchain.retrievers.document_compressors import DocumentCompressorPipeline\nfrom langchain.retrievers import ContextualCompressionRetriever\n# Get 3 diff embeddings.\nall_mini = HuggingFaceEmbeddings(model_name=\"all-MiniLM-L6-v2\")\nmulti_qa_mini = HuggingFaceEmbeddings(model_name=\"multi-qa-MiniLM-L6-dot-v1\")\nfilter_embeddings = OpenAIEmbeddings()\nABS_PATH = os.path.dirname(os.path.abspath(__file__))\nDB_DIR = os.path.join(ABS_PATH, \"db\")\n# Instantiate 2 diff cromadb indexs, each one with a diff embedding.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/merger_retriever.html"}
+{"id": "b96d7d3dd3ed-1", "text": "# Instantiate 2 diff cromadb indexs, each one with a diff embedding.\nclient_settings = chromadb.config.Settings(\n chroma_db_impl=\"duckdb+parquet\",\n persist_directory=DB_DIR,\n anonymized_telemetry=False,\n)\ndb_all = Chroma(\n collection_name=\"project_store_all\",\n persist_directory=DB_DIR,\n client_settings=client_settings,\n embedding_function=all_mini,\n)\ndb_multi_qa = Chroma(\n collection_name=\"project_store_multi\",\n persist_directory=DB_DIR,\n client_settings=client_settings,\n embedding_function=multi_qa_mini,\n)\n# Define 2 diff retrievers with 2 diff embeddings and diff search type.\nretriever_all = db_all.as_retriever(\n search_type=\"similarity\", search_kwargs={\"k\": 5, \"include_metadata\": True}\n)\nretriever_multi_qa = db_multi_qa.as_retriever(\n search_type=\"mmr\", search_kwargs={\"k\": 5, \"include_metadata\": True}\n)\n# The Lord of the Retrievers will hold the ouput of boths retrievers and can be used as any other \n# retriever on different types of chains.\nlotr = MergerRetriever(retrievers=[retriever_all, retriever_multi_qa])\nRemove redundant results from the merged retrievers.#\n# We can remove redundant results from both retrievers using yet another embedding. \n# Using multiples embeddings in diff steps could help reduce biases.\nfilter = EmbeddingsRedundantFilter(embeddings=filter_embeddings)\npipeline = DocumentCompressorPipeline(transformers=[filter])\ncompression_retriever = ContextualCompressionRetriever(\n base_compressor=pipeline, base_retriever=lotr\n)\nprevious", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/merger_retriever.html"}
+{"id": "b96d7d3dd3ed-2", "text": "base_compressor=pipeline, base_retriever=lotr\n)\nprevious\nkNN\nnext\nMetal\n Contents\n \nRemove redundant results from the merged retrievers.\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/merger_retriever.html"}
+{"id": "de2de9353f7f-0", "text": ".ipynb\n.pdf\nChatGPT Plugin\n Contents \nUsing the ChatGPT Retriever Plugin\nChatGPT Plugin#\nOpenAI plugins connect ChatGPT to third-party applications. These plugins enable ChatGPT to interact with APIs defined by developers, enhancing ChatGPT\u2019s capabilities and allowing it to perform a wide range of actions.\nPlugins can allow ChatGPT to do things like:\nRetrieve real-time information; e.g., sports scores, stock prices, the latest news, etc.\nRetrieve knowledge-base information; e.g., company docs, personal notes, etc.\nPerform actions on behalf of the user; e.g., booking a flight, ordering food, etc.\nThis notebook shows how to use the ChatGPT Retriever Plugin within LangChain.\n# STEP 1: Load\n# Load documents using LangChain's DocumentLoaders\n# This is from https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/csv.html\nfrom langchain.document_loaders.csv_loader import CSVLoader\nloader = CSVLoader(file_path='../../document_loaders/examples/example_data/mlb_teams_2012.csv')\ndata = loader.load()\n# STEP 2: Convert\n# Convert Document to format expected by https://github.com/openai/chatgpt-retrieval-plugin\nfrom typing import List\nfrom langchain.docstore.document import Document\nimport json\ndef write_json(path: str, documents: List[Document])-> None:\n results = [{\"text\": doc.page_content} for doc in documents]\n with open(path, \"w\") as f:\n json.dump(results, f, indent=2)\nwrite_json(\"foo.json\", data)\n# STEP 3: Use\n# Ingest this as you would any other json file in https://github.com/openai/chatgpt-retrieval-plugin/tree/main/scripts/process_json", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chatgpt-plugin.html"}
+{"id": "de2de9353f7f-1", "text": "Using the ChatGPT Retriever Plugin#\nOkay, so we\u2019ve created the ChatGPT Retriever Plugin, but how do we actually use it?\nThe below code walks through how to do that.\nWe want to use ChatGPTPluginRetriever so we have to get the OpenAI API Key.\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nfrom langchain.retrievers import ChatGPTPluginRetriever\nretriever = ChatGPTPluginRetriever(url=\"http://0.0.0.0:8000\", bearer_token=\"foo\")\nretriever.get_relevant_documents(\"alice's phone number\")\n[Document(page_content=\"This is Alice's phone number: 123-456-7890\", lookup_str='', metadata={'id': '456_0', 'metadata': {'source': 'email', 'source_id': '567', 'url': None, 'created_at': '1609592400.0', 'author': 'Alice', 'document_id': '456'}, 'embedding': None, 'score': 0.925571561}, lookup_index=0),\n Document(page_content='This is a document about something', lookup_str='', metadata={'id': '123_0', 'metadata': {'source': 'file', 'source_id': 'https://example.com/doc1', 'url': 'https://example.com/doc1', 'created_at': '1609502400.0', 'author': 'Alice', 'document_id': '123'}, 'embedding': None, 'score': 0.6987589}, lookup_index=0),", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chatgpt-plugin.html"}
+{"id": "de2de9353f7f-2", "text": "Document(page_content='Team: Angels \"Payroll (millions)\": 154.49 \"Wins\": 89', lookup_str='', metadata={'id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631_0', 'metadata': {'source': None, 'source_id': None, 'url': None, 'created_at': None, 'author': None, 'document_id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631'}, 'embedding': None, 'score': 0.697888613}, lookup_index=0)]\nprevious\nAzure Cognitive Search\nnext\nSelf-querying with Chroma\n Contents\n \nUsing the ChatGPT Retriever Plugin\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chatgpt-plugin.html"}
+{"id": "701cc57412b8-0", "text": ".ipynb\n.pdf\nVespa\nVespa#\nVespa is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query.\nThis notebook shows how to use Vespa.ai as a LangChain retriever.\nIn order to create a retriever, we use pyvespa to\ncreate a connection a Vespa service.\n#!pip install pyvespa\nfrom vespa.application import Vespa\nvespa_app = Vespa(url=\"https://doc-search.vespa.oath.cloud\")\nThis creates a connection to a Vespa service, here the Vespa documentation search service.\nUsing pyvespa package, you can also connect to a\nVespa Cloud instance\nor a local\nDocker instance.\nAfter connecting to the service, you can set up the retriever:\nfrom langchain.retrievers.vespa_retriever import VespaRetriever\nvespa_query_body = {\n \"yql\": \"select content from paragraph where userQuery()\",\n \"hits\": 5,\n \"ranking\": \"documentation\",\n \"locale\": \"en-us\"\n}\nvespa_content_field = \"content\"\nretriever = VespaRetriever(vespa_app, vespa_query_body, vespa_content_field)\nThis sets up a LangChain retriever that fetches documents from the Vespa application.\nHere, up to 5 results are retrieved from the content field in the paragraph document type,\nusing doumentation as the ranking method. The userQuery() is replaced with the actual query\npassed from LangChain.\nPlease refer to the pyvespa documentation\nfor more information.\nNow you can return the results and continue using the results in LangChain.\nretriever.get_relevant_documents(\"what is vespa?\")", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/vespa.html"}
+{"id": "701cc57412b8-1", "text": "retriever.get_relevant_documents(\"what is vespa?\")\nprevious\nVectorStore\nnext\nWeaviate Hybrid Search\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/vespa.html"}
+{"id": "a1e52cf4c921-0", "text": ".ipynb\n.pdf\nSelf-querying with Weaviate\n Contents \nCreating a Weaviate vectorstore\nCreating our self-querying retriever\nTesting it out\nFilter k\nSelf-querying with Weaviate#\nCreating a Weaviate vectorstore#\nFirst we\u2019ll want to create a Weaviate VectorStore and seed it with some data. We\u2019ve created a small demo set of documents that contain summaries of movies.\nNOTE: The self-query retriever requires you to have lark installed (pip install lark). We also need the weaviate-client package.\n#!pip install lark weaviate-client\nfrom langchain.schema import Document\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.vectorstores import Weaviate\nimport os\nembeddings = OpenAIEmbeddings()\ndocs = [\n Document(page_content=\"A bunch of scientists bring back dinosaurs and mayhem breaks loose\", metadata={\"year\": 1993, \"rating\": 7.7, \"genre\": \"science fiction\"}),\n Document(page_content=\"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...\", metadata={\"year\": 2010, \"director\": \"Christopher Nolan\", \"rating\": 8.2}),\n Document(page_content=\"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea\", metadata={\"year\": 2006, \"director\": \"Satoshi Kon\", \"rating\": 8.6}),\n Document(page_content=\"A bunch of normal-sized women are supremely wholesome and some men pine after them\", metadata={\"year\": 2019, \"director\": \"Greta Gerwig\", \"rating\": 8.3}),", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate_self_query.html"}
+{"id": "a1e52cf4c921-1", "text": "Document(page_content=\"Toys come alive and have a blast doing so\", metadata={\"year\": 1995, \"genre\": \"animated\"}),\n Document(page_content=\"Three men walk into the Zone, three men walk out of the Zone\", metadata={\"year\": 1979, \"rating\": 9.9, \"director\": \"Andrei Tarkovsky\", \"genre\": \"science fiction\", \"rating\": 9.9})\n]\nvectorstore = Weaviate.from_documents(\n docs, embeddings, weaviate_url=\"http://127.0.0.1:8080\"\n)\nCreating our self-querying retriever#\nNow we can instantiate our retriever. To do this we\u2019ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.\nfrom langchain.llms import OpenAI\nfrom langchain.retrievers.self_query.base import SelfQueryRetriever\nfrom langchain.chains.query_constructor.base import AttributeInfo\nmetadata_field_info=[\n AttributeInfo(\n name=\"genre\",\n description=\"The genre of the movie\", \n type=\"string or list[string]\", \n ),\n AttributeInfo(\n name=\"year\",\n description=\"The year the movie was released\", \n type=\"integer\", \n ),\n AttributeInfo(\n name=\"director\",\n description=\"The name of the movie director\", \n type=\"string\", \n ),\n AttributeInfo(\n name=\"rating\",\n description=\"A 1-10 rating for the movie\",\n type=\"float\"\n ),\n]\ndocument_content_description = \"Brief summary of a movie\"\nllm = OpenAI(temperature=0)", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate_self_query.html"}
+{"id": "a1e52cf4c921-2", "text": "llm = OpenAI(temperature=0)\nretriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)\nTesting it out#\nAnd now we can try actually using our retriever!\n# This example only specifies a relevant query\nretriever.get_relevant_documents(\"What are some movies about dinosaurs\")\nquery='dinosaur' filter=None limit=None\n[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993}),\n Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'rating': None, 'year': 1995}),\n Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'genre': 'science fiction', 'rating': 9.9, 'year': 1979}),\n Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'genre': None, 'rating': 8.6, 'year': 2006})]\n# This example specifies a query and a filter\nretriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")\nquery='women' filter=Comparison(comparator=, attribute='director', value='Greta Gerwig') limit=None\n[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'genre': None, 'rating': 8.3, 'year': 2019})]\nFilter k#\nWe can also use the self query retriever to specify k: the number of documents to fetch.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate_self_query.html"}
+{"id": "a1e52cf4c921-3", "text": "We can do this by passing enable_limit=True to the constructor.\nretriever = SelfQueryRetriever.from_llm(\n llm, \n vectorstore, \n document_content_description, \n metadata_field_info, \n enable_limit=True,\n verbose=True\n)\n# This example only specifies a relevant query\nretriever.get_relevant_documents(\"what are two movies about dinosaurs\")\nquery='dinosaur' filter=None limit=2\n[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993}),\n Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'rating': None, 'year': 1995})]\nprevious\nWeaviate Hybrid Search\nnext\nWikipedia\n Contents\n \nCreating a Weaviate vectorstore\nCreating our self-querying retriever\nTesting it out\nFilter k\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate_self_query.html"}
+{"id": "a834fcc40dd5-0", "text": ".ipynb\n.pdf\nWeaviate Hybrid Search\nWeaviate Hybrid Search#\nWeaviate is an open source vector database.\nHybrid search is a technique that combines multiple search algorithms to improve the accuracy and relevance of search results. It uses the best features of both keyword-based search algorithms with vector search techniques.\nThe Hybrid search in Weaviate uses sparse and dense vectors to represent the meaning and context of search queries and documents.\nThis notebook shows how to use Weaviate hybrid search as a LangChain retriever.\nSet up the retriever:\n#!pip install weaviate-client\nimport weaviate\nimport os\nWEAVIATE_URL = os.getenv(\"WEAVIATE_URL\")\nclient = weaviate.Client(\n url=WEAVIATE_URL,\n auth_client_secret=weaviate.AuthApiKey(api_key=os.getenv(\"WEAVIATE_API_KEY\")),\n additional_headers={\n \"X-Openai-Api-Key\": os.getenv(\"OPENAI_API_KEY\"),\n },\n)\n# client.schema.delete_all()\nfrom langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever\nfrom langchain.schema import Document\n/workspaces/langchain/langchain/vectorstores/analyticdb.py:20: MovedIn20Warning: The ``declarative_base()`` function is now available as sqlalchemy.orm.declarative_base(). (deprecated since: 2.0) (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)\n Base = declarative_base() # type: Any\nretriever = WeaviateHybridSearchRetriever(\n client, index_name=\"LangChain\", text_key=\"text\"\n)\nAdd some data:\ndocs = [\n Document(\n metadata={", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate-hybrid.html"}
+{"id": "a834fcc40dd5-1", "text": ")\nAdd some data:\ndocs = [\n Document(\n metadata={\n \"title\": \"Embracing The Future: AI Unveiled\",\n \"author\": \"Dr. Rebecca Simmons\",\n },\n page_content=\"A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.\",\n ),\n Document(\n metadata={\n \"title\": \"Symbiosis: Harmonizing Humans and AI\",\n \"author\": \"Prof. Jonathan K. Sterling\",\n },\n page_content=\"Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.\",\n ),\n Document(\n metadata={\"title\": \"AI: The Ethical Quandary\", \"author\": \"Dr. Rebecca Simmons\"},\n page_content=\"In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.\",\n ),\n Document(\n metadata={\n \"title\": \"Conscious Constructs: The Search for AI Sentience\",\n \"author\": \"Dr. Samuel Cortez\",\n },\n page_content=\"Dr. Cortez takes readers on a journey exploring the controversial topic of AI consciousness. The book provides compelling arguments for and against the possibility of true AI sentience.\",\n ),\n Document(\n metadata={\n \"title\": \"Invisible Routines: Hidden AI in Everyday Life\",\n \"author\": \"Prof. Jonathan K. Sterling\",\n },", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate-hybrid.html"}
+{"id": "a834fcc40dd5-2", "text": "\"author\": \"Prof. Jonathan K. Sterling\",\n },\n page_content=\"In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.\",\n ),\n]\nretriever.add_documents(docs)\n['eda16d7d-437d-4613-84ae-c2e38705ec7a',\n '04b501bf-192b-4e72-be77-2fbbe7e67ebf',\n '18a1acdb-23b7-4482-ab04-a6c2ed51de77',\n '88e82cc3-c020-4b5a-b3c6-ca7cf3fc6a04',\n 'f6abd9d5-32ed-46c4-bd08-f8d0f7c9fc95']\nDo a hybrid search:\nretriever.get_relevant_documents(\"the ethical implications of AI\")\n[Document(page_content='In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.', metadata={}),\n Document(page_content='A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.', metadata={}),\n Document(page_content=\"In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.\", metadata={}),", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate-hybrid.html"}
+{"id": "a834fcc40dd5-3", "text": "Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={})]\nDo a hybrid search with where filter:\nretriever.get_relevant_documents(\n \"AI integration in society\",\n where_filter={\n \"path\": [\"author\"],\n \"operator\": \"Equal\",\n \"valueString\": \"Prof. Jonathan K. Sterling\",\n },\n)\n[Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={}),\n Document(page_content=\"In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.\", metadata={})]\nprevious\nVespa\nnext\nSelf-querying with Weaviate\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate-hybrid.html"}
+{"id": "db0de848ad48-0", "text": ".ipynb\n.pdf\nArxiv\n Contents \nInstallation\nExamples\nRunning retriever\nQuestion Answering on facts\nArxiv#\narXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.\nThis notebook shows how to retrieve scientific articles from Arxiv.org into the Document format that is used downstream.\nInstallation#\nFirst, you need to install arxiv python package.\n#!pip install arxiv\nArxivRetriever has these arguments:\noptional load_max_docs: default=100. Use it to limit number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.\noptional load_all_available_meta: default=False. By default only the most important fields downloaded: Published (date when document was published/last updated), Title, Authors, Summary. If True, other fields also downloaded.\nget_relevant_documents() has one argument, query: free text which used to find documents in Arxiv.org\nExamples#\nRunning retriever#\nfrom langchain.retrievers import ArxivRetriever\nretriever = ArxivRetriever(load_max_docs=2)\ndocs = retriever.get_relevant_documents(query='1605.08386')\ndocs[0].metadata # meta-information of the Document\n{'Published': '2016-05-26',\n 'Title': 'Heat-bath random walks with Markov bases',\n 'Authors': 'Caprice Stanley, Tobias Windisch',", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/arxiv.html"}
+{"id": "db0de848ad48-1", "text": "'Authors': 'Caprice Stanley, Tobias Windisch',\n 'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\\nallowed moves of arbitrary length. We show that the diameter of these graphs on\\nfibers of a fixed integer matrix can be bounded from above by a constant. We\\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\\nalso state explicit conditions on the set of moves so that the heat-bath random\\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\\ndimension.'}\ndocs[0].page_content[:400] # a content of the Document \n'arXiv:1605.08386v1 [math.CO] 26 May 2016\\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\\nCAPRICE STANLEY AND TOBIAS WINDISCH\\nAbstract. Graphs on lattice points are studied whose edges come from a \ufb01nite set of\\nallowed moves of arbitrary length. We show that the diameter of these graphs on \ufb01bers of a\\n\ufb01xed integer matrix can be bounded from above by a constant. We then study the mixing\\nbehaviour of heat-b'\nQuestion Answering on facts#\n# get a token: https://platform.openai.com/account/api-keys\nfrom getpass import getpass\nOPENAI_API_KEY = getpass()\nimport os\nos.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.chains import ConversationalRetrievalChain\nmodel = ChatOpenAI(model_name='gpt-3.5-turbo') # switch to 'gpt-4'\nqa = ConversationalRetrievalChain.from_llm(model,retriever=retriever)\nquestions = [", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/arxiv.html"}
+{"id": "db0de848ad48-2", "text": "questions = [\n \"What are Heat-bath random walks with Markov base?\",\n \"What is the ImageBind model?\",\n \"How does Compositional Reasoning with Large Language Models works?\", \n] \nchat_history = []\nfor question in questions: \n result = qa({\"question\": question, \"chat_history\": chat_history})\n chat_history.append((question, result['answer']))\n print(f\"-> **Question**: {question} \\n\")\n print(f\"**Answer**: {result['answer']} \\n\")\n-> **Question**: What are Heat-bath random walks with Markov base? \n**Answer**: I'm not sure, as I don't have enough context to provide a definitive answer. The term \"Heat-bath random walks with Markov base\" is not mentioned in the given text. Could you provide more information or context about where you encountered this term? \n-> **Question**: What is the ImageBind model? \n**Answer**: ImageBind is an approach developed by Facebook AI Research to learn a joint embedding across six different modalities, including images, text, audio, depth, thermal, and IMU data. The approach uses the binding property of images to align each modality's embedding to image embeddings and achieve an emergent alignment across all modalities. This enables novel multimodal capabilities, including cross-modal retrieval, embedding-space arithmetic, and audio-to-image generation, among others. The approach sets a new state-of-the-art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Additionally, it shows strong few-shot recognition results and serves as a new way to evaluate vision models for visual and non-visual tasks. \n-> **Question**: How does Compositional Reasoning with Large Language Models works?", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/arxiv.html"}
+{"id": "db0de848ad48-3", "text": "-> **Question**: How does Compositional Reasoning with Large Language Models works? \n**Answer**: Compositional reasoning with large language models refers to the ability of these models to correctly identify and represent complex concepts by breaking them down into smaller, more basic parts and combining them in a structured way. This involves understanding the syntax and semantics of language and using that understanding to build up more complex meanings from simpler ones. \nIn the context of the paper \"Does CLIP Bind Concepts? Probing Compositionality in Large Image Models\", the authors focus specifically on the ability of a large pretrained vision and language model (CLIP) to encode compositional concepts and to bind variables in a structure-sensitive way. They examine CLIP's ability to compose concepts in a single-object setting, as well as in situations where concept binding is needed. \nThe authors situate their work within the tradition of research on compositional distributional semantics models (CDSMs), which seek to bridge the gap between distributional models and formal semantics by building architectures which operate over vectors yet still obey traditional theories of linguistic composition. They compare the performance of CLIP with several architectures from research on CDSMs to evaluate its ability to encode and reason about compositional concepts. \nquestions = [\n \"What are Heat-bath random walks with Markov base? Include references to answer.\",\n] \nchat_history = []\nfor question in questions: \n result = qa({\"question\": question, \"chat_history\": chat_history})\n chat_history.append((question, result['answer']))\n print(f\"-> **Question**: {question} \\n\")\n print(f\"**Answer**: {result['answer']} \\n\")\n-> **Question**: What are Heat-bath random walks with Markov base? Include references to answer.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/arxiv.html"}
+{"id": "db0de848ad48-4", "text": "**Answer**: Heat-bath random walks with Markov base (HB-MB) is a class of stochastic processes that have been studied in the field of statistical mechanics and condensed matter physics. In these processes, a particle moves in a lattice by making a transition to a neighboring site, which is chosen according to a probability distribution that depends on the energy of the particle and the energy of its surroundings.\nThe HB-MB process was introduced by Bortz, Kalos, and Lebowitz in 1975 as a way to simulate the dynamics of interacting particles in a lattice at thermal equilibrium. The method has been used to study a variety of physical phenomena, including phase transitions, critical behavior, and transport properties.\nReferences:\nBortz, A. B., Kalos, M. H., & Lebowitz, J. L. (1975). A new algorithm for Monte Carlo simulation of Ising spin systems. Journal of Computational Physics, 17(1), 10-18.\nBinder, K., & Heermann, D. W. (2010). Monte Carlo simulation in statistical physics: an introduction. Springer Science & Business Media. \nprevious\nRetrievers\nnext\nAWS Kendra\n Contents\n \nInstallation\nExamples\nRunning retriever\nQuestion Answering on facts\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/arxiv.html"}
+{"id": "e56c5e03b7c6-0", "text": ".ipynb\n.pdf\nAWS Kendra\n Contents \nUsing the AWS Kendra Index Retriever\nAWS Kendra#\nAWS Kendra is an intelligent search service provided by Amazon Web Services (AWS). It utilizes advanced natural language processing (NLP) and machine learning algorithms to enable powerful search capabilities across various data sources within an organization. Kendra is designed to help users find the information they need quickly and accurately, improving productivity and decision-making.\nWith Kendra, users can search across a wide range of content types, including documents, FAQs, knowledge bases, manuals, and websites. It supports multiple languages and can understand complex queries, synonyms, and contextual meanings to provide highly relevant search results.\nUsing the AWS Kendra Index Retriever#\n#!pip install boto3\nimport boto3\nfrom langchain.retrievers import AwsKendraIndexRetriever\nCreate New Retriever\nkclient = boto3.client('kendra', region_name=\"us-east-1\")\nretriever = AwsKendraIndexRetriever(\n kclient=kclient,\n kendraindex=\"kendraindex\",\n)\nNow you can use retrieved documents from AWS Kendra Index\nretriever.get_relevant_documents(\"what is langchain\")\nprevious\nArxiv\nnext\nAzure Cognitive Search\n Contents\n \nUsing the AWS Kendra Index Retriever\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/aws_kendra_index_retriever.html"}
+{"id": "d8e1480bac46-0", "text": ".ipynb\n.pdf\nTF-IDF\n Contents \nCreate New Retriever with Texts\nCreate a New Retriever with Documents\nUse Retriever\nTF-IDF#\nTF-IDF means term-frequency times inverse document-frequency.\nThis notebook goes over how to use a retriever that under the hood uses TF-IDF using scikit-learn package.\nFor more information on the details of TF-IDF see this blog post.\n# !pip install scikit-learn\nfrom langchain.retrievers import TFIDFRetriever\nCreate New Retriever with Texts#\nretriever = TFIDFRetriever.from_texts([\"foo\", \"bar\", \"world\", \"hello\", \"foo bar\"])\nCreate a New Retriever with Documents#\nYou can now create a new retriever with the documents you created.\nfrom langchain.schema import Document\nretriever = TFIDFRetriever.from_documents([Document(page_content=\"foo\"), Document(page_content=\"bar\"), Document(page_content=\"world\"), Document(page_content=\"hello\"), Document(page_content=\"foo bar\")])\nUse Retriever#\nWe can now use the retriever!\nresult = retriever.get_relevant_documents(\"foo\")\nresult\n[Document(page_content='foo', metadata={}),\n Document(page_content='foo bar', metadata={}),\n Document(page_content='hello', metadata={}),\n Document(page_content='world', metadata={})]\nprevious\nSVM\nnext\nTime Weighted VectorStore\n Contents\n \nCreate New Retriever with Texts\nCreate a New Retriever with Documents\nUse Retriever\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/tf_idf.html"}
+{"id": "e6dc0535abc1-0", "text": ".ipynb\n.pdf\nPubMed Retriever\nPubMed Retriever#\nThis notebook goes over how to use PubMed as a retriever\nPubMed\u00ae comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.\nfrom langchain.retrievers import PubMedRetriever\nretriever = PubMedRetriever()\nretriever.get_relevant_documents(\"chatgpt\")\n[Document(page_content='', metadata={'uid': '37268021', 'title': 'Dermatology in the wake of an AI revolution: who gets a say?', 'pub_date': '2023May31'}),\n Document(page_content='', metadata={'uid': '37267643', 'title': 'What is ChatGPT and what do we do with it? Implications of the age of AI for nursing and midwifery practice and education: An editorial.', 'pub_date': '2023May30'}),\n Document(page_content='The nursing field has undergone notable changes over time and is projected to undergo further modifications in the future, owing to the advent of sophisticated technologies and growing healthcare needs. The advent of ChatGPT, an AI-powered language model, is expected to exert a significant influence on the nursing profession, specifically in the domains of patient care and instruction. The present article delves into the ramifications of ChatGPT within the nursing domain and accentuates its capacity and constraints to transform the discipline.', metadata={'uid': '37266721', 'title': 'The Impact of ChatGPT on the Nursing Profession: Revolutionizing Patient Care and Education.', 'pub_date': '2023Jun02'})]", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/pubmed.html"}
+{"id": "e6dc0535abc1-1", "text": "previous\nPinecone Hybrid Search\nnext\nSelf-querying with Qdrant\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/pubmed.html"}
+{"id": "28f0f7081065-0", "text": ".ipynb\n.pdf\nCohere Reranker\n Contents \nSet up the base vector store retriever\nDoing reranking with CohereRerank\nCohere Reranker#\nCohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.\nThis notebook shows how to use Cohere\u2019s rerank endpoint in a retriever. This builds on top of ideas in the ContextualCompressionRetriever.\n#!pip install cohere\n#!pip install faiss\n# OR (depending on Python version)\n#!pip install faiss-cpu\n# get a new token: https://dashboard.cohere.ai/\nimport os\nimport getpass\nos.environ['COHERE_API_KEY'] = getpass.getpass('Cohere API Key:')\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\n# Helper function for printing docs\ndef pretty_print_docs(docs):\n print(f\"\\n{'-' * 100}\\n\".join([f\"Document {i+1}:\\n\\n\" + d.page_content for i, d in enumerate(docs)]))\nSet up the base vector store retriever#\nLet\u2019s start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs.\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.document_loaders import TextLoader\nfrom langchain.vectorstores import FAISS\ndocuments = TextLoader('../../../state_of_the_union.txt').load()\ntext_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)\ntexts = text_splitter.split_documents(documents)", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/cohere-reranker.html"}
+{"id": "28f0f7081065-1", "text": "texts = text_splitter.split_documents(documents)\nretriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever(search_kwargs={\"k\": 20})\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = retriever.get_relevant_documents(query)\npretty_print_docs(docs)\nDocument 1:\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\n----------------------------------------------------------------------------------------------------\nDocument 2:\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \nWhile it often appears that we never agree, that isn\u2019t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.\n----------------------------------------------------------------------------------------------------\nDocument 3:\nA former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she\u2019s been nominated, she\u2019s received a broad range of support\u2014from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\n----------------------------------------------------------------------------------------------------\nDocument 4:\nHe met the Ukrainian people. \nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/cohere-reranker.html"}
+{"id": "28f0f7081065-2", "text": "Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \nIn this struggle as President Zelenskyy said in his speech to the European Parliament \u201cLight will win over darkness.\u201d The Ukrainian Ambassador to the United States is here tonight.\n----------------------------------------------------------------------------------------------------\nDocument 5:\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \nI\u2019ve worked on these issues a long time. \nI know what works: Investing in crime preventionand community police officers who\u2019ll walk the beat, who\u2019ll know the neighborhood, and who can restore trust and safety. \nSo let\u2019s not abandon our streets. Or choose between safety and equal justice.\n----------------------------------------------------------------------------------------------------\nDocument 6:\nVice President Harris and I ran for office with a new economic vision for America. \nInvest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up \nand the middle out, not from the top down. \nBecause we know that when the middle class grows, the poor have a ladder up and the wealthy do very well. \nAmerica used to have the best roads, bridges, and airports on Earth. \nNow our infrastructure is ranked 13th in the world.\n----------------------------------------------------------------------------------------------------\nDocument 7:\nAnd tonight, I\u2019m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. \nBy the end of this year, the deficit will be down to less than half what it was before I took office. \nThe only president ever to cut the deficit by more than one trillion dollars in a single year. \nLowering your costs also means demanding more competition. \nI\u2019m a capitalist, but capitalism without competition isn\u2019t capitalism. \nIt\u2019s exploitation\u2014and it drives up prices.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/cohere-reranker.html"}
+{"id": "28f0f7081065-3", "text": "It\u2019s exploitation\u2014and it drives up prices.\n----------------------------------------------------------------------------------------------------\nDocument 8:\nFor the past 40 years we were told that if we gave tax breaks to those at the very top, the benefits would trickle down to everyone else. \nBut that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. \nVice President Harris and I ran for office with a new economic vision for America.\n----------------------------------------------------------------------------------------------------\nDocument 9:\nAll told, we created 369,000 new manufacturing jobs in America just last year. \nPowered by people I\u2019ve met like JoJo Burgess, from generations of union steelworkers from Pittsburgh, who\u2019s here with us tonight. \nAs Ohio Senator Sherrod Brown says, \u201cIt\u2019s time to bury the label \u201cRust Belt.\u201d \nIt\u2019s time. \nBut with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills.\n----------------------------------------------------------------------------------------------------\nDocument 10:\nI\u2019m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve. \nAnd fourth, let\u2019s end cancer as we know it. \nThis is personal to me and Jill, to Kamala, and to so many of you. \nCancer is the #2 cause of death in America\u2013second only to heart disease.\n----------------------------------------------------------------------------------------------------\nDocument 11:\nHe will never extinguish their love of freedom. He will never weaken the resolve of the free world. \nWe meet tonight in an America that has lived through two of the hardest years this nation has ever faced. \nThe pandemic has been punishing.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/cohere-reranker.html"}
+{"id": "28f0f7081065-4", "text": "The pandemic has been punishing. \nAnd so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. \nI understand.\n----------------------------------------------------------------------------------------------------\nDocument 12:\nMadam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \nLast year COVID-19 kept us apart. This year we are finally together again. \nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \nWith a duty to one another to the American people to the Constitution. \nAnd with an unwavering resolve that freedom will always triumph over tyranny.\n----------------------------------------------------------------------------------------------------\nDocument 13:\nI know. \nOne of those soldiers was my son Major Beau Biden. \nWe don\u2019t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. \nBut I\u2019m committed to finding out everything we can. \nCommitted to military families like Danielle Robinson from Ohio. \nThe widow of Sergeant First Class Heath Robinson. \nHe was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq.\n----------------------------------------------------------------------------------------------------\nDocument 14:\nAnd soon, we\u2019ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \nSo tonight I\u2019m offering a Unity Agenda for the Nation. Four big things we can do together. \nFirst, beat the opioid epidemic. \nThere is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery.\n----------------------------------------------------------------------------------------------------\nDocument 15:\nThird, support our veterans. \nVeterans are the best of us.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/cohere-reranker.html"}
+{"id": "28f0f7081065-5", "text": "Third, support our veterans. \nVeterans are the best of us. \nI\u2019ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home. \nMy administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. \nOur troops in Iraq and Afghanistan faced many dangers.\n----------------------------------------------------------------------------------------------------\nDocument 16:\nWhen we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we haven\u2019t done in a long time: build a better America. \nFor more than two years, COVID-19 has impacted every decision in our lives and the life of the nation. \nAnd I know you\u2019re tired, frustrated, and exhausted. \nBut I also know this.\n----------------------------------------------------------------------------------------------------\nDocument 17:\nNow is the hour. \nOur moment of responsibility. \nOur test of resolve and conscience, of history itself. \nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. \nWell I know this nation. \nWe will meet the test. \nTo protect freedom and liberty, to expand fairness and opportunity. \nWe will save democracy. \nAs hard as these times have been, I am more optimistic about America today than I have been my whole life.\n----------------------------------------------------------------------------------------------------\nDocument 18:\nHe didn\u2019t know how to stop fighting, and neither did she. \nThrough her pain she found purpose to demand we do better. \nTonight, Danielle\u2014we are. \nThe VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. \nAnd tonight, I\u2019m announcing we\u2019re expanding eligibility to veterans suffering from nine respiratory cancers.\n----------------------------------------------------------------------------------------------------\nDocument 19:", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/cohere-reranker.html"}
+{"id": "28f0f7081065-6", "text": "----------------------------------------------------------------------------------------------------\nDocument 19:\nI understand. \nI remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. \nThat\u2019s why one of the first things I did as President was fight to pass the American Rescue Plan. \nBecause people were hurting. We needed to act, and we did. \nFew pieces of legislation have done more in a critical moment in our history to lift us out of crisis.\n----------------------------------------------------------------------------------------------------\nDocument 20:\nSo let\u2019s not abandon our streets. Or choose between safety and equal justice. \nLet\u2019s come together to protect our communities, restore trust, and hold law enforcement accountable. \nThat\u2019s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.\nDoing reranking with CohereRerank#\nNow let\u2019s wrap our base retriever with a ContextualCompressionRetriever. We\u2019ll add an CohereRerank, uses the Cohere rerank endpoint to rerank the returned results.\nfrom langchain.llms import OpenAI\nfrom langchain.retrievers import ContextualCompressionRetriever\nfrom langchain.retrievers.document_compressors import CohereRerank\nllm = OpenAI(temperature=0)\ncompressor = CohereRerank()\ncompression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=retriever)\ncompressed_docs = compression_retriever.get_relevant_documents(\"What did the president say about Ketanji Jackson Brown\")\npretty_print_docs(compressed_docs)\nDocument 1:\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/cohere-reranker.html"}
+{"id": "28f0f7081065-7", "text": "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\n----------------------------------------------------------------------------------------------------\nDocument 2:\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \nI\u2019ve worked on these issues a long time. \nI know what works: Investing in crime preventionand community police officers who\u2019ll walk the beat, who\u2019ll know the neighborhood, and who can restore trust and safety. \nSo let\u2019s not abandon our streets. Or choose between safety and equal justice.\n----------------------------------------------------------------------------------------------------\nDocument 3:\nA former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she\u2019s been nominated, she\u2019s received a broad range of support\u2014from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\nYou can of course use this retriever within a QA pipeline\nfrom langchain.chains import RetrievalQA\nchain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), retriever=compression_retriever)\nchain({\"query\": query})\n{'query': 'What did the president say about Ketanji Brown Jackson',\n 'result': \" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she is a consensus builder who has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\"}\nprevious\nSelf-querying with Chroma\nnext\nContextual Compression\n Contents", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/cohere-reranker.html"}
+{"id": "28f0f7081065-8", "text": "previous\nSelf-querying with Chroma\nnext\nContextual Compression\n Contents\n \nSet up the base vector store retriever\nDoing reranking with CohereRerank\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/cohere-reranker.html"}
+{"id": "50dc7dbc9d7e-0", "text": ".ipynb\n.pdf\nGetting Started\n Contents \nAdd texts\nFrom Documents\nGetting Started#\nThis notebook showcases basic functionality related to VectorStores. A key part of working with vectorstores is creating the vector to put in them, which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the embedding notebook before diving into this.\nThis covers generic high level functionality related to all vector stores.\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import Chroma\nwith open('../../state_of_the_union.txt') as f:\n state_of_the_union = f.read()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ntexts = text_splitter.split_text(state_of_the_union)\nembeddings = OpenAIEmbeddings()\ndocsearch = Chroma.from_texts(texts, embeddings)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = docsearch.similarity_search(query)\nRunning Chroma using direct local API.\nUsing DuckDB in-memory for database. Data will be transient.\nprint(docs[0].page_content)\nIn state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \nWe cannot let this happen. \nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/getting_started.html"}
+{"id": "50dc7dbc9d7e-1", "text": "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nAdd texts#\nYou can easily add text to a vectorstore with the add_texts method. It will return a list of document IDs (in case you need to use them downstream).\ndocsearch.add_texts([\"Ankush went to Princeton\"])\n['a05e3d0c-ab40-11ed-a853-e65801318981']\nquery = \"Where did Ankush go to college?\"\ndocs = docsearch.similarity_search(query)\ndocs[0]\nDocument(page_content='Ankush went to Princeton', lookup_str='', metadata={}, lookup_index=0)\nFrom Documents#\nWe can also initialize a vectorstore from documents directly. This is useful when we use the method on the text splitter to get documents directly (handy when the original documents have associated metadata).\ndocuments = text_splitter.create_documents([state_of_the_union], metadatas=[{\"source\": \"State of the Union\"}])\ndocsearch = Chroma.from_documents(documents, embeddings)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = docsearch.similarity_search(query)\nRunning Chroma using direct local API.\nUsing DuckDB in-memory for database. Data will be transient.\nprint(docs[0].page_content)\nIn state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \nWe cannot let this happen.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/getting_started.html"}
+{"id": "50dc7dbc9d7e-2", "text": "We cannot let this happen. \nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nprevious\nVectorstores\nnext\nAnalyticDB\n Contents\n \nAdd texts\nFrom Documents\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/getting_started.html"}
+{"id": "8a148b41b975-0", "text": ".ipynb\n.pdf\nRedis\n Contents \nInstalling\nExample\nRedis as Retriever\nRedis#\nRedis (Remote Dictionary Server) is an in-memory data structure store, used as a distributed, in-memory key\u2013value database, cache and message broker, with optional durability.\nThis notebook shows how to use functionality related to the Redis vector database.\nInstalling#\n!pip install redis\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nExample#\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores.redis import Redis\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\nrds = Redis.from_documents(docs, embeddings, redis_url=\"redis://localhost:6379\", index_name='link')\nrds.index_name\n'link'\nquery = \"What did the president say about Ketanji Brown Jackson\"\nresults = rds.similarity_search(query)\nprint(results[0].page_content)\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/redis.html"}
+{"id": "8a148b41b975-1", "text": "Tonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nprint(rds.add_texts([\"Ankush went to Princeton\"]))\n['doc:link:d7d02e3faf1b40bbbe29a683ff75b280']\nquery = \"Princeton\"\nresults = rds.similarity_search(query)\nprint(results[0].page_content)\nAnkush went to Princeton\n# Load from existing index\nrds = Redis.from_existing_index(embeddings, redis_url=\"redis://localhost:6379\", index_name='link')\nquery = \"What did the president say about Ketanji Brown Jackson\"\nresults = rds.similarity_search(query)\nprint(results[0].page_content)\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/redis.html"}
+{"id": "8a148b41b975-2", "text": "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nRedis as Retriever#\nHere we go over different options for using the vector store as a retriever.\nThere are three different search methods we can use to do retrieval. By default, it will use semantic similarity.\nretriever = rds.as_retriever()\ndocs = retriever.get_relevant_documents(query)\nWe can also use similarity_limit as a search method. This is only return documents if they are similar enough\nretriever = rds.as_retriever(search_type=\"similarity_limit\")\n# Here we can see it doesn't return any results because there are no relevant documents\nretriever.get_relevant_documents(\"where did ankush go to college?\")\nprevious\nQdrant\nnext\nSingleStoreDB vector search\n Contents\n \nInstalling\nExample\nRedis as Retriever\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/redis.html"}
+{"id": "2253129e66b5-0", "text": ".ipynb\n.pdf\nClickHouse Vector Search\n Contents \nSetting up envrionments\nGet connection info and data schema\nClickhouse table schema\nFiltering\nDeleting your data\nClickHouse Vector Search#\nClickHouse is the fastest and most resource efficient open-source database for real-time apps and analytics with full SQL support and a wide range of functions to assist users in writing analytical queries. Lately added data structures and distance search functions (like L2Distance) as well as approximate nearest neighbor search indexes enable ClickHouse to be used as a high performance and scalable vector database to store and search vectors with SQL.\nThis notebook shows how to use functionality related to the ClickHouse vector search.\nSetting up envrionments#\nSetting up local clickhouse server with docker (optional)\n! docker run -d -p 8123:8123 -p9000:9000 --name langchain-clickhouse-server --ulimit nofile=262144:262144 clickhouse/clickhouse-server:23.4.2.11\nSetup up clickhouse client driver\n!pip install clickhouse-connect\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nimport os\nimport getpass\nif not os.environ['OPENAI_API_KEY']:\n os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import Clickhouse, ClickhouseSettings\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/clickhouse.html"}
+{"id": "2253129e66b5-1", "text": "docs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\nfor d in docs:\n d.metadata = {'some': 'metadata'}\nsettings = ClickhouseSettings(table=\"clickhouse_vector_search_example\")\ndocsearch = Clickhouse.from_documents(docs, embeddings, config=settings)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = docsearch.similarity_search(query)\nInserting data...: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 42/42 [00:00<00:00, 2801.49it/s]\nprint(docs[0].page_content)\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nGet connection info and data schema#\nprint(str(docsearch))\ndefault.clickhouse_vector_search_example @ localhost:8123\nusername: None\nTable Schema:\n---------------------------------------------------\n|id |Nullable(String) |\n|document |Nullable(String) |\n|embedding |Array(Float32) |\n|metadata |Object('json') |\n|uuid |UUID |\n---------------------------------------------------", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/clickhouse.html"}
+{"id": "2253129e66b5-2", "text": "|uuid |UUID |\n---------------------------------------------------\nClickhouse table schema#\nClickhouse table will be automatically created if not exist by default. Advanced users could pre-create the table with optimized settings. For distributed Clickhouse cluster with sharding, table engine should be configured as Distributed.\nprint(f\"Clickhouse Table DDL:\\n\\n{docsearch.schema}\")\nClickhouse Table DDL:\nCREATE TABLE IF NOT EXISTS default.clickhouse_vector_search_example(\n id Nullable(String),\n document Nullable(String),\n embedding Array(Float32),\n metadata JSON,\n uuid UUID DEFAULT generateUUIDv4(),\n CONSTRAINT cons_vec_len CHECK length(embedding) = 1536,\n INDEX vec_idx embedding TYPE annoy(100,'L2Distance') GRANULARITY 1000\n) ENGINE = MergeTree ORDER BY uuid SETTINGS index_granularity = 8192\nFiltering#\nYou can have direct access to ClickHouse SQL where statement. You can write WHERE clause following standard SQL.\nNOTE: Please be aware of SQL injection, this interface must not be directly called by end-user.\nIf you custimized your column_map under your setting, you search with filter like this:\nfrom langchain.vectorstores import Clickhouse, ClickhouseSettings\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\nfor i, d in enumerate(docs):\n d.metadata = {'doc_id': i}\ndocsearch = Clickhouse.from_documents(docs, embeddings)\nInserting data...: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 42/42 [00:00<00:00, 6939.56it/s]", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/clickhouse.html"}
+{"id": "2253129e66b5-3", "text": "meta = docsearch.metadata_column\noutput = docsearch.similarity_search_with_relevance_scores('What did the president say about Ketanji Brown Jackson?', \n k=4, where_str=f\"{meta}.doc_id<10\")\nfor d, dist in output:\n print(dist, d.metadata, d.page_content[:20] + '...')\n0.6779101415357189 {'doc_id': 0} Madam Speaker, Madam...\n0.6997970363474885 {'doc_id': 8} And so many families...\n0.7044504914336727 {'doc_id': 1} Groups of citizens b...\n0.7053558702165094 {'doc_id': 6} And I\u2019m taking robus...\nDeleting your data#\ndocsearch.drop()\nprevious\nChroma\nnext\nDeep Lake\n Contents\n \nSetting up envrionments\nGet connection info and data schema\nClickhouse table schema\nFiltering\nDeleting your data\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/clickhouse.html"}
+{"id": "3f40017b794a-0", "text": ".ipynb\n.pdf\nElasticSearch\n Contents \nElasticSearch\nElasticVectorSearch class\nInstallation\nExample\nElasticKnnSearch Class\nTest adding vectors\nTest knn search using query vector builder\nTest knn search using pre generated vector\nTest source option\nTest fields option\nTest with es client connection rather than cloud_id\nElasticSearch#\nElasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.\nThis notebook shows how to use functionality related to the Elasticsearch database.\nElasticVectorSearch class#\nInstallation#\nCheck out Elasticsearch installation instructions.\nTo connect to an Elasticsearch instance that does not require\nlogin credentials, pass the Elasticsearch URL and index name along with the\nembedding object to the constructor.\nExample:\n from langchain import ElasticVectorSearch\n from langchain.embeddings import OpenAIEmbeddings\n embedding = OpenAIEmbeddings()\n elastic_vector_search = ElasticVectorSearch(\n elasticsearch_url=\"http://localhost:9200\",\n index_name=\"test_index\",\n embedding=embedding\n )\nTo connect to an Elasticsearch instance that requires login credentials,\nincluding Elastic Cloud, use the Elasticsearch URL format\nhttps://username:password@es_host:9243. For example, to connect to Elastic\nCloud, create the Elasticsearch URL with the required authentication details and\npass it to the ElasticVectorSearch constructor as the named parameter\nelasticsearch_url.\nYou can obtain your Elastic Cloud URL and login credentials by logging in to the\nElastic Cloud console at https://cloud.elastic.co, selecting your deployment, and\nnavigating to the \u201cDeployments\u201d page.\nTo obtain your Elastic Cloud password for the default \u201celastic\u201d user:", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/elasticsearch.html"}
+{"id": "3f40017b794a-1", "text": "To obtain your Elastic Cloud password for the default \u201celastic\u201d user:\nLog in to the Elastic Cloud console at https://cloud.elastic.co\nGo to \u201cSecurity\u201d > \u201cUsers\u201d\nLocate the \u201celastic\u201d user and click \u201cEdit\u201d\nClick \u201cReset password\u201d\nFollow the prompts to reset the password\nFormat for Elastic Cloud URLs is\nhttps://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.\nExample:\n from langchain import ElasticVectorSearch\n from langchain.embeddings import OpenAIEmbeddings\n embedding = OpenAIEmbeddings()\n elastic_host = \"cluster_id.region_id.gcp.cloud.es.io\"\n elasticsearch_url = f\"https://username:password@{elastic_host}:9243\"\n elastic_vector_search = ElasticVectorSearch(\n elasticsearch_url=elasticsearch_url,\n index_name=\"test_index\",\n embedding=embedding\n )\n!pip install elasticsearch\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nExample#\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import ElasticVectorSearch\nfrom langchain.document_loaders import TextLoader\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\ndb = ElasticVectorSearch.from_documents(docs, embeddings, elasticsearch_url=\"http://localhost:9200\")\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = db.similarity_search(query)", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/elasticsearch.html"}
+{"id": "3f40017b794a-2", "text": "docs = db.similarity_search(query)\nprint(docs[0].page_content)\nIn state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \nWe cannot let this happen. \nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nElasticKnnSearch Class#\nThe ElasticKnnSearch implements features allowing storing vectors and documents in Elasticsearch for use with approximate kNN search\n!pip install langchain elasticsearch\nfrom langchain.vectorstores.elastic_vector_search import ElasticKnnSearch\nfrom langchain.embeddings import ElasticsearchEmbeddings\nimport elasticsearch\n# Initialize ElasticsearchEmbeddings\nmodel_id = \"\" \ndims = dim_count\nes_cloud_id = \"ESS_CLOUD_ID\"\nes_user = \"es_user\"\nes_password = \"es_pass\"\ntest_index = \"\"\n#input_field = \"your_input_field\" # if different from 'text_field'\n# Generate embedding object\nembeddings = ElasticsearchEmbeddings.from_credentials(\n model_id,\n #input_field=input_field,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/elasticsearch.html"}
+{"id": "3f40017b794a-3", "text": "model_id,\n #input_field=input_field,\n es_cloud_id=es_cloud_id,\n es_user=es_user,\n es_password=es_password,\n)\n# Initialize ElasticKnnSearch\nknn_search = ElasticKnnSearch(\n\tes_cloud_id=es_cloud_id, \n\tes_user=es_user, \n\tes_password=es_password, \n\tindex_name= test_index, \n\tembedding= embeddings\n)\nTest adding vectors#\n# Test `add_texts` method\ntexts = [\"Hello, world!\", \"Machine learning is fun.\", \"I love Python.\"]\nknn_search.add_texts(texts)\n# Test `from_texts` method\nnew_texts = [\"This is a new text.\", \"Elasticsearch is powerful.\", \"Python is great for data analysis.\"]\nknn_search.from_texts(new_texts, dims=dims)\nTest knn search using query vector builder#\n# Test `knn_search` method with model_id and query_text\nquery = \"Hello\"\nknn_result = knn_search.knn_search(query = query, model_id= model_id, k=2)\nprint(f\"kNN search results for query '{query}': {knn_result}\")\nprint(f\"The 'text' field value from the top hit is: '{knn_result['hits']['hits'][0]['_source']['text']}'\")\n# Test `hybrid_search` method\nquery = \"Hello\"\nhybrid_result = knn_search.knn_hybrid_search(query = query, model_id= model_id, k=2)\nprint(f\"Hybrid search results for query '{query}': {hybrid_result}\")\nprint(f\"The 'text' field value from the top hit is: '{hybrid_result['hits']['hits'][0]['_source']['text']}'\")\nTest knn search using pre generated vector#", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/elasticsearch.html"}
+{"id": "3f40017b794a-4", "text": "Test knn search using pre generated vector#\n# Generate embedding for tests\nquery_text = 'Hello'\nquery_embedding = embeddings.embed_query(query_text)\nprint(f\"Length of embedding: {len(query_embedding)}\\nFirst two items in embedding: {query_embedding[:2]}\")\n# Test knn Search\nknn_result = knn_search.knn_search(query_vector = query_embedding, k=2)\nprint(f\"The 'text' field value from the top hit is: '{knn_result['hits']['hits'][0]['_source']['text']}'\")\n# Test hybrid search - Requires both query_text and query_vector\nknn_result = knn_search.knn_hybrid_search(query_vector = query_embedding, query=query_text, k=2)\nprint(f\"The 'text' field value from the top hit is: '{knn_result['hits']['hits'][0]['_source']['text']}'\")\nTest source option#\n# Test `knn_search` method with model_id and query_text\nquery = \"Hello\"\nknn_result = knn_search.knn_search(query = query, model_id= model_id, k=2, source=False)\nassert not '_source' in knn_result['hits']['hits'][0].keys()\n# Test `hybrid_search` method\nquery = \"Hello\"\nhybrid_result = knn_search.knn_hybrid_search(query = query, model_id= model_id, k=2, source=False)\nassert not '_source' in hybrid_result['hits']['hits'][0].keys()\nTest fields option#\n# Test `knn_search` method with model_id and query_text\nquery = \"Hello\"\nknn_result = knn_search.knn_search(query = query, model_id= model_id, k=2, fields=['text'])", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/elasticsearch.html"}
+{"id": "3f40017b794a-5", "text": "assert 'text' in knn_result['hits']['hits'][0]['fields'].keys()\n# Test `hybrid_search` method\nquery = \"Hello\"\nhybrid_result = knn_search.knn_hybrid_search(query = query, model_id= model_id, k=2, fields=['text'])\nassert 'text' in hybrid_result['hits']['hits'][0]['fields'].keys()\nTest with es client connection rather than cloud_id#\n# Create Elasticsearch connection\nes_connection = Elasticsearch(\n hosts=['https://es_cluster_url:port'], \n basic_auth=('user', 'password')\n)\n# Instantiate ElasticsearchEmbeddings using es_connection\nembeddings = ElasticsearchEmbeddings.from_es_connection(\n model_id,\n es_connection,\n)\n# Initialize ElasticKnnSearch\nknn_search = ElasticKnnSearch(\n\tes_connection = es_connection,\n\tindex_name= test_index, \n\tembedding= embeddings\n)\n# Test `knn_search` method with model_id and query_text\nquery = \"Hello\"\nknn_result = knn_search.knn_search(query = query, model_id= model_id, k=2)\nprint(f\"kNN search results for query '{query}': {knn_result}\")\nprint(f\"The 'text' field value from the top hit is: '{knn_result['hits']['hits'][0]['_source']['text']}'\")\nprevious\nDocArrayInMemorySearch\nnext\nFAISS\n Contents\n \nElasticSearch\nElasticVectorSearch class\nInstallation\nExample\nElasticKnnSearch Class\nTest adding vectors\nTest knn search using query vector builder\nTest knn search using pre generated vector\nTest source option\nTest fields option\nTest with es client connection rather than cloud_id\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/elasticsearch.html"}
+{"id": "3f40017b794a-6", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/elasticsearch.html"}
+{"id": "b7f1992d2312-0", "text": ".ipynb\n.pdf\nOpenSearch\n Contents \nInstallation\nsimilarity_search using Approximate k-NN\nsimilarity_search using Script Scoring\nsimilarity_search using Painless Scripting\nUsing a preexisting OpenSearch instance\nOpenSearch#\nOpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene.\nThis notebook shows how to use functionality related to the OpenSearch database.\nTo run, you should have an OpenSearch instance up and running: see here for an easy Docker installation.\nsimilarity_search by default performs the Approximate k-NN Search which uses one of the several algorithms like lucene, nmslib, faiss recommended for\nlarge datasets. To perform brute force search we have other search methods known as Script Scoring and Painless Scripting.\nCheck this for more details.\nInstallation#\nInstall the Python client.\n!pip install opensearch-py\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import OpenSearchVectorSearch\nfrom langchain.document_loaders import TextLoader\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\nsimilarity_search using Approximate k-NN#", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/opensearch.html"}
+{"id": "b7f1992d2312-1", "text": "embeddings = OpenAIEmbeddings()\nsimilarity_search using Approximate k-NN#\nsimilarity_search using Approximate k-NN Search with Custom Parameters\ndocsearch = OpenSearchVectorSearch.from_documents(\n docs, \n embeddings, \n opensearch_url=\"http://localhost:9200\"\n)\n# If using the default Docker installation, use this instantiation instead:\n# docsearch = OpenSearchVectorSearch.from_documents(\n# docs, \n# embeddings, \n# opensearch_url=\"https://localhost:9200\", \n# http_auth=(\"admin\", \"admin\"), \n# use_ssl = False,\n# verify_certs = False,\n# ssl_assert_hostname = False,\n# ssl_show_warn = False,\n# )\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = docsearch.similarity_search(query, k=10)\nprint(docs[0].page_content)\ndocsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url=\"http://localhost:9200\", engine=\"faiss\", space_type=\"innerproduct\", ef_construction=256, m=48)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = docsearch.similarity_search(query)\nprint(docs[0].page_content)\nsimilarity_search using Script Scoring#\nsimilarity_search using Script Scoring with Custom Parameters\ndocsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url=\"http://localhost:9200\", is_appx_search=False)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = docsearch.similarity_search(\"What did the president say about Ketanji Brown Jackson\", k=1, search_type=\"script_scoring\")", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/opensearch.html"}
+{"id": "b7f1992d2312-2", "text": "print(docs[0].page_content)\nsimilarity_search using Painless Scripting#\nsimilarity_search using Painless Scripting with Custom Parameters\ndocsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url=\"http://localhost:9200\", is_appx_search=False)\nfilter = {\"bool\": {\"filter\": {\"term\": {\"text\": \"smuggling\"}}}}\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = docsearch.similarity_search(\"What did the president say about Ketanji Brown Jackson\", search_type=\"painless_scripting\", space_type=\"cosineSimilarity\", pre_filter=filter)\nprint(docs[0].page_content)\nUsing a preexisting OpenSearch instance#\nIt\u2019s also possible to use a preexisting OpenSearch instance with documents that already have vectors present.\n# this is just an example, you would need to change these values to point to another opensearch instance\ndocsearch = OpenSearchVectorSearch(index_name=\"index-*\", embedding_function=embeddings, opensearch_url=\"http://localhost:9200\")\n# you can specify custom field names to match the fields you're using to store your embedding, document text value, and metadata\ndocs = docsearch.similarity_search(\"Who was asking about getting lunch today?\", search_type=\"script_scoring\", space_type=\"cosinesimil\", vector_field=\"message_embedding\", text_field=\"message\", metadata_field=\"message_metadata\")\nprevious\nMyScale\nnext\nPGVector\n Contents\n \nInstallation\nsimilarity_search using Approximate k-NN\nsimilarity_search using Script Scoring\nsimilarity_search using Painless Scripting\nUsing a preexisting OpenSearch instance\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/opensearch.html"}
+{"id": "df1dd20453d9-0", "text": ".ipynb\n.pdf\nDocArrayInMemorySearch\n Contents \nSetup\nUsing DocArrayInMemorySearch\nSimilarity search\nSimilarity search with score\nDocArrayInMemorySearch#\nDocArrayInMemorySearch is a document index provided by Docarray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server.\nThis notebook shows how to use functionality related to the DocArrayInMemorySearch.\nSetup#\nUncomment the below cells to install docarray and get/set your OpenAI api key if you haven\u2019t already done so.\n# !pip install \"docarray\"\n# Get an OpenAI token: https://platform.openai.com/account/api-keys\n# import os\n# from getpass import getpass\n# OPENAI_API_KEY = getpass()\n# os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY\nUsing DocArrayInMemorySearch#\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import DocArrayInMemorySearch\nfrom langchain.document_loaders import TextLoader\ndocuments = TextLoader('../../../state_of_the_union.txt').load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\ndb = DocArrayInMemorySearch.from_documents(docs, embeddings)\nSimilarity search#\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = db.similarity_search(query)\nprint(docs[0].page_content)\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/docarray_in_memory.html"}
+{"id": "df1dd20453d9-1", "text": "Tonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nSimilarity search with score#\nThe returned distance score is cosine distance. Therefore, a lower score is better.\ndocs = db.similarity_search_with_score(query)\ndocs[0]\n(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', metadata={}),\n 0.8154190158347903)\nprevious\nDocArrayHnswSearch\nnext\nElasticSearch\n Contents\n \nSetup\nUsing DocArrayInMemorySearch\nSimilarity search\nSimilarity search with score\nBy Harrison Chase", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/docarray_in_memory.html"}
+{"id": "df1dd20453d9-2", "text": "Similarity search\nSimilarity search with score\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/docarray_in_memory.html"}
+{"id": "4067997422c2-0", "text": ".ipynb\n.pdf\nTypesense\n Contents \nSimilarity Search\nTypesense as a Retriever\nTypesense#\nTypesense is an open source, in-memory search engine, that you can either self-host or run on Typesense Cloud.\nTypesense focuses on performance by storing the entire index in RAM (with a backup on disk) and also focuses on providing an out-of-the-box developer experience by simplifying available options and setting good defaults.\nIt also lets you combine attribute-based filtering together with vector queries, to fetch the most relevant documents.\nThis notebook shows you how to use Typesense as your VectorStore.\nLet\u2019s first install our dependencies:\n!pip install typesense openapi-schema-pydantic openai tiktoken\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import Typesense\nfrom langchain.document_loaders import TextLoader\nLet\u2019s import our test dataset:\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\ndocsearch = Typesense.from_documents(docs,\n embeddings,\n typesense_client_params={\n 'host': 'localhost', # Use xxx.a1.typesense.net for Typesense Cloud\n 'port': '8108', # Use 443 for Typesense Cloud\n 'protocol': 'http', # Use https for Typesense Cloud\n 'typesense_api_key': 'xyz',", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/typesense.html"}
+{"id": "4067997422c2-1", "text": "'typesense_api_key': 'xyz',\n 'typesense_collection_name': 'lang-chain'\n })\nSimilarity Search#\nquery = \"What did the president say about Ketanji Brown Jackson\"\nfound_docs = docsearch.similarity_search(query)\nprint(found_docs[0].page_content)\nTypesense as a Retriever#\nTypesense, as all the other vector stores, is a LangChain Retriever, by using cosine similarity.\nretriever = docsearch.as_retriever()\nretriever\nquery = \"What did the president say about Ketanji Brown Jackson\"\nretriever.get_relevant_documents(query)[0]\nprevious\nTigris\nnext\nVectara\n Contents\n \nSimilarity Search\nTypesense as a Retriever\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/typesense.html"}
+{"id": "56828f0d4b7f-0", "text": ".ipynb\n.pdf\nSingleStoreDB vector search\nSingleStoreDB vector search#\nSingleStore DB is a high-performance distributed database that supports deployment both in the cloud and on-premises. For a significant duration, it has provided support for vector functions such as dot_product, thereby positioning itself as an ideal solution for AI applications that require text similarity matching.\nThis tutorial illustrates how to utilize the features of the SingleStore DB Vector Store.\n# Establishing a connection to the database is facilitated through the singlestoredb Python connector.\n# Please ensure that this connector is installed in your working environment.\n!pip install singlestoredb\nimport os\nimport getpass\n# We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import SingleStoreDB\nfrom langchain.document_loaders import TextLoader\n# Load text samples \nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\nThere are several ways to establish a connection to the database. You can either set up environment variables or pass named parameters to the SingleStoreDB constructor. Alternatively, you may provide these parameters to the from_documents and from_texts methods.\n# Setup connection url as environment variable\nos.environ['SINGLESTOREDB_URL'] = 'root:pass@localhost:3306/db'\n# Load documents to the store\ndocsearch = SingleStoreDB.from_documents(\n docs,\n embeddings,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/singlestoredb.html"}
+{"id": "56828f0d4b7f-1", "text": "docsearch = SingleStoreDB.from_documents(\n docs,\n embeddings,\n table_name = \"noteook\", # use table with a custom name \n)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = docsearch.similarity_search(query) # Find documents that correspond to the query\nprint(docs[0].page_content)\nprevious\nRedis\nnext\nSKLearnVectorStore\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/singlestoredb.html"}
+{"id": "4a709ad2f118-0", "text": ".ipynb\n.pdf\nVectara\n Contents \nConnecting to Vectara from LangChain\nSimilarity search\nSimilarity search with score\nVectara as a Retriever\nVectara#\nVectara is a API platform for building LLM-powered applications. It provides a simple to use API for document indexing and query that is managed by Vectara and is optimized for performance and accuracy.\nThis notebook shows how to use functionality related to the Vectara vector database.\nSee the Vectara API documentation for more information on how to use the API.\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nOpenAI API Key:\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import Vectara\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\nConnecting to Vectara from LangChain#\nThe Vectara API provides simple API endpoints for indexing and querying.\nvectara = Vectara.from_documents(docs, embedding=None)\nSimilarity search#\nThe simplest scenario for using Vectara is to perform a similarity search.\nquery = \"What did the president say about Ketanji Brown Jackson\"\nfound_docs = vectara.similarity_search(query, n_sentence_context=0)\nprint(found_docs[0].page_content)", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/vectara.html"}
+{"id": "4a709ad2f118-1", "text": "print(found_docs[0].page_content)\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nSimilarity search with score#\nSometimes we might want to perform the search, but also obtain a relevancy score to know how good is a particular result.\nquery = \"What did the president say about Ketanji Brown Jackson\"\nfound_docs = vectara.similarity_search_with_score(query)\ndocument, score = found_docs[0]\nprint(document.page_content)\nprint(f\"\\nScore: {score}\")\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/vectara.html"}
+{"id": "4a709ad2f118-2", "text": "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nScore: 0.7129974\nVectara as a Retriever#\nVectara, as all the other vector stores, is a LangChain Retriever, by using cosine similarity.\nretriever = vectara.as_retriever()\nretriever\nVectaraRetriever(vectorstore=, search_type='similarity', search_kwargs={'lambda_val': 0.025, 'k': 5, 'filter': '', 'n_sentence_context': '0'})\nquery = \"What did the president say about Ketanji Brown Jackson\"\nretriever.get_relevant_documents(query)[0]\nDocument(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})\nprevious\nTypesense\nnext\nWeaviate\n Contents", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/vectara.html"}
+{"id": "4a709ad2f118-3", "text": "previous\nTypesense\nnext\nWeaviate\n Contents\n \nConnecting to Vectara from LangChain\nSimilarity search\nSimilarity search with score\nVectara as a Retriever\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/vectara.html"}
+{"id": "e2819ed910c4-0", "text": ".ipynb\n.pdf\nAnalyticDB\nAnalyticDB#\nAnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online.\nAnalyticDB for PostgreSQL is developed based on the open source Greenplum Database project and is enhanced with in-depth extensions by Alibaba Cloud. AnalyticDB for PostgreSQL is compatible with the ANSI SQL 2003 syntax and the PostgreSQL and Oracle database ecosystems. AnalyticDB for PostgreSQL also supports row store and column store. AnalyticDB for PostgreSQL processes petabytes of data offline at a high performance level and supports highly concurrent online queries.\nThis notebook shows how to use functionality related to the AnalyticDB vector database.\nTo run, you should have an AnalyticDB instance up and running:\nUsing AnalyticDB Cloud Vector Database. Click here to fast deploy it.\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import AnalyticDB\nSplit documents and get embeddings by call OpenAI API\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\nConnect to AnalyticDB by setting related ENVIRONMENTS.\nexport PG_HOST={your_analyticdb_hostname}\nexport PG_PORT={your_analyticdb_port} # Optional, default is 5432\nexport PG_DATABASE={your_database} # Optional, default is postgres\nexport PG_USER={database_username}\nexport PG_PASSWORD={database_password}\nThen store your embeddings and documents into AnalyticDB\nimport os\nconnection_string = AnalyticDB.connection_string_from_db_params(", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/analyticdb.html"}
+{"id": "e2819ed910c4-1", "text": "import os\nconnection_string = AnalyticDB.connection_string_from_db_params(\n driver=os.environ.get(\"PG_DRIVER\", \"psycopg2cffi\"),\n host=os.environ.get(\"PG_HOST\", \"localhost\"),\n port=int(os.environ.get(\"PG_PORT\", \"5432\")),\n database=os.environ.get(\"PG_DATABASE\", \"postgres\"),\n user=os.environ.get(\"PG_USER\", \"postgres\"),\n password=os.environ.get(\"PG_PASSWORD\", \"postgres\"),\n)\nvector_db = AnalyticDB.from_documents(\n docs,\n embeddings,\n connection_string= connection_string,\n)\nQuery and retrieve data\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = vector_db.similarity_search(query)\nprint(docs[0].page_content)\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nprevious\nGetting Started\nnext\nAnnoy\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/analyticdb.html"}
+{"id": "9d4bfd23f2ab-0", "text": ".ipynb\n.pdf\nFAISS\n Contents \nSimilarity Search with score\nSaving and loading\nMerging\nFAISS#\nFacebook AI Similarity Search (Faiss) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning.\nFaiss documentation.\nThis notebook shows how to use functionality related to the FAISS vector database.\n#!pip install faiss\n# OR\n!pip install faiss-cpu\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\n# Uncomment the following line if you need to initialize FAISS with no AVX2 optimization\n# os.environ['FAISS_NO_AVX2'] = '1'\nOpenAI API Key: \u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import FAISS\nfrom langchain.document_loaders import TextLoader\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\ndb = FAISS.from_documents(docs, embeddings)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = db.similarity_search(query)\nprint(docs[0].page_content)", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html"}
+{"id": "9d4bfd23f2ab-1", "text": "docs = db.similarity_search(query)\nprint(docs[0].page_content)\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nSimilarity Search with score#\nThere are some FAISS specific methods. One of them is similarity_search_with_score, which allows you to return not only the documents but also the distance score of the query to them. The returned distance score is L2 distance. Therefore, a lower score is better.\ndocs_and_scores = db.similarity_search_with_score(query)\ndocs_and_scores[0]", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html"}
+{"id": "9d4bfd23f2ab-2", "text": "docs_and_scores = db.similarity_search_with_score(query)\ndocs_and_scores[0]\n(Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \\n\\nWe cannot let this happen. \\n\\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),\n 0.3914415)\nIt is also possible to do a search for documents similar to a given embedding vector using similarity_search_by_vector which accepts an embedding vector as a parameter instead of a string.\nembedding_vector = embeddings.embed_query(query)\ndocs_and_scores = db.similarity_search_by_vector(embedding_vector)\nSaving and loading#\nYou can also save and load a FAISS index. This is useful so you don\u2019t have to recreate it everytime you use it.\ndb.save_local(\"faiss_index\")\nnew_db = FAISS.load_local(\"faiss_index\", embeddings)\ndocs = new_db.similarity_search(query)\ndocs[0]", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html"}
+{"id": "9d4bfd23f2ab-3", "text": "docs = new_db.similarity_search(query)\ndocs[0]\nDocument(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \\n\\nWe cannot let this happen. \\n\\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)\nMerging#\nYou can also merge two FAISS vectorstores\ndb1 = FAISS.from_texts([\"foo\"], embeddings)\ndb2 = FAISS.from_texts([\"bar\"], embeddings)\ndb1.docstore._dict\n{'e0b74348-6c93-4893-8764-943139ec1d17': Document(page_content='foo', lookup_str='', metadata={}, lookup_index=0)}\ndb2.docstore._dict\n{'bdc50ae3-a1bb-4678-9260-1b0979578f40': Document(page_content='bar', lookup_str='', metadata={}, lookup_index=0)}\ndb1.merge_from(db2)\ndb1.docstore._dict", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html"}
+{"id": "9d4bfd23f2ab-4", "text": "db1.merge_from(db2)\ndb1.docstore._dict\n{'e0b74348-6c93-4893-8764-943139ec1d17': Document(page_content='foo', lookup_str='', metadata={}, lookup_index=0),\n 'd5211050-c777-493d-8825-4800e74cfdb6': Document(page_content='bar', lookup_str='', metadata={}, lookup_index=0)}\nprevious\nElasticSearch\nnext\nLanceDB\n Contents\n \nSimilarity Search with score\nSaving and loading\nMerging\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html"}
+{"id": "54e9dd38bf3c-0", "text": ".ipynb\n.pdf\nSupabase (Postgres)\n Contents \nSimilarity search with score\nRetriever options\nMaximal Marginal Relevance Searches\nSupabase (Postgres)#\nSupabase is an open source Firebase alternative. Supabase is built on top of PostgreSQL, which offers strong SQL querying capabilities and enables a simple interface with already-existing tools and frameworks.\nPostgreSQL also known as Postgres, is a free and open-source relational database management system (RDBMS) emphasizing extensibility and SQL compliance.\nThis notebook shows how to use Supabase and pgvector as your VectorStore.\nTo run this notebook, please ensure:\nthe pgvector extension is enabled\nyou have installed the supabase-py package\nthat you have created a match_documents function in your database\nthat you have a documents table in your public schema similar to the one below.\nThe following function determines cosine similarity, but you can adjust to your needs.\n -- Enable the pgvector extension to work with embedding vectors\n create extension vector;\n -- Create a table to store your documents\n create table documents (\n id bigserial primary key,\n content text, -- corresponds to Document.pageContent\n metadata jsonb, -- corresponds to Document.metadata\n embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed\n );\n CREATE FUNCTION match_documents(query_embedding vector(1536), match_count int)\n RETURNS TABLE(\n id bigint,\n content text,\n metadata jsonb,\n -- we return matched vectors to enable maximal marginal relevance searches\n embedding vector(1536),\n similarity float)\n LANGUAGE plpgsql\n AS $$\n # variable_conflict use_column\n BEGIN\n RETURN query\n SELECT\n id,\n content,\n metadata,\n embedding,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/supabase.html"}
+{"id": "54e9dd38bf3c-1", "text": "SELECT\n id,\n content,\n metadata,\n embedding,\n 1 -(documents.embedding <=> query_embedding) AS similarity\n FROM\n documents\n ORDER BY\n documents.embedding <=> query_embedding\n LIMIT match_count;\n END;\n $$;\n# with pip\n!pip install supabase\n# with conda\n# !conda install -c conda-forge supabase\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nos.environ['SUPABASE_URL'] = getpass.getpass('Supabase URL:')\nos.environ['SUPABASE_SERVICE_KEY'] = getpass.getpass('Supabase Service Key:')\n# If you're storing your Supabase and OpenAI API keys in a .env file, you can load them with dotenv\nfrom dotenv import load_dotenv\nload_dotenv()\nimport os\nfrom supabase.client import Client, create_client\nsupabase_url = os.environ.get(\"SUPABASE_URL\")\nsupabase_key = os.environ.get(\"SUPABASE_SERVICE_KEY\")\nsupabase: Client = create_client(supabase_url, supabase_key)\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import SupabaseVectorStore\nfrom langchain.document_loaders import TextLoader\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader(\"../../../state_of_the_union.txt\")\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/supabase.html"}
+{"id": "54e9dd38bf3c-2", "text": "docs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\n# We're using the default `documents` table here. You can modify this by passing in a `table_name` argument to the `from_documents` method.\nvector_store = SupabaseVectorStore.from_documents(\n docs, embeddings, client=supabase\n)\nquery = \"What did the president say about Ketanji Brown Jackson\"\nmatched_docs = vector_store.similarity_search(query)\nprint(matched_docs[0].page_content)\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nSimilarity search with score#\nThe returned distance score is cosine distance. Therefore, a lower score is better.\nmatched_docs = vector_store.similarity_search_with_relevance_scores(query)\nmatched_docs[0]", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/supabase.html"}
+{"id": "54e9dd38bf3c-3", "text": "matched_docs[0]\n(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}),\n 0.802509746274066)\nRetriever options#\nThis section goes over different options for how to use SupabaseVectorStore as a retriever.\nMaximal Marginal Relevance Searches#\nIn addition to using similarity search in the retriever object, you can also use mmr.\nretriever = vector_store.as_retriever(search_type=\"mmr\")\nmatched_docs = retriever.get_relevant_documents(query)\nfor i, d in enumerate(matched_docs):\n print(f\"\\n## Document {i}\\n\")\n print(d.page_content)\n## Document 0\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/supabase.html"}
+{"id": "54e9dd38bf3c-4", "text": "Tonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\n## Document 1\nOne was stationed at bases and breathing in toxic smoke from \u201cburn pits\u201d that incinerated wastes of war\u2014medical and hazard material, jet fuel, and more. \nWhen they came home, many of the world\u2019s fittest and best trained warriors were never the same. \nHeadaches. Numbness. Dizziness. \nA cancer that would put them in a flag-draped coffin. \nI know. \nOne of those soldiers was my son Major Beau Biden. \nWe don\u2019t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. \nBut I\u2019m committed to finding out everything we can. \nCommitted to military families like Danielle Robinson from Ohio. \nThe widow of Sergeant First Class Heath Robinson. \nHe was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. \nStationed near Baghdad, just yards from burn pits the size of football fields. \nHeath\u2019s widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter.\n## Document 2", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/supabase.html"}
+{"id": "54e9dd38bf3c-5", "text": "## Document 2\nAnd I\u2019m taking robust action to make sure the pain of our sanctions is targeted at Russia\u2019s economy. And I will use every tool at our disposal to protect American businesses and consumers. \nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \nThese steps will help blunt gas prices here at home. And I know the news about what\u2019s happening can seem alarming. \nBut I want you to know that we are going to be okay. \nWhen the history of this era is written Putin\u2019s war on Ukraine will have left Russia weaker and the rest of the world stronger. \nWhile it shouldn\u2019t have taken something so terrible for people around the world to see what\u2019s at stake now everyone sees it clearly.\n## Document 3\nWe can\u2019t change how divided we\u2019ve been. But we can change how we move forward\u2014on COVID-19 and other issues we must face together. \nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \nOfficer Mora was 27 years old. \nOfficer Rivera was 22. \nBoth Dominican Americans who\u2019d grown up on the same streets they later chose to patrol as police officers. \nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \nI\u2019ve worked on these issues a long time.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/supabase.html"}
+{"id": "54e9dd38bf3c-6", "text": "I\u2019ve worked on these issues a long time. \nI know what works: Investing in crime preventionand community police officers who\u2019ll walk the beat, who\u2019ll know the neighborhood, and who can restore trust and safety.\nprevious\nSKLearnVectorStore\nnext\nTair\n Contents\n \nSimilarity search with score\nRetriever options\nMaximal Marginal Relevance Searches\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/supabase.html"}
+{"id": "63212a219bbe-0", "text": ".ipynb\n.pdf\nMatchingEngine\n Contents \nCreate VectorStore from texts\nCreate Index and deploy it to an Endpoint\nImports, Constants and Configs\nUsing Tensorflow Universal Sentence Encoder as an Embedder\nInserting a test embedding\nCreating Index\nCreating Endpoint\nDeploy Index\nMatchingEngine#\nThis notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database.\nVertex AI Matching Engine provides the industry\u2019s leading high-scale low latency vector database. These vector databases are commonly referred to as vector similarity-matching or an approximate nearest neighbor (ANN) service.\nNote: This module expects an endpoint and deployed index already created as the creation time takes close to one hour. To see how to create an index refer to the section Create Index and deploy it to an Endpoint\nCreate VectorStore from texts#\nfrom langchain.vectorstores import MatchingEngine\ntexts = ['The cat sat on', 'the mat.', 'I like to', 'eat pizza for', 'dinner.', 'The sun sets', 'in the west.']\nvector_store = MatchingEngine.from_components(\n texts=texts,\n project_id=\"\",\n region=\"\",\n gcs_bucket_uri=\"\",\n index_id=\"\",\n endpoint_id=\"\"\n)\nvector_store.add_texts(texts=texts)\nvector_store.similarity_search(\"lunch\", k=2)\nCreate Index and deploy it to an Endpoint#\nImports, Constants and Configs#\n# Installing dependencies.\n!pip install tensorflow \\\n google-cloud-aiplatform \\\n tensorflow-hub \\\n tensorflow-text \nimport os\nimport json\nfrom google.cloud import aiplatform\nimport tensorflow_hub as hub\nimport tensorflow_text\nPROJECT_ID = \"\"\nREGION = \"\"", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/matchingengine.html"}
+{"id": "63212a219bbe-1", "text": "import tensorflow_text\nPROJECT_ID = \"\"\nREGION = \"\"\nVPC_NETWORK = \"\"\nPEERING_RANGE_NAME = \"ann-langchain-me-range\" # Name for creating the VPC peering.\nBUCKET_URI = \"gs://\"\n# The number of dimensions for the tensorflow universal sentence encoder. \n# If other embedder is used, the dimensions would probably need to change.\nDIMENSIONS = 512\nDISPLAY_NAME = \"index-test-name\"\nEMBEDDING_DIR = f\"{BUCKET_URI}/banana\"\nDEPLOYED_INDEX_ID = \"endpoint-test-name\"\nPROJECT_NUMBER = !gcloud projects list --filter=\"PROJECT_ID:'{PROJECT_ID}'\" --format='value(PROJECT_NUMBER)'\nPROJECT_NUMBER = PROJECT_NUMBER[0]\nVPC_NETWORK_FULL = f\"projects/{PROJECT_NUMBER}/global/networks/{VPC_NETWORK}\"\n# Change this if you need the VPC to be created.\nCREATE_VPC = False\n# Set the project id\n! gcloud config set project {PROJECT_ID}\n# Remove the if condition to run the encapsulated code\nif CREATE_VPC:\n # Create a VPC network\n ! gcloud compute networks create {VPC_NETWORK} --bgp-routing-mode=regional --subnet-mode=auto --project={PROJECT_ID}\n # Add necessary firewall rules\n ! gcloud compute firewall-rules create {VPC_NETWORK}-allow-icmp --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow icmp\n ! gcloud compute firewall-rules create {VPC_NETWORK}-allow-internal --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow all --source-ranges 10.128.0.0/9", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/matchingengine.html"}
+{"id": "63212a219bbe-2", "text": "! gcloud compute firewall-rules create {VPC_NETWORK}-allow-rdp --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow tcp:3389\n ! gcloud compute firewall-rules create {VPC_NETWORK}-allow-ssh --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow tcp:22\n # Reserve IP range\n ! gcloud compute addresses create {PEERING_RANGE_NAME} --global --prefix-length=16 --network={VPC_NETWORK} --purpose=VPC_PEERING --project={PROJECT_ID} --description=\"peering range\"\n # Set up peering with service networking\n # Your account must have the \"Compute Network Admin\" role to run the following.\n ! gcloud services vpc-peerings connect --service=servicenetworking.googleapis.com --network={VPC_NETWORK} --ranges={PEERING_RANGE_NAME} --project={PROJECT_ID}\n# Creating bucket.\n! gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URI\nUsing Tensorflow Universal Sentence Encoder as an Embedder#\n# Load the Universal Sentence Encoder module\nmodule_url = \"https://tfhub.dev/google/universal-sentence-encoder-multilingual/3\"\nmodel = hub.load(module_url)\n# Generate embeddings for each word\nembeddings = model(['banana'])\nInserting a test embedding#\ninitial_config = {\"id\": \"banana_id\", \"embedding\": [float(x) for x in list(embeddings.numpy()[0])]}\nwith open(\"data.json\", \"w\") as f:\n json.dump(initial_config, f)\n!gsutil cp data.json {EMBEDDING_DIR}/file.json\naiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)\nCreating Index#", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/matchingengine.html"}
+{"id": "63212a219bbe-3", "text": "Creating Index#\nmy_index = aiplatform.MatchingEngineIndex.create_tree_ah_index(\n display_name=DISPLAY_NAME,\n contents_delta_uri=EMBEDDING_DIR,\n dimensions=DIMENSIONS,\n approximate_neighbors_count=150,\n distance_measure_type=\"DOT_PRODUCT_DISTANCE\"\n)\nCreating Endpoint#\nmy_index_endpoint = aiplatform.MatchingEngineIndexEndpoint.create(\n display_name=f\"{DISPLAY_NAME}-endpoint\",\n network=VPC_NETWORK_FULL,\n)\nDeploy Index#\nmy_index_endpoint = my_index_endpoint.deploy_index(\n index=my_index, \n deployed_index_id=DEPLOYED_INDEX_ID\n)\nmy_index_endpoint.deployed_indexes\nprevious\nLanceDB\nnext\nMilvus\n Contents\n \nCreate VectorStore from texts\nCreate Index and deploy it to an Endpoint\nImports, Constants and Configs\nUsing Tensorflow Universal Sentence Encoder as an Embedder\nInserting a test embedding\nCreating Index\nCreating Endpoint\nDeploy Index\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/matchingengine.html"}
+{"id": "3e8ffb678ec1-0", "text": ".ipynb\n.pdf\nLanceDB\nLanceDB#\nLanceDB is an open-source database for vector-search built with persistent storage, which greatly simplifies retrevial, filtering and management of embeddings. Fully open source.\nThis notebook shows how to use functionality related to the LanceDB vector database based on the Lance data format.\n!pip install lancedb\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.vectorstores import LanceDB\nfrom langchain.document_loaders import TextLoader\nfrom langchain.text_splitter import CharacterTextSplitter\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ndocuments = CharacterTextSplitter().split_documents(documents)\nembeddings = OpenAIEmbeddings()\nimport lancedb\ndb = lancedb.connect('/tmp/lancedb')\ntable = db.create_table(\"my_table\", data=[\n {\"vector\": embeddings.embed_query(\"Hello World\"), \"text\": \"Hello World\", \"id\": \"1\"}\n], mode=\"overwrite\")\ndocsearch = LanceDB.from_documents(documents, embeddings, connection=table)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = docsearch.similarity_search(query)\nprint(docs[0].page_content)\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \nOfficer Mora was 27 years old. \nOfficer Rivera was 22. \nBoth Dominican Americans who\u2019d grown up on the same streets they later chose to patrol as police officers.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/lancedb.html"}
+{"id": "3e8ffb678ec1-1", "text": "I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \nI\u2019ve worked on these issues a long time. \nI know what works: Investing in crime preventionand community police officers who\u2019ll walk the beat, who\u2019ll know the neighborhood, and who can restore trust and safety. \nSo let\u2019s not abandon our streets. Or choose between safety and equal justice. \nLet\u2019s come together to protect our communities, restore trust, and hold law enforcement accountable. \nThat\u2019s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers. \nThat\u2019s why the American Rescue Plan provided $350 Billion that cities, states, and counties can use to hire more police and invest in proven strategies like community violence interruption\u2014trusted messengers breaking the cycle of violence and trauma and giving young people hope. \nWe should all agree: The answer is not to Defund the police. The answer is to FUND the police with the resources and training they need to protect our communities. \nI ask Democrats and Republicans alike: Pass my budget and keep our neighborhoods safe. \nAnd I will keep doing everything in my power to crack down on gun trafficking and ghost guns you can buy online and make at home\u2014they have no serial numbers and can\u2019t be traced. \nAnd I ask Congress to pass proven measures to reduce gun violence. Pass universal background checks. Why should anyone on a terrorist list be able to purchase a weapon? \nBan assault weapons and high-capacity magazines. \nRepeal the liability shield that makes gun manufacturers the only industry in America that can\u2019t be sued. \nThese laws don\u2019t infringe on the Second Amendment. They save lives.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/lancedb.html"}
+{"id": "3e8ffb678ec1-2", "text": "These laws don\u2019t infringe on the Second Amendment. They save lives. \nThe most fundamental right in America is the right to vote \u2013 and to have it counted. And it\u2019s under assault. \nIn state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \nWe cannot let this happen. \nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence. \nA former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she\u2019s been nominated, she\u2019s received a broad range of support\u2014from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \nWe can do both. At our border, we\u2019ve installed new technology like cutting-edge scanners to better detect drug smuggling. \nWe\u2019ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/lancedb.html"}
+{"id": "3e8ffb678ec1-3", "text": "We\u2019ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \nWe\u2019re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.\nprevious\nFAISS\nnext\nMatchingEngine\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/lancedb.html"}
+{"id": "422cc08f47b4-0", "text": ".ipynb\n.pdf\nQdrant\n Contents \nConnecting to Qdrant from LangChain\nLocal mode\nIn-memory\nOn-disk storage\nOn-premise server deployment\nQdrant Cloud\nReusing the same collection\nSimilarity search\nSimilarity search with score\nMetadata filtering\nMaximum marginal relevance search (MMR)\nQdrant as a Retriever\nCustomizing Qdrant\nQdrant#\nQdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications.\nThis notebook shows how to use functionality related to the Qdrant vector database.\nThere are various modes of how to run Qdrant, and depending on the chosen one, there will be some subtle differences. The options include:\nLocal mode, no server required\nOn-premise server deployment\nQdrant Cloud\nSee the installation instructions.\n!pip install qdrant-client\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nOpenAI API Key: \u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import Qdrant\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html"}
+{"id": "422cc08f47b4-1", "text": "docs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\nConnecting to Qdrant from LangChain#\nLocal mode#\nPython client allows you to run the same code in local mode without running the Qdrant server. That\u2019s great for testing things out and debugging or if you plan to store just a small amount of vectors. The embeddings might be fully kepy in memory or persisted on disk.\nIn-memory#\nFor some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the client is destroyed - usually at the end of your script/notebook.\nqdrant = Qdrant.from_documents(\n docs, embeddings, \n location=\":memory:\", # Local mode with in-memory storage only\n collection_name=\"my_documents\",\n)\nOn-disk storage#\nLocal mode, without using the Qdrant server, may also store your vectors on disk so they\u2019re persisted between runs.\nqdrant = Qdrant.from_documents(\n docs, embeddings, \n path=\"/tmp/local_qdrant\",\n collection_name=\"my_documents\",\n)\nOn-premise server deployment#\nNo matter if you choose to launch Qdrant locally with a Docker container, or select a Kubernetes deployment with the official Helm chart, the way you\u2019re going to connect to such an instance will be identical. You\u2019ll need to provide a URL pointing to the service.\nurl = \"<---qdrant url here --->\"\nqdrant = Qdrant.from_documents(\n docs, embeddings, \n url, prefer_grpc=True, \n collection_name=\"my_documents\",\n)\nQdrant Cloud#", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html"}
+{"id": "422cc08f47b4-2", "text": "collection_name=\"my_documents\",\n)\nQdrant Cloud#\nIf you prefer not to keep yourself busy with managing the infrastructure, you can choose to set up a fully-managed Qdrant cluster on Qdrant Cloud. There is a free forever 1GB cluster included for trying out. The main difference with using a managed version of Qdrant is that you\u2019ll need to provide an API key to secure your deployment from being accessed publicly.\nurl = \"<---qdrant cloud cluster url here --->\"\napi_key = \"<---api key here--->\"\nqdrant = Qdrant.from_documents(\n docs, embeddings, \n url, prefer_grpc=True, api_key=api_key, \n collection_name=\"my_documents\",\n)\nReusing the same collection#\nBoth Qdrant.from_texts and Qdrant.from_documents methods are great to start using Qdrant with LangChain, but they are going to destroy the collection and create it from scratch! If you want to reuse the existing collection, you can always create an instance of Qdrant on your own and pass the QdrantClient instance with the connection details.\ndel qdrant\nimport qdrant_client\nclient = qdrant_client.QdrantClient(\n path=\"/tmp/local_qdrant\", prefer_grpc=True\n)\nqdrant = Qdrant(\n client=client, collection_name=\"my_documents\", \n embeddings=embeddings\n)\nSimilarity search#\nThe simplest scenario for using Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded with the embedding_function and used to find similar documents in Qdrant collection.\nquery = \"What did the president say about Ketanji Brown Jackson\"\nfound_docs = qdrant.similarity_search(query)", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html"}
+{"id": "422cc08f47b4-3", "text": "found_docs = qdrant.similarity_search(query)\nprint(found_docs[0].page_content)\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nSimilarity search with score#\nSometimes we might want to perform the search, but also obtain a relevancy score to know how good is a particular result.\nThe returned distance score is cosine distance. Therefore, a lower score is better.\nquery = \"What did the president say about Ketanji Brown Jackson\"\nfound_docs = qdrant.similarity_search_with_score(query)\ndocument, score = found_docs[0]\nprint(document.page_content)\nprint(f\"\\nScore: {score}\")\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html"}
+{"id": "422cc08f47b4-4", "text": "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nScore: 0.8153784913324512\nMetadata filtering#\nQdrant has an extensive filtering system with rich type support. It is also possible to use the filters in Langchain, by passing an additional param to both the similarity_search_with_score and similarity_search methods.\nfrom qdrant_client.http import models as rest\nquery = \"What did the president say about Ketanji Brown Jackson\"\nfound_docs = qdrant.similarity_search_with_score(query, filter=rest.Filter(...))\nMaximum marginal relevance search (MMR)#\nIf you\u2019d like to look up for some similar documents, but you\u2019d also like to receive diverse results, MMR is method you should consider. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.\nquery = \"What did the president say about Ketanji Brown Jackson\"\nfound_docs = qdrant.max_marginal_relevance_search(query, k=2, fetch_k=10)\nfor i, doc in enumerate(found_docs):\n print(f\"{i + 1}.\", doc.page_content, \"\\n\")\n1. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html"}
+{"id": "422cc08f47b4-5", "text": "Tonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence. \n2. We can\u2019t change how divided we\u2019ve been. But we can change how we move forward\u2014on COVID-19 and other issues we must face together. \nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \nOfficer Mora was 27 years old. \nOfficer Rivera was 22. \nBoth Dominican Americans who\u2019d grown up on the same streets they later chose to patrol as police officers. \nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \nI\u2019ve worked on these issues a long time. \nI know what works: Investing in crime preventionand community police officers who\u2019ll walk the beat, who\u2019ll know the neighborhood, and who can restore trust and safety. \nQdrant as a Retriever#\nQdrant, as all the other vector stores, is a LangChain Retriever, by using cosine similarity.\nretriever = qdrant.as_retriever()\nretriever", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html"}
+{"id": "422cc08f47b4-6", "text": "retriever = qdrant.as_retriever()\nretriever\nVectorStoreRetriever(vectorstore=, search_type='similarity', search_kwargs={})\nIt might be also specified to use MMR as a search strategy, instead of similarity.\nretriever = qdrant.as_retriever(search_type=\"mmr\")\nretriever\nVectorStoreRetriever(vectorstore=, search_type='mmr', search_kwargs={})\nquery = \"What did the president say about Ketanji Brown Jackson\"\nretriever.get_relevant_documents(query)[0]\nDocument(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})\nCustomizing Qdrant#", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html"}
+{"id": "422cc08f47b4-7", "text": "Customizing Qdrant#\nQdrant stores your vector embeddings along with the optional JSON-like payload. Payloads are optional, but since LangChain assumes the embeddings are generated from the documents, we keep the context data, so you can extract the original texts as well.\nBy default, your document is going to be stored in the following payload structure:\n{\n \"page_content\": \"Lorem ipsum dolor sit amet\",\n \"metadata\": {\n \"foo\": \"bar\"\n }\n}\nYou can, however, decide to use different keys for the page content and metadata. That\u2019s useful if you already have a collection that you\u2019d like to reuse. You can always change the\nQdrant.from_documents(\n docs, embeddings, \n location=\":memory:\",\n collection_name=\"my_documents_2\",\n content_payload_key=\"my_page_content_key\",\n metadata_payload_key=\"my_meta\",\n)\n\nprevious\nPinecone\nnext\nRedis\n Contents\n \nConnecting to Qdrant from LangChain\nLocal mode\nIn-memory\nOn-disk storage\nOn-premise server deployment\nQdrant Cloud\nReusing the same collection\nSimilarity search\nSimilarity search with score\nMetadata filtering\nMaximum marginal relevance search (MMR)\nQdrant as a Retriever\nCustomizing Qdrant\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html"}
+{"id": "73ca9f9979b6-0", "text": ".ipynb\n.pdf\nDocArrayHnswSearch\n Contents \nSetup\nUsing DocArrayHnswSearch\nSimilarity search\nSimilarity search with score\nDocArrayHnswSearch#\nDocArrayHnswSearch is a lightweight Document Index implementation provided by Docarray that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite.\nThis notebook shows how to use functionality related to the DocArrayHnswSearch.\nSetup#\nUncomment the below cells to install docarray and get/set your OpenAI api key if you haven\u2019t already done so.\n# !pip install \"docarray[hnswlib]\"\n# Get an OpenAI token: https://platform.openai.com/account/api-keys\n# import os\n# from getpass import getpass\n# OPENAI_API_KEY = getpass()\n# os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY\nUsing DocArrayHnswSearch#\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import DocArrayHnswSearch\nfrom langchain.document_loaders import TextLoader\ndocuments = TextLoader('../../../state_of_the_union.txt').load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\ndb = DocArrayHnswSearch.from_documents(docs, embeddings, work_dir='hnswlib_store/', n_dim=1536)\nSimilarity search#\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = db.similarity_search(query)\nprint(docs[0].page_content)", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/docarray_hnsw.html"}
+{"id": "73ca9f9979b6-1", "text": "docs = db.similarity_search(query)\nprint(docs[0].page_content)\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nSimilarity search with score#\nThe returned distance score is cosine distance. Therefore, a lower score is better.\ndocs = db.similarity_search_with_score(query)\ndocs[0]", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/docarray_hnsw.html"}
+{"id": "73ca9f9979b6-2", "text": "docs = db.similarity_search_with_score(query)\ndocs[0]\n(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', metadata={}),\n 0.36962226)\nimport shutil\n# delete the dir\nshutil.rmtree('hnswlib_store')\nprevious\nDeep Lake\nnext\nDocArrayInMemorySearch\n Contents\n \nSetup\nUsing DocArrayHnswSearch\nSimilarity search\nSimilarity search with score\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/docarray_hnsw.html"}
+{"id": "07a6c8341fbf-0", "text": ".ipynb\n.pdf\nAtlas\nAtlas#\nAtlas is a platform for interacting with both small and internet scale unstructured datasets by Nomic.\nThis notebook shows you how to use functionality related to the AtlasDB vectorstore.\n!pip install spacy\n!python3 -m spacy download en_core_web_sm\n!pip install nomic\nimport time\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import SpacyTextSplitter\nfrom langchain.vectorstores import AtlasDB\nfrom langchain.document_loaders import TextLoader\nATLAS_TEST_API_KEY = '7xDPkYXSYDc1_ErdTPIcoAR9RNd8YDlkS3nVNXcVoIMZ6'\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = SpacyTextSplitter(separator='|')\ntexts = []\nfor doc in text_splitter.split_documents(documents):\n texts.extend(doc.page_content.split('|'))\n \ntexts = [e.strip() for e in texts]\ndb = AtlasDB.from_texts(texts=texts,\n name='test_index_'+str(time.time()), # unique name for your vector store\n description='test_index', #a description for your vector store\n api_key=ATLAS_TEST_API_KEY,\n index_kwargs={'build_topic_model': True})\ndb.project.wait_for_project_lock()\ndb.project\ntest_index_1677255228.136989\n A description for your project 508 datums inserted.\n \n 1 index built.\n Projections\ntest_index_1677255228.136989_index. Status Completed. view online\nProjection ID: db996d77-8981-48a0-897a-ff2c22bbf541\nHide embedded project", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/atlas.html"}
+{"id": "07a6c8341fbf-1", "text": "Hide embedded project\nExplore on atlas.nomic.ai\nprevious\nAnnoy\nnext\nAwaDB\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/atlas.html"}
+{"id": "263f67ef3852-0", "text": ".ipynb\n.pdf\nMyScale\n Contents \nSetting up envrionments\nGet connection info and data schema\nFiltering\nSimilarity search with score\nDeleting your data\nMyScale#\nMyScale is a cloud-based database optimized for AI applications and solutions, built on the open-source ClickHouse.\nThis notebook shows how to use functionality related to the MyScale vector database.\nSetting up envrionments#\n!pip install clickhouse-connect\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nThere are two ways to set up parameters for myscale index.\nEnvironment Variables\nBefore you run the app, please set the environment variable with export:\nexport MYSCALE_URL='' MYSCALE_PORT= MYSCALE_USERNAME= MYSCALE_PASSWORD= ...\nYou can easily find your account, password and other info on our SaaS. For details please refer to this document\nEvery attributes under MyScaleSettings can be set with prefix MYSCALE_ and is case insensitive.\nCreate MyScaleSettings object with parameters\nfrom langchain.vectorstores import MyScale, MyScaleSettings\nconfig = MyScaleSetting(host=\"\", port=8443, ...)\nindex = MyScale(embedding_function, config)\nindex.add_documents(...)\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import MyScale\nfrom langchain.document_loaders import TextLoader\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/myscale.html"}
+{"id": "263f67ef3852-1", "text": "loader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\nfor d in docs:\n d.metadata = {'some': 'metadata'}\ndocsearch = MyScale.from_documents(docs, embeddings)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = docsearch.similarity_search(query)\nInserting data...: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 42/42 [00:18<00:00, 2.21it/s]\nprint(docs[0].page_content)\nAs Frances Haugen, who is here with us tonight, has shown, we must hold social media platforms accountable for the national experiment they\u2019re conducting on our children for profit. \nIt\u2019s time to strengthen privacy protections, ban targeted advertising to children, demand tech companies stop collecting personal data on our children. \nAnd let\u2019s get all Americans the mental health services they need. More people they can turn to for help, and full parity between physical and mental health care. \nThird, support our veterans. \nVeterans are the best of us. \nI\u2019ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home. \nMy administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. \nOur troops in Iraq and Afghanistan faced many dangers.\nGet connection info and data schema#\nprint(str(docsearch))\nFiltering#\nYou can have direct access to myscale SQL where statement. You can write WHERE clause following standard SQL.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/myscale.html"}
+{"id": "263f67ef3852-2", "text": "NOTE: Please be aware of SQL injection, this interface must not be directly called by end-user.\nIf you custimized your column_map under your setting, you search with filter like this:\nfrom langchain.vectorstores import MyScale, MyScaleSettings\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\nfor i, d in enumerate(docs):\n d.metadata = {'doc_id': i}\ndocsearch = MyScale.from_documents(docs, embeddings)\nInserting data...: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 42/42 [00:15<00:00, 2.69it/s]\nSimilarity search with score#\nThe returned distance score is cosine distance. Therefore, a lower score is better.\nmeta = docsearch.metadata_column\noutput = docsearch.similarity_search_with_relevance_scores('What did the president say about Ketanji Brown Jackson?', \n k=4, where_str=f\"{meta}.doc_id<10\")\nfor d, dist in output:\n print(dist, d.metadata, d.page_content[:20] + '...')\n0.252379834651947 {'doc_id': 6, 'some': ''} And I\u2019m taking robus...\n0.25022566318511963 {'doc_id': 1, 'some': ''} Groups of citizens b...\n0.2469480037689209 {'doc_id': 8, 'some': ''} And so many families...\n0.2428302764892578 {'doc_id': 0, 'some': 'metadata'} As Frances Haugen, w...", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/myscale.html"}
+{"id": "263f67ef3852-3", "text": "Deleting your data#\ndocsearch.drop()\nprevious\nCommented out until further notice\nnext\nOpenSearch\n Contents\n \nSetting up envrionments\nGet connection info and data schema\nFiltering\nSimilarity search with score\nDeleting your data\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/myscale.html"}
+{"id": "b09b872537fb-0", "text": ".ipynb\n.pdf\nMilvus\nMilvus#\nMilvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.\nThis notebook shows how to use functionality related to the Milvus vector database.\nTo run, you should have a Milvus instance up and running.\n!pip install pymilvus\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nOpenAI API Key:\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import Milvus\nfrom langchain.document_loaders import TextLoader\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\nvector_db = Milvus.from_documents(\n docs,\n embeddings,\n connection_args={\"host\": \"127.0.0.1\", \"port\": \"19530\"},\n)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = vector_db.similarity_search(query)\ndocs[0].page_content", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/milvus.html"}
+{"id": "b09b872537fb-1", "text": "docs = vector_db.similarity_search(query)\ndocs[0].page_content\n'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.'\nprevious\nMatchingEngine\nnext\nCommented out until further notice\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/milvus.html"}
+{"id": "697837f9a78d-0", "text": ".ipynb\n.pdf\nWeaviate\n Contents \nWeaviate\nSimilarity search with score\nPersistance\nRetriever options\nRetriever options\nMMR\nQuestion Answering with Sources\nWeaviate#\nWeaviate is an open-source vector database. It allows you to store data objects and vector embeddings from your favorite ML-models, and scale seamlessly into billions of data objects.\nThis notebook shows how to use functionality related to the Weaviatevector database.\nSee the Weaviate installation instructions.\n!pip install weaviate-client\nRequirement already satisfied: weaviate-client in /workspaces/langchain/.venv/lib/python3.9/site-packages (3.19.1)\nRequirement already satisfied: requests<2.29.0,>=2.28.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (2.28.2)\nRequirement already satisfied: validators<=0.21.0,>=0.18.2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (0.20.0)\nRequirement already satisfied: tqdm<5.0.0,>=4.59.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (4.65.0)\nRequirement already satisfied: authlib>=1.1.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (1.2.0)\nRequirement already satisfied: cryptography>=3.2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from authlib>=1.1.0->weaviate-client) (40.0.2)", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-1", "text": "Requirement already satisfied: charset-normalizer<4,>=2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (3.1.0)\nRequirement already satisfied: idna<4,>=2.5 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (3.4)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (1.26.15)\nRequirement already satisfied: certifi>=2017.4.17 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (2023.5.7)\nRequirement already satisfied: decorator>=3.4.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from validators<=0.21.0,>=0.18.2->weaviate-client) (5.1.1)\nRequirement already satisfied: cffi>=1.12 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from cryptography>=3.2->authlib>=1.1.0->weaviate-client) (1.15.1)", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-2", "text": "Requirement already satisfied: pycparser in /workspaces/langchain/.venv/lib/python3.9/site-packages (from cffi>=1.12->cryptography>=3.2->authlib>=1.1.0->weaviate-client) (2.21)\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nimport os\nimport getpass\nos.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")\nWEAVIATE_URL = getpass.getpass(\"WEAVIATE_URL:\")\nos.environ[\"WEAVIATE_API_KEY\"] = getpass.getpass(\"WEAVIATE_API_KEY:\")\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import Weaviate\nfrom langchain.document_loaders import TextLoader\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader(\"../../../state_of_the_union.txt\")\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\ndb = Weaviate.from_documents(docs, embeddings, weaviate_url=WEAVIATE_URL, by_text=False)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = db.similarity_search(query)\nprint(docs[0].page_content)\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-3", "text": "Tonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nSimilarity search with score#\nSometimes we might want to perform the search, but also obtain a relevancy score to know how good is a particular result.\nThe returned distance score is cosine distance. Therefore, a lower score is better.\ndocs = db.similarity_search_with_score(query, by_text=False)\ndocs[0]", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-4", "text": "(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', metadata={'_additional': {'vector': [-0.015289668, -0.011418287, -0.018540842, 0.00274522, 0.008310737, 0.014179829, 0.0080104275, -0.0010217049, -0.022327352, -0.0055002323, 0.018958665, 0.0020548347, -0.0044393567, -0.021609223, -0.013709779, -0.004543812, 0.025722157, 0.01821442, 0.031728342, -0.031388864, -0.01051083, -0.029978717, 0.011555385, 0.0009751897, 0.014675993, -0.02102166, 0.0301354, -0.031754456, 0.013526983,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-5", "text": "-0.031754456, 0.013526983, -0.03392191, 0.002800712, -0.0027778621, -0.024259781, -0.006202043, -0.019950991, 0.0176138, -0.0001134321, 0.008343379, 0.034209162, -0.027654583, 0.03149332, -0.0008389079, 0.0053696632, -0.0024644958, -0.016582303, 0.0066720927, -0.005036711, -0.035514854, 0.002942706, 0.02958701, 0.032825127, 0.015694432, -0.019846536, -0.024520919, -0.021974817, -0.0063293483, -0.01081114, -0.0084282495, 0.003025944, -0.010210521, 0.008780787, 0.014793505, -0.006486031, 0.011966679, 0.01774437, -0.006985459, -0.015459408, 0.01625588, -0.016007798, 0.01706541, 0.035567082, 0.0029900377, 0.021543937, -0.0068483613, 0.040868197, -0.010909067, -0.03339963, 0.010954766, -0.014689049, -0.021596165, 0.0025607906, -0.01599474,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-6", "text": "0.0025607906, -0.01599474, -0.017757427, -0.0041651614, 0.010752384, 0.0053598704, -0.00019248774, 0.008480477, -0.010517359, -0.005017126, 0.0020434097, 0.011699011, 0.0051379027, 0.021687564, -0.010830725, 0.020734407, -0.006606808, 0.029769806, 0.02817686, -0.047318324, 0.024338122, -0.001150642, -0.026231378, -0.012325744, -0.0318328, -0.0094989175, -0.00897664, 0.004736402, 0.0046482678, 0.0023241339, -0.005826656, 0.0072531262, 0.015498579, -0.0077819317, -0.011953622, -0.028934162, -0.033974137, -0.01574666, 0.0086306315, -0.029299757, 0.030213742, -0.0033148287, 0.013448641, -0.013474754, 0.015851116, 0.0076578907, -0.037421167, -0.015185213, 0.010719741, -0.014636821, 0.0001918757, 0.011783881, 0.0036330915, -0.02132197,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-7", "text": "0.0036330915, -0.02132197, 0.0031010215, 0.0024334856, -0.0033229894, 0.050086394, 0.0031973163, -0.01115062, 0.004837593, 0.01298512, -0.018645298, -0.02992649, 0.004837593, 0.0067634913, 0.02992649, 0.0145062525, 0.00566018, -0.0017055618, -0.0056667086, 0.012697867, 0.0150677, -0.007559964, -0.01991182, -0.005268472, -0.008650217, -0.008702445, 0.027550127, 0.0018296026, 0.0018589807, -0.033295177, 0.0036265631, -0.0060290387, 0.014349569, 0.019898765, 0.00023339267, 0.0034568228, -0.018958665, 0.012031963, 0.005186866, 0.020747464, -0.03817847, 0.028202975, -0.01340947, 0.00091643346, 0.014884903, -0.02314994, -0.024468692, 0.0004859627, 0.018828096, 0.012906778, 0.027941836, 0.027550127, -0.015028529, 0.018606128,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-8", "text": "-0.015028529, 0.018606128, 0.03449641, -0.017757427, -0.016020855, -0.012142947, 0.025304336, 0.00821281, -0.0025461016, -0.01902395, -0.635507, -0.030083172, 0.0177052, -0.0104912445, 0.012502013, -0.0010747487, 0.00465806, 0.020825805, -0.006887532, 0.013892576, -0.019977106, 0.029952602, 0.0012004217, -0.015211326, -0.008708973, -0.017809656, 0.008578404, -0.01612531, 0.022614606, -0.022327352, -0.032616217, 0.0050693536, -0.020629952, -0.01357921, 0.011477043, 0.0013938275, -0.0052390937, 0.0142581705, -0.013200559, 0.013252786, -0.033582427, 0.030579336, -0.011568441, 0.0038387382, 0.049564116, 0.016791213, -0.01991182, 0.010889481, -0.0028251936, 0.035932675, -0.02183119, -0.008611047, 0.025121538, 0.008349908, 0.00035641342,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-9", "text": "0.008349908, 0.00035641342, 0.009028868, 0.007631777, -0.01298512, -0.0015350056, 0.009982024, -0.024207553, -0.003332782, 0.006283649, 0.01868447, -0.010732798, -0.00876773, -0.0075273216, -0.016530076, 0.018175248, 0.016020855, -0.00067284, 0.013461698, -0.0065904865, -0.017809656, -0.014741276, 0.016582303, -0.0088526, 0.0046482678, 0.037473395, -0.02237958, 0.010112594, 0.022549322, 9.680491e-05, -0.0059082615, 0.020747464, -0.026923396, 0.01162067, -0.0074816225, 0.00024277734, 0.011842638, 0.016921783, -0.019285088, 0.005565517, 0.0046907025, 0.018109964, 0.0028676286, -0.015080757, -0.01536801, 0.0024726565, 0.020943318, 0.02187036, 0.0037767177, 0.018997835, -0.026766712, 0.005026919, 0.015942514, 0.0097469995,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-10", "text": "0.015942514, 0.0097469995, -0.0067830766, 0.023828901, -0.01523744, -0.0121494755, 0.00744898, 0.010445545, -0.011006993, -0.0032789223, 0.020394927, -0.017796598, -0.0029116957, 0.02318911, -0.031754456, -0.018188305, -0.031441092, -0.030579336, 0.0011832844, 0.0065023527, -0.027053965, 0.009198609, 0.022079272, -0.027785152, 0.005846241, 0.013500868, 0.016699815, 0.010445545, -0.025265165, -0.004396922, 0.0076774764, 0.014597651, -0.009851455, -0.03637661, 0.0004745379, -0.010112594, -0.009205136, 0.01578583, 0.015211326, -0.0011653311, -0.0015847852, 0.01489796, -0.01625588, -0.0029067993, -0.011411758, 0.0046286825, 0.0036330915, -0.0034143878, 0.011894866, -0.03658552, 0.007266183, -0.015172156, -0.02038187, -0.033739112,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-11", "text": "-0.02038187, -0.033739112, 0.0018948873, -0.011379116, -0.0020923733, -0.014075373, 0.01970291, 0.0020352493, -0.0075273216, -0.02136114, 0.0027974476, -0.009577259, -0.023815846, 0.024847344, 0.014675993, -0.019454828, -0.013670608, 0.011059221, -0.005438212, 0.0406854, 0.0006218364, -0.024494806, -0.041259903, 0.022013986, -0.0040019494, -0.0052097156, 0.015798887, 0.016190596, 0.0003794671, -0.017444061, 0.012325744, 0.024769, 0.029482553, -0.0046547963, -0.015955571, -0.018397218, -0.0102431625, 0.020577725, 0.016190596, -0.02038187, 0.030030945, -0.01115062, 0.0032560725, -0.014819618, 0.005647123, -0.0032560725, 0.0038909658, 0.013311543, 0.024285894, -0.0045699263, -0.010112594, 0.009237779, 0.008728559, 0.0423828, 0.010909067,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-12", "text": "0.0423828, 0.010909067, 0.04225223, -0.031806685, -0.013696723, -0.025787441, 0.00838255, -0.008715502, 0.006776548, 0.01825359, -0.014480138, -0.014427911, -0.017600743, -0.030004831, 0.0145845935, 0.013762007, -0.013226673, 0.004168425, 0.0047951583, -0.026923396, 0.014675993, 0.0055851024, 0.015616091, -0.012306159, 0.007670948, 0.038439605, -0.015759716, 0.00016178355, 0.01076544, -0.008232395, -0.009942854, 0.018801982, -0.0025314125, 0.030709906, -0.001442791, -0.042617824, -0.007409809, -0.013109161, 0.031101612, 0.016229765, 0.006162872, 0.017901054, -0.0063619902, -0.0054577976, 0.01872364, -0.0032430156, 0.02966535, 0.006495824, 0.0011008625, -0.00024318536, -0.007011573, -0.002746852, -0.004298995, 0.007710119, 0.03407859,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-13", "text": "0.007710119, 0.03407859, -0.008898299, -0.008565348, 0.030527107, -0.0003027576, 0.025082368, 0.0405026, 0.03867463, 0.0014117807, -0.024076983, 0.003933401, -0.009812284, 0.00829768, -0.0074293944, 0.0061530797, -0.016647588, -0.008147526, -0.015629148, 0.02055161, 0.000504324, 0.03157166, 0.010112594, -0.009009283, 0.026557801, -0.013997031, -0.0071878415, 0.009414048, -0.03480978, 0.006626393, 0.013827291, -0.011444401, -0.011823053, -0.0042957305, -0.016229765, -0.014192886, 0.026531687, -0.012534656, -0.0056569157, -0.0010331298, 0.007977786, 0.0033654245, -0.017352663, 0.034626983, -0.011803466, 0.009035396, 0.0005288057, 0.020421041, 0.013115689, -0.0152504975, -0.0111114485, 0.032355078, 0.0025542623, -0.0030226798, -0.00074261305,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-14", "text": "-0.0030226798, -0.00074261305, 0.030892702, -0.026218321, 0.0062803845, -0.018031623, -0.021504767, -0.012834964, 0.009009283, -0.0029198565, -0.014349569, -0.020434098, 0.009838398, -0.005993132, -0.013618381, -0.031597774, -0.019206747, 0.00086583785, 0.15835446, 0.033765227, 0.00893747, 0.015119928, -0.019128405, 0.0079582, -0.026270548, -0.015877228, 0.014153715, -0.011960151, 0.007853745, 0.006972402, -0.014101488, 0.02456009, 0.015119928, -0.0018850947, 0.019010892, -0.0046188897, -0.0050954674, -0.03548874, -0.01608614, -0.00324628, 0.009466276, 0.031911142, 7.033402e-05, -0.025095424, 0.020225188, 0.014832675, 0.023228282, -0.011829581, -0.011300774, -0.004073763, 0.0032544404, -0.0025983294, -0.020943318, 0.019650683, -0.0074424515,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-15", "text": "0.019650683, -0.0074424515, -0.0030977572, 0.0073379963, -0.00012455089, 0.010230106, -0.0007254758, -0.0025052987, -0.009681715, 0.03439196, -0.035123147, -0.0028806855, 0.012828437, 0.00018646932, 0.0066133365, 0.025539361, -0.00055736775, -0.025356563, -0.004537284, -0.007031158, 0.015825002, -0.013076518, 0.00736411, -0.00075689406, 0.0076578907, -0.019337315, -0.0024187965, -0.0110331075, -0.01187528, 0.0013048771, 0.0009711094, -0.027863493, -0.020616895, -0.0024481746, -0.0040802914, 0.014571536, -0.012306159, -0.037630077, 0.012652168, 0.009068039, -0.0018263385, 0.0371078, -0.0026831995, 0.011333417, -0.011548856, -0.0059049972, -0.025186824, 0.0069789304, -0.010993936, -0.0009066408, 0.0002619547, 0.01727432, -0.008082241,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-16", "text": "0.01727432, -0.008082241, -0.018645298, 0.024507863, 0.0030895968, -0.0014656406, 0.011137563, -0.025513247, -0.022967143, -0.002033617, 0.006887532, 0.016621474, -0.019337315, -0.0030618508, 0.0014697209, -0.011679426, -0.003597185, -0.0049844836, -0.012332273, 0.009068039, 0.009407519, 0.027080078, -0.011215905, -0.0062542707, -0.0013114056, -0.031911142, 0.011209376, 0.009903682, -0.007351053, 0.021335026, -0.005510025, 0.0062053073, -0.010869896, -0.0045601334, 0.017561574, -0.024847344, 0.04115545, -0.00036457402, -0.0061400225, 0.013037347, -0.005480647, 0.005947433, 0.020799693, 0.014702106, 0.03272067, 0.026701428, -0.015550806, -0.036193814, -0.021126116, -0.005412098, -0.013076518, 0.027080078, 0.012900249, -0.0073379963, -0.015119928,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-17", "text": "-0.0073379963, -0.015119928, -0.019781252, 0.0062346854, -0.03266844, 0.025278222, -0.022797402, -0.0028415148, 0.021452539, -0.023162996, 0.005170545, -0.022314297, 0.011215905, -0.009838398, -0.00033233972, 0.0019650683, 0.0026326037, 0.009753528, -0.0029639236, 0.021126116, 0.01944177, -0.00044883206, -0.00961643, 0.008846072, -0.0035775995, 0.02352859, -0.0020956376, 0.0053468137, 0.013305014, 0.0006418298, 0.023802789, 0.013122218, -0.0031548813, -0.027471786, 0.005046504, 0.008545762, 0.011261604, -0.01357921, -0.01110492, -0.014845733, -0.035384286, -0.02550019, 0.008154054, -0.0058331843, -0.008702445, -0.007311882, -0.006525202, 0.03817847, 0.00372449, 0.022914914, -0.0018981516, 0.031545546, -0.01051083, 0.013801178, -0.006296706,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-18", "text": "0.013801178, -0.006296706, -0.00025052988, -0.01795328, -0.026296662, 0.0017659501, 0.021883417, 0.0028937424, 0.00495837, -0.011888337, -0.008950527, -0.012058077, 0.020316586, 0.00804307, -0.0068483613, -0.0038387382, 0.019715967, -0.025069311, -0.000797697, -0.04507253, -0.009179023, -0.016242823, 0.013553096, -0.0019014158, 0.010223578, 0.0062934416, -5.5644974e-05, -0.038282923, -0.038544063, -0.03162389, -0.006815719, 0.009936325, 0.014192886, 0.02277129, -0.006972402, -0.029769806, 0.034862008, 0.01217559, -0.0037179615, 0.0008666539, 0.008924413, -0.026296662, -0.012678281, 0.014480138, 0.020734407, -0.012103776, -0.037499506, 0.022131499, 0.015028529, -0.033843566, 0.00020187242, 0.002650557, -0.0015113399, 0.021570051, -0.008284623,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-19", "text": "0.021570051, -0.008284623, -0.003793039, -0.013422526, -0.009655601, -0.0016614947, -0.02388113, 0.00114901, 0.0034405016, 0.02796795, -0.039118566, 0.0023975791, -0.010608757, 0.00093438674, 0.0017382042, -0.02047327, 0.026283605, -0.020799693, 0.005947433, -0.014349569, 0.009890626, -0.022719061, -0.017248206, 0.0042565595, 0.022327352, -0.015681375, -0.013840348, 6.502964e-05, 0.015485522, -0.002678303, -0.0047984226, -0.012182118, -0.001512972, 0.013931747, -0.009642544, 0.012652168, -0.012932892, -0.027759038, -0.01085031, 0.0050236546, -0.009675186, -0.00893747, -0.0051770736, 0.036011018, 0.003528636, -0.001008648, -0.015811944, -0.008865656, 0.012364916, 0.016621474, -0.01340947, 0.03219839, 0.032955695, -0.021517823, 0.00372449,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-20", "text": "-0.021517823, 0.00372449, -0.045124754, 0.015589978, -0.033582427, -0.01642562, -0.009609901, -0.031179955, 0.0012591778, -0.011176733, -0.018658355, -0.015224383, 0.014884903, 0.013083046, 0.0063587264, -0.008238924, -0.008917884, -0.003877909, 0.022836573, -0.004374072, -0.031127727, 0.02604858, -0.018136078, 0.000769951, -0.002312709, -0.025095424, -0.010621814, 0.013207087, 0.013944804, -0.0070899143, -0.022183727, -0.0028088724, -0.011424815, 0.026087752, -0.0058625625, -0.020186016, -0.010217049, 0.015315781, -0.012580355, 0.01374895, 0.004948577, -0.0021854038, 0.023215225, 0.00207442, 0.029639237, 0.01391869, -0.015811944, -0.005356606, -0.022327352, -0.021844247, -0.008310737, -0.020786636, -0.022484036, 0.011411758, 0.005826656, 0.012188647,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-21", "text": "0.005826656, 0.012188647, -0.020394927, -0.0013024289, -0.027315103, -0.017000126, -0.0010600596, -0.0019014158, 0.016712872, 0.0012673384, 0.02966535, 0.02911696, -0.03081436, 0.025552418, 0.0014215735, -0.02510848, 0.020277414, -0.02672754, 0.01829276, 0.03381745, -0.013957861, 0.0049094064, 0.033556316, 0.005167281, 0.0176138, 0.014140658, -0.0043708077, -0.0095446175, 0.012952477, 0.007853745, -0.01034109, 0.01804468, 0.0038322096, -0.04959023, 0.0023078127, 0.0053794556, -0.015106871, -0.03225062, -0.010073422, 0.007285768, 0.0056079524, -0.009002754, -0.014362626, 0.010909067, 0.009779641, -0.02796795, 0.013246258, 0.025474075, -0.001247753, 0.02442952, 0.012802322, -0.032276735, 0.0029802448, 0.014179829, 0.010321504,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-22", "text": "0.014179829, 0.010321504, 0.0053337566, -0.017156808, -0.010439017, 0.034444187, -0.010393318, -0.006042096, -0.018566957, 0.004517698, -0.011228961, -0.009015812, -0.02089109, 0.022484036, 0.0029867734, -0.029064732, -0.010236635, -0.0006761042, -0.029038617, 0.004367544, -0.012293102, 0.0017528932, -0.023358852, 0.02217067, 0.012606468, -0.008160583, -0.0104912445, -0.0034894652, 0.011078807, 0.00050922035, 0.015759716, 0.23774062, -0.0019291617, 0.006218364, 0.013762007, -0.029900376, 0.018188305, 0.0092965355, 0.0040574414, -0.014976301, -0.006228157, -0.016647588, 0.0035188433, -0.01919369, 0.0037506039, 0.029247528, -0.014532366, -0.049773026, -0.019624569, -0.034783665, -0.015028529, 0.0097469995, 0.016281994, 0.0047135525, -0.011294246,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-23", "text": "0.0047135525, -0.011294246, 0.011477043, 0.015485522, 0.03426139, 0.014323455, 0.011052692, -0.008362965, -0.037969556, -0.00252162, -0.013709779, -0.0030292084, -0.016569246, -0.013879519, 0.0011849166, -0.0016925049, 0.009753528, 0.008349908, -0.008245452, 0.033007924, -0.0035873922, -0.025461018, 0.016791213, 0.05410793, -0.005950697, -0.011672897, -0.0072335405, 0.013814235, -0.0593307, -0.008624103, 0.021400312, 0.034235276, 0.015642203, -0.020068504, 0.03136275, 0.012567298, -0.010419431, 0.027445672, -0.031754456, 0.014219, -0.0075403787, 0.03812624, 0.0009988552, 0.038752973, -0.018005509, 0.013670608, 0.045882057, -0.018841153, -0.031650003, 0.010628343, -0.00459604, -0.011999321, -0.028202975, -0.018593071, 0.029743692, 0.021857304,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-24", "text": "0.029743692, 0.021857304, 0.01438874, 0.00014128008, -0.006156344, -0.006691678, 0.01672593, -0.012821908, -0.0024367499, -0.03219839, 0.0058233915, -0.0056405943, -0.009381405, 0.0064044255, 0.013905633, -0.011228961, -0.0013481282, -0.014023146, 0.00016239559, -0.0051901303, 0.0025265163, 0.023619989, -0.021517823, 0.024703717, -0.025643816, 0.040189236, 0.016295051, -0.0040411204, -0.0113595305, 0.0029981981, -0.015589978, 0.026479458, 0.0067439056, -0.035775993, -0.010550001, -0.014767391, -0.009897154, -0.013944804, -0.0147543335, 0.015798887, -0.02456009, -0.0018850947, 0.024442578, 0.0019715966, -0.02422061, -0.02945644, -0.003443766, 0.0004945313, 0.0011522742, -0.020773578, -0.011777353, 0.008173639, -0.012325744, -0.021348083,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-25", "text": "-0.012325744, -0.021348083, 0.0036461484, 0.0063228197, 0.00028970066, -0.0036200345, -0.021596165, -0.003949722, -0.0006034751, 0.007305354, -0.023424136, 0.004834329, -0.008833014, -0.013435584, 0.0026097542, -0.0012240873, -0.0028349862, -0.01706541, 0.027863493, -0.026414175, -0.011783881, 0.014075373, -0.005634066, -0.006313027, -0.004638475, -0.012495484, 0.022836573, -0.022719061, -0.031284407, -0.022405695, -0.017352663, 0.021113059, -0.03494035, 0.002772966, 0.025643816, -0.0064240107, -0.009897154, 0.0020711557, -0.16409951, 0.009688243, 0.010393318, 0.0033262535, 0.011059221, -0.012919835, 0.0014493194, -0.021857304, -0.0075730206, -0.0020695236, 0.017822713, 0.017417947, -0.034835894, -0.009159437, -0.0018573486, -0.0024840813,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-26", "text": "-0.0018573486, -0.0024840813, -0.022444865, 0.0055687814, 0.0037767177, 0.0033915383, 0.0301354, -0.012227817, 0.0021854038, -0.042878963, 0.021517823, -0.010419431, -0.0051183174, 0.01659536, 0.0017333078, -0.00727924, -0.0020026069, -0.0012493852, 0.031441092, 0.0017431005, 0.008702445, -0.0072335405, -0.020081561, -0.012423672, -0.0042239176, 0.031049386, 0.04324456, 0.02550019, 0.014362626, -0.0107393265, -0.0037538682, -0.0061791935, -0.006737377, 0.011548856, -0.0166737, -0.012828437, -0.003375217, -0.01642562, -0.011424815, 0.007181313, 0.017600743, -0.0030226798, -0.014192886, 0.0128937205, -0.009975496, 0.0051444313, -0.0044654706, -0.008826486, 0.004158633, 0.004971427, -0.017835768, 0.025017083, -0.021792019, 0.013657551,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-27", "text": "-0.021792019, 0.013657551, -0.01872364, 0.009100681, -0.0079582, -0.011640254, -0.01093518, -0.0147543335, -0.005000805, 0.02345025, -0.028908048, 0.0104912445, -0.00753385, 0.017561574, -0.012025435, 0.042670052, -0.0041978033, 0.0013056932, -0.009263893, -0.010941708, -0.004471999, 0.01008648, -0.002578744, -0.013931747, 0.018619185, -0.04029369, -0.00025909848, 0.0030063589, 0.003149985, 0.011091864, 0.006495824, 0.00026583098, 0.0045503406, -0.007586078, -0.0007475094, -0.016856499, -0.003528636, 0.038282923, -0.0010494508, 0.024494806, 0.012593412, 0.032433417, -0.003203845, 0.005947433, -0.019937934, -0.00017800271, 0.027706811, 0.03047488, 0.02047327, 0.0019258976, -0.0068940604, -0.0014990991, 0.013305014, -0.007690533, 0.058808424,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-28", "text": "-0.007690533, 0.058808424, -0.0016859764, -0.0044622063, -0.0037734534, 0.01578583, -0.0018459238, -0.1196015, -0.0007075225, 0.0030341048, 0.012306159, -0.0068483613, 0.01851473, 0.015315781, 0.031388864, -0.015563863, 0.04776226, -0.008199753, -0.02591801, 0.00546759, -0.004915935, 0.0050824108, 0.0027011528, -0.009205136, -0.016712872, -0.0033409426, 0.0043218443, -0.018279705, 0.00876773, 0.0050138617, -0.009688243, -0.017783541, -0.018645298, -0.010380261, 0.018606128, 0.0077492893, 0.007324939, -0.012704396, -0.002692992, -0.01259994, -0.0076970616, -0.013814235, -0.0004365912, -0.023606932, -0.020186016, 0.025330449, -0.00991674, -0.0048278007, -0.019350372, 0.015433294, -0.0056144805, -0.0034927295, -0.00043455104, 0.008611047,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-29", "text": "-0.00043455104, 0.008611047, 0.025748271, 0.022353467, -0.020747464, -0.015759716, 0.029038617, -0.000377631, -0.028725252, 0.018109964, -0.0016125311, -0.022719061, -0.009133324, -0.033060152, 0.011248547, -0.0019797573, -0.007181313, 0.0018867267, 0.0070899143, 0.004077027, 0.0055328747, -0.014245113, -0.021217514, -0.006750434, -0.038230695, 0.013233202, 0.014219, -0.017692143, 0.024742888, -0.008833014, -0.00753385, -0.026923396, -0.0021527617, 0.013135274, -0.018070793, -0.013500868, -0.0016696552, 0.011568441, -0.03230285, 0.023646105, 0.0111114485, -0.015172156, 0.0257091, 0.0045699263, -0.00919208, 0.021517823, 0.037838988, 0.00787333, -0.007755818, -0.028281316, 0.011170205, -0.005412098, -0.016321165, 0.009929797, 0.004609097,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-30", "text": "0.009929797, 0.004609097, -0.03047488, 0.002688096, -0.07264877, 0.024455635, -0.020930262, -0.015381066, -0.0033148287, 0.027236762, 0.0014501355, -0.014101488, -0.024076983, 0.026218321, -0.009009283, 0.019624569, 0.0020646274, -0.009081096, -0.01565526, -0.003358896, 0.048571788, -0.004857179, 0.022444865, 0.024181439, 0.00080708164, 0.024873456, 3.463147e-05, 0.0010535312, -0.017940223, 0.0012159267, -0.011065749, 0.008258509, -0.018527785, -0.022797402, 0.012377972, -0.002087477, 0.010791554, 0.022288183, 0.0048604426, -0.032590102, 0.013709779, 0.004922463, 0.020055447, -0.0150677, -0.0057222005, -0.036246043, 0.0021364405, 0.021387255, -0.013435584, 0.010732798, 0.0075534354, -0.00061612396, -0.002018928, -0.004432828, -0.032746784,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-31", "text": "-0.004432828, -0.032746784, 0.025513247, -0.0025852725, 0.014467081, -0.008617575, -0.019755138, 0.003966043, -0.0033915383, 0.0004088452, -0.025173767, 0.02796795, 0.0023763615, 0.0052358294, 0.017796598, 0.014806561, 0.0150024155, -0.005859298, 0.01259994, 0.021726735, -0.026466403, -0.017457118, -0.0025493659, 0.0070899143, 0.02668837, 0.015485522, -0.011588027, 0.01906312, -0.003388274, -0.010210521, 0.020956375, 0.028620796, -0.018540842, 0.0025722156, 0.0110331075, -0.003992157, 0.020930262, 0.008487006, 0.0016557822, -0.0009882465, 0.0062640635, -0.016242823, -0.0007785196, -0.0007213955, 0.018971723, 0.021687564, 0.0039464575, -0.01574666, 0.011783881, -0.0019797573, -0.013383356, -0.002706049, 0.0037734534, 0.020394927,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-32", "text": "0.0037734534, 0.020394927, -0.00021931567, 0.0041814824, 0.025121538, -0.036246043, -0.019428715, -0.023802789, 0.014845733, 0.015420238, 0.019650683, 0.008186696, 0.025304336, -0.03204171, 0.01774437, 0.0021233836, -0.008434778, -0.0059441687, 0.038335152, 0.022653777, -0.0066002794, 0.02149171, 0.015093814, 0.025382677, -0.007579549, 0.0030357367, -0.0014117807, -0.015341896, 0.014545423, 0.007135614, -0.0113595305, -0.04387129, 0.016308108, -0.008186696, -0.013370299, -0.014297341, 0.017431004, -0.022666834, 0.039458048, 0.0032005806, -0.02081275, 0.008526176, -0.0019307939, 0.024024757, 0.009068039, 0.00953156, 0.010608757, 0.013801178, 0.035932675, -0.015185213, -0.0038322096, -0.012462842, -0.03655941, 0.0013946436, 0.00025726235,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-33", "text": "0.0013946436, 0.00025726235, 0.008016956, -0.0042565595, 0.008447835, 0.0038191527, -0.014702106, 0.02196176, 0.0052097156, -0.010869896, 0.0051640165, 0.030840475, -0.041468814, 0.009250836, -0.018997835, 0.020107675, 0.008421721, -0.016373392, 0.004602568, 0.0327729, -0.00812794, 0.001581521, 0.019350372, 0.016112253, 0.02132197, 0.00043944738, -0.01472822, -0.025735214, -0.03313849, 0.0033817457, 0.028855821, -0.016033912, 0.0050791465, -0.01808385]}, 'source': '../../../state_of_the_union.txt'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-34", "text": "0.8154189703772676)\nPersistance#\nAnything uploaded to weaviate is automatically persistent into the database. You do not need to call any specific method or pass any param for this to happen.\nRetriever options#\nRetriever options#\nThis section goes over different options for how to use Weaviate as a retriever.\nMMR#\nIn addition to using similarity search in the retriever object, you can also use mmr.\nretriever = db.as_retriever(search_type=\"mmr\")\nretriever.get_relevant_documents(query)[0]\nDocument(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})\nQuestion Answering with Sources#\nThis section goes over how to do question-answering with sources over an Index. It does this by using the RetrievalQAWithSourcesChain, which does the lookup of the documents from an Index.\nfrom langchain.chains import RetrievalQAWithSourcesChain\nfrom langchain import OpenAI", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "697837f9a78d-35", "text": "from langchain.chains import RetrievalQAWithSourcesChain\nfrom langchain import OpenAI\nwith open(\"../../../state_of_the_union.txt\") as f:\n state_of_the_union = f.read()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ntexts = text_splitter.split_text(state_of_the_union)\ndocsearch = Weaviate.from_texts(\n texts,\n embeddings,\n weaviate_url=WEAVIATE_URL,\n by_text=False,\n metadatas=[{\"source\": f\"{i}-pl\"} for i in range(len(texts))],\n)\nchain = RetrievalQAWithSourcesChain.from_chain_type(\n OpenAI(temperature=0), chain_type=\"stuff\", retriever=docsearch.as_retriever()\n)\nchain(\n {\"question\": \"What did the president say about Justice Breyer\"},\n return_only_outputs=True,\n)\n{'answer': \" The president honored Justice Breyer for his service and mentioned his legacy of excellence. He also nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to continue Justice Breyer's legacy.\\n\",\n 'sources': '31-pl, 34-pl'}\nprevious\nVectara\nnext\nZilliz\n Contents\n \nWeaviate\nSimilarity search with score\nPersistance\nRetriever options\nRetriever options\nMMR\nQuestion Answering with Sources\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"}
+{"id": "869fee634a86-0", "text": ".ipynb\n.pdf\nZilliz\nZilliz#\nZilliz Cloud is a fully managed service on cloud for LF AI Milvus\u00ae,\nThis notebook shows how to use functionality related to the Zilliz Cloud managed vector database.\nTo run, you should have a Zilliz Cloud instance up and running. Here are the installation instructions\n!pip install pymilvus\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nOpenAI API Key:\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\n# replace \nZILLIZ_CLOUD_URI = \"\" # example: \"https://in01-17f69c292d4a5sa.aws-us-west-2.vectordb.zillizcloud.com:19536\"\nZILLIZ_CLOUD_USERNAME = \"\" # example: \"username\"\nZILLIZ_CLOUD_PASSWORD = \"\" # example: \"*********\"\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import Milvus\nfrom langchain.document_loaders import TextLoader\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\nvector_db = Milvus.from_documents(\n docs,\n embeddings,\n connection_args={\n \"uri\": ZILLIZ_CLOUD_URI,\n \"user\": ZILLIZ_CLOUD_USERNAME,\n \"password\": ZILLIZ_CLOUD_PASSWORD,\n \"secure\": True", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/zilliz.html"}
+{"id": "869fee634a86-1", "text": "\"password\": ZILLIZ_CLOUD_PASSWORD,\n \"secure\": True\n }\n)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = vector_db.similarity_search(query)\ndocs[0].page_content\n'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.'\nprevious\nWeaviate\nnext\nRetrievers\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/zilliz.html"}
+{"id": "335384951c0f-0", "text": ".ipynb\n.pdf\nTigris\n Contents \nInitialize Tigris vector store\nSimilarity Search\nSimilarity Search with score (vector distance)\nTigris#\nTigris is an open source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications.\nTigris eliminates the infrastructure complexity of managing, operating, and synchronizing multiple tools, allowing you to focus on building great applications instead.\nThis notebook guides you how to use Tigris as your VectorStore\nPre requisites\nAn OpenAI account. You can sign up for an account here\nSign up for a free Tigris account. Once you have signed up for the Tigris account, create a new project called vectordemo. Next, make a note of the Uri for the region you\u2019ve created your project in, the clientId and clientSecret. You can get all this information from the Application Keys section of the project.\nLet\u2019s first install our dependencies:\n!pip install tigrisdb openapi-schema-pydantic openai tiktoken\nWe will load the OpenAI api key and Tigris credentials in our environment\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nos.environ['TIGRIS_PROJECT'] = getpass.getpass('Tigris Project Name:')\nos.environ['TIGRIS_CLIENT_ID'] = getpass.getpass('Tigris Client Id:')\nos.environ['TIGRIS_CLIENT_SECRET'] = getpass.getpass('Tigris Client Secret:')\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import Tigris\nfrom langchain.document_loaders import TextLoader\nInitialize Tigris vector store#\nLet\u2019s import our test dataset:", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/tigris.html"}
+{"id": "335384951c0f-1", "text": "Initialize Tigris vector store#\nLet\u2019s import our test dataset:\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\nvector_store = Tigris.from_documents(docs, embeddings, index_name=\"my_embeddings\")\nSimilarity Search#\nquery = \"What did the president say about Ketanji Brown Jackson\"\nfound_docs = vector_store.similarity_search(query)\nprint(found_docs)\nSimilarity Search with score (vector distance)#\nquery = \"What did the president say about Ketanji Brown Jackson\"\nresult = vector_store.similarity_search_with_score(query)\nfor (doc, score) in result:\n print(f\"document={doc}, score={score}\")\nprevious\nTair\nnext\nTypesense\n Contents\n \nInitialize Tigris vector store\nSimilarity Search\nSimilarity Search with score (vector distance)\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/tigris.html"}
+{"id": "b308e69412e9-0", "text": ".ipynb\n.pdf\nChroma\n Contents \nSimilarity search with score\nPersistance\nInitialize PeristedChromaDB\nPersist the Database\nLoad the Database from disk, and create the chain\nRetriever options\nMMR\nUpdating a Document\nChroma#\nChroma is a database for building AI applications with embeddings.\nThis notebook shows how to use functionality related to the Chroma vector database.\n!pip install chromadb\n# get a token: https://platform.openai.com/account/api-keys\nfrom getpass import getpass\nOPENAI_API_KEY = getpass()\n \u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\nimport os\nos.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import Chroma\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\ndb = Chroma.from_documents(docs, embeddings)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = db.similarity_search(query)\nUsing embedded DuckDB without persistence: data will be transient\nprint(docs[0].page_content)\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/chroma.html"}
+{"id": "b308e69412e9-1", "text": "Tonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nSimilarity search with score#\nThe returned distance score is cosine distance. Therefore, a lower score is better.\ndocs = db.similarity_search_with_score(query)\ndocs[0]\n(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}),\n 0.3949805498123169)\nPersistance#\nThe below steps cover how to persist a ChromaDB instance\nInitialize PeristedChromaDB#", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/chroma.html"}
+{"id": "b308e69412e9-2", "text": "Initialize PeristedChromaDB#\nCreate embeddings for each chunk and insert into the Chroma vector database. The persist_directory argument tells ChromaDB where to store the database when it\u2019s persisted.\n# Embed and store the texts\n# Supplying a persist_directory will store the embeddings on disk\npersist_directory = 'db'\nembedding = OpenAIEmbeddings()\nvectordb = Chroma.from_documents(documents=docs, embedding=embedding, persist_directory=persist_directory)\nRunning Chroma using direct local API.\nNo existing DB found in db, skipping load\nNo existing DB found in db, skipping load\nPersist the Database#\nWe should call persist() to ensure the embeddings are written to disk.\nvectordb.persist()\nvectordb = None\nPersisting DB to disk, putting it in the save folder db\nPersistentDuckDB del, about to run persist\nPersisting DB to disk, putting it in the save folder db\nLoad the Database from disk, and create the chain#\nBe sure to pass the same persist_directory and embedding_function as you did when you instantiated the database. Initialize the chain we will use for question answering.\n# Now we can load the persisted database from disk, and use it as normal. \nvectordb = Chroma(persist_directory=persist_directory, embedding_function=embedding)\nRunning Chroma using direct local API.\nloaded in 4 embeddings\nloaded in 1 collections\nRetriever options#\nThis section goes over different options for how to use Chroma as a retriever.\nMMR#\nIn addition to using similarity search in the retriever object, you can also use mmr.\nretriever = db.as_retriever(search_type=\"mmr\")\nretriever.get_relevant_documents(query)[0]", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/chroma.html"}
+{"id": "b308e69412e9-3", "text": "retriever.get_relevant_documents(query)[0]\nDocument(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})\nUpdating a Document#\nThe update_document function allows you to modify the content of a document in the Chroma instance after it has been added. Let\u2019s see an example of how to use this function.\n# Import Document class\nfrom langchain.docstore.document import Document\n# Initial document content and id\ninitial_content = \"This is an initial document content\"\ndocument_id = \"doc1\"\n# Create an instance of Document with initial content and metadata\noriginal_doc = Document(page_content=initial_content, metadata={\"page\": \"0\"})\n# Initialize a Chroma instance with the original document\nnew_db = Chroma.from_documents(\n collection_name=\"test_collection\",\n documents=[original_doc],\n embedding=OpenAIEmbeddings(), # using the same embeddings as before\n ids=[document_id],\n)", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/chroma.html"}
+{"id": "b308e69412e9-4", "text": "ids=[document_id],\n)\nAt this point, we have a new Chroma instance with a single document \u201cThis is an initial document content\u201d with id \u201cdoc1\u201d. Now, let\u2019s update the content of the document.\n# Updated document content\nupdated_content = \"This is the updated document content\"\n# Create a new Document instance with the updated content\nupdated_doc = Document(page_content=updated_content, metadata={\"page\": \"1\"})\n# Update the document in the Chroma instance by passing the document id and the updated document\nnew_db.update_document(document_id=document_id, document=updated_doc)\n# Now, let's retrieve the updated document using similarity search\noutput = new_db.similarity_search(updated_content, k=1)\n# Print the content of the retrieved document\nprint(output[0].page_content, output[0].metadata)\nThis is the updated document content {'page': '1'}\nprevious\nAwaDB\nnext\nClickHouse Vector Search\n Contents\n \nSimilarity search with score\nPersistance\nInitialize PeristedChromaDB\nPersist the Database\nLoad the Database from disk, and create the chain\nRetriever options\nMMR\nUpdating a Document\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/chroma.html"}
+{"id": "772beae91490-0", "text": ".ipynb\n.pdf\nTair\nTair#\nTair is a cloud native in-memory database service developed by Alibaba Cloud.\nIt provides rich data models and enterprise-grade capabilities to support your real-time online scenarios while maintaining full compatibility with open source Redis. Tair also introduces persistent memory-optimized instances that are based on the new non-volatile memory (NVM) storage medium.\nThis notebook shows how to use functionality related to the Tair vector database.\nTo run, you should have a Tair instance up and running.\nfrom langchain.embeddings.fake import FakeEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import Tair\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = FakeEmbeddings(size=128)\nConnect to Tair using the TAIR_URL environment variable\nexport TAIR_URL=\"redis://{username}:{password}@{tair_address}:{tair_port}\"\nor the keyword argument tair_url.\nThen store documents and embeddings into Tair.\ntair_url = \"redis://localhost:6379\"\n# drop first if index already exists\nTair.drop_index(tair_url=tair_url)\nvector_store = Tair.from_documents(\n docs,\n embeddings,\n tair_url=tair_url\n)\nQuery similar documents.\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = vector_store.similarity_search(query)\ndocs[0]", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/tair.html"}
+{"id": "772beae91490-1", "text": "docs = vector_store.similarity_search(query)\ndocs[0]\nDocument(page_content='We\u2019re going after the criminals who stole billions in relief money meant for small businesses and millions of Americans. \\n\\nAnd tonight, I\u2019m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. \\n\\nBy the end of this year, the deficit will be down to less than half what it was before I took office. \\n\\nThe only president ever to cut the deficit by more than one trillion dollars in a single year. \\n\\nLowering your costs also means demanding more competition. \\n\\nI\u2019m a capitalist, but capitalism without competition isn\u2019t capitalism. \\n\\nIt\u2019s exploitation\u2014and it drives up prices. \\n\\nWhen corporations don\u2019t have to compete, their profits go up, your prices go up, and small businesses and family farmers and ranchers go under. \\n\\nWe see it happening with ocean carriers moving goods in and out of America. \\n\\nDuring the pandemic, these foreign-owned companies raised prices by as much as 1,000% and made record profits.', metadata={'source': '../../../state_of_the_union.txt'})\nprevious\nSupabase (Postgres)\nnext\nTigris\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/tair.html"}
+{"id": "422d8487663c-0", "text": ".ipynb\n.pdf\nPinecone\nPinecone#\nPinecone is a vector database with broad functionality.\nThis notebook shows how to use functionality related to the Pinecone vector database.\nTo use Pinecone, you must have an API key.\nHere are the installation instructions.\n!pip install pinecone-client\nimport os\nimport getpass\nPINECONE_API_KEY = getpass.getpass('Pinecone API Key:')\nPINECONE_ENV = getpass.getpass('Pinecone Environment:')\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import Pinecone\nfrom langchain.document_loaders import TextLoader\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\nimport pinecone \n# initialize pinecone\npinecone.init(\n api_key=PINECONE_API_KEY, # find at app.pinecone.io\n environment=PINECONE_ENV # next to api key in console\n)\nindex_name = \"langchain-demo\"\ndocsearch = Pinecone.from_documents(docs, embeddings, index_name=index_name)\n# if you already have an index, you can load it like this\n# docsearch = Pinecone.from_existing_index(index_name, embeddings)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = docsearch.similarity_search(query)", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pinecone.html"}
+{"id": "422d8487663c-1", "text": "docs = docsearch.similarity_search(query)\nprint(docs[0].page_content)\nprevious\nPGVector\nnext\nQdrant\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pinecone.html"}
+{"id": "90990b44916d-0", "text": ".ipynb\n.pdf\nCommented out until further notice\nCommented out until further notice#\nMongoDB Atlas Vector Search\nMongoDB Atlas is a document database managed in the cloud. It also enables Lucene and its vector search feature.\nThis notebook shows how to use the functionality related to the MongoDB Atlas Vector Search feature where you can store your embeddings in MongoDB documents and create a Lucene vector index to perform a KNN search.\nIt uses the knnBeta Operator available in MongoDB Atlas Search. This feature is in early access and available only for evaluation purposes, to validate functionality, and to gather feedback from a small closed group of early access users. It is not recommended for production deployments as we may introduce breaking changes.\nTo use MongoDB Atlas, you must have first deployed a cluster. Free clusters are available.\nHere is the MongoDB Atlas quick start.\n!pip install pymongo\nimport os\nMONGODB_ATLAS_URI = os.environ['MONGODB_ATLAS_URI']\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key. Make sure the environment variable OPENAI_API_KEY is set up before proceeding.\nNow, let\u2019s create a Lucene vector index on your cluster. In the below example, embedding is the name of the field that contains the embedding vector. Please refer to the documentation to get more details on how to define an Atlas Search index.\nYou can name the index langchain_demo and create the index on the namespace lanchain_db.langchain_col. Finally, write the following definition in the JSON editor:\n{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"embedding\": {\n \"dimensions\": 1536,\n \"similarity\": \"cosine\",\n \"type\": \"knnVector\"\n }\n }\n }\n}", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/mongodb_atlas_vector_search.html"}
+{"id": "90990b44916d-1", "text": "}\n }\n }\n}\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import MongoDBAtlasVectorSearch\nfrom langchain.document_loaders import TextLoader\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\nfrom pymongo import MongoClient\n# initialize MongoDB python client\nclient = MongoClient(MONGODB_ATLAS_CONNECTION_STRING)\ndb_name = \"lanchain_db\"\ncollection_name = \"langchain_col\"\ncollection = client[db_name][collection_name]\nindex_name = \"langchain_demo\"\n# insert the documents in MongoDB Atlas with their embedding\ndocsearch = MongoDBAtlasVectorSearch.from_documents(\n docs,\n embeddings,\n collection=collection,\n index_name=index_name\n)\n# perform a similarity search between the embedding of the query and the embeddings of the documents\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = docsearch.similarity_search(query)\nprint(docs[0].page_content)\nYou can reuse vector index you created before, make sure environment variable OPENAI_API_KEY is set up, then create another file.\nfrom pymongo import MongoClient\nfrom langchain.vectorstores import MongoDBAtlasVectorSearch\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nimport os\nMONGODB_ATLAS_URI = os.environ['MONGODB_ATLAS_URI']\n# initialize MongoDB python client\nclient = MongoClient(MONGODB_ATLAS_URI)\ndb_name = \"langchain_db\"\ncollection_name = \"langchain_col\"", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/mongodb_atlas_vector_search.html"}
+{"id": "90990b44916d-2", "text": "db_name = \"langchain_db\"\ncollection_name = \"langchain_col\"\ncollection = client[db_name][collection_name]\nindex_name = \"langchain_index\"\n# initialize vector store\nvectorStore = MongoDBAtlasVectorSearch(\n collection, OpenAIEmbeddings(), index_name=index_name)\n# perform a similarity search between the embedding of the query and the embeddings of the documents\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = vectorStore.similarity_search(query)\nprint(docs[0].page_content)\nprevious\nMilvus\nnext\nMyScale\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/mongodb_atlas_vector_search.html"}
+{"id": "9ac1abf616a3-0", "text": ".ipynb\n.pdf\nSKLearnVectorStore\n Contents \nBasic usage\nLoad a sample document corpus\nCreate the SKLearnVectorStore, index the document corpus and run a sample query\nSaving and loading a vector store\nClean-up\nSKLearnVectorStore#\nscikit-learn is an open source collection of machine learning algorithms, including some implementations of the k nearest neighbors. SKLearnVectorStore wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format.\nThis notebook shows how to use the SKLearnVectorStore vector database.\n%pip install scikit-learn\n# # if you plan to use bson serialization, install also:\n# %pip install bson\n# # if you plan to use parquet serialization, install also:\n%pip install pandas pyarrow\nTo use OpenAI embeddings, you will need an OpenAI key. You can get one at https://platform.openai.com/account/api-keys or feel free to use any other embeddings.\nimport os\nfrom getpass import getpass\nos.environ['OPENAI_API_KEY'] = getpass('Enter your OpenAI key:')\nBasic usage#\nLoad a sample document corpus#\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import SKLearnVectorStore\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\nCreate the SKLearnVectorStore, index the document corpus and run a sample query#\nimport tempfile\npersist_path = os.path.join(tempfile.gettempdir(), 'union.parquet')", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/sklearn.html"}
+{"id": "9ac1abf616a3-1", "text": "persist_path = os.path.join(tempfile.gettempdir(), 'union.parquet')\nvector_store = SKLearnVectorStore.from_documents(\n documents=docs, \n embedding=embeddings,\n persist_path=persist_path, # persist_path and serializer are optional\n serializer='parquet'\n)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = vector_store.similarity_search(query)\nprint(docs[0].page_content)\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nSaving and loading a vector store#\nvector_store.persist()\nprint('Vector store was persisted to', persist_path)\nVector store was persisted to /var/folders/6r/wc15p6m13nl_nl_n_xfqpc5c0000gp/T/union.parquet\nvector_store2 = SKLearnVectorStore(\n embedding=embeddings,\n persist_path=persist_path,\n serializer='parquet'\n)\nprint('A new instance of vector store was loaded from', persist_path)", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/sklearn.html"}
+{"id": "9ac1abf616a3-2", "text": ")\nprint('A new instance of vector store was loaded from', persist_path)\nA new instance of vector store was loaded from /var/folders/6r/wc15p6m13nl_nl_n_xfqpc5c0000gp/T/union.parquet\ndocs = vector_store2.similarity_search(query)\nprint(docs[0].page_content)\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nClean-up#\nos.remove(persist_path)\nprevious\nSingleStoreDB vector search\nnext\nSupabase (Postgres)\n Contents\n \nBasic usage\nLoad a sample document corpus\nCreate the SKLearnVectorStore, index the document corpus and run a sample query\nSaving and loading a vector store\nClean-up\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/sklearn.html"}
+{"id": "711d468a4096-0", "text": ".ipynb\n.pdf\nDeep Lake\n Contents \nRetrieval Question/Answering\nAttribute based filtering in metadata\nChoosing distance function\nMaximal Marginal relevance\nDelete dataset\nDeep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or in memory\nCreating dataset on AWS S3\nDeep Lake API\nTransfer local dataset to cloud\nDeep Lake#\nDeep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes.\nThis notebook showcases basic functionality related to Deep Lake. While Deep Lake can store embeddings, it is capable of storing any type of data. It is a fully fledged serverless data lake with version control, query engine and streaming dataloader to deep learning frameworks.\nFor more information, please see the Deep Lake documentation or api reference\n!pip install openai deeplake tiktoken\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import DeepLake\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nembeddings = OpenAIEmbeddings()\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-1", "text": "docs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\nCreate a dataset locally at ./deeplake/, then run similiarity search. The Deeplake+LangChain integration uses Deep Lake datasets under the hood, so dataset and vector store are used interchangeably. To create a dataset in your own cloud, or in the Deep Lake storage, adjust the path accordingly.\ndb = DeepLake(dataset_path=\"./my_deeplake/\", embedding_function=embeddings)\ndb.add_documents(docs)\n# or shorter\n# db = DeepLake.from_documents(docs, dataset_path=\"./my_deeplake/\", embedding=embeddings, overwrite=True)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = db.similarity_search(query)\n/home/leo/.local/lib/python3.10/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.3.2) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.\n warnings.warn(\n./my_deeplake/ loaded successfully.\nEvaluating ingest: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1/1 [00:07<00:00\nDataset(path='./my_deeplake/', tensors=['embedding', 'ids', 'metadata', 'text'])\n tensor htype shape dtype compression\n ------- ------- ------- ------- ------- \n embedding generic (42, 1536) float32 None \n ids text (42, 1) str None \n metadata json (42, 1) str None \n text text (42, 1) str None", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-2", "text": "text text (42, 1) str None \nprint(docs[0].page_content)\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nLater, you can reload the dataset without recomputing embeddings\ndb = DeepLake(dataset_path=\"./my_deeplake/\", embedding_function=embeddings, read_only=True)\ndocs = db.similarity_search(query)\n./my_deeplake/ loaded successfully.\nDeep Lake Dataset in ./my_deeplake/ already exists, loading from the storage\nDataset(path='./my_deeplake/', read_only=True, tensors=['embedding', 'ids', 'metadata', 'text'])\n tensor htype shape dtype compression\n ------- ------- ------- ------- ------- \n embedding generic (42, 1536) float32 None \n ids text (42, 1) str None \n metadata json (42, 1) str None \n text text (42, 1) str None", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-3", "text": "text text (42, 1) str None \nDeep Lake, for now, is single writer and multiple reader. Setting read_only=True helps to avoid acquring the writer lock.\nRetrieval Question/Answering#\nfrom langchain.chains import RetrievalQA\nfrom langchain.llms import OpenAIChat\nqa = RetrievalQA.from_chain_type(llm=OpenAIChat(model='gpt-3.5-turbo'), chain_type='stuff', retriever=db.as_retriever())\n/home/leo/.local/lib/python3.10/site-packages/langchain/llms/openai.py:624: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`\n warnings.warn(\nquery = 'What did the president say about Ketanji Brown Jackson'\nqa.run(query)\n'The president nominated Ketanji Brown Jackson to serve on the United States Supreme Court. He described her as a former top litigator in private practice, a former federal public defender, a consensus builder, and from a family of public school educators and police officers. He also mentioned that she has received broad support from various groups since being nominated.'\nAttribute based filtering in metadata#\nimport random\nfor d in docs:\n d.metadata['year'] = random.randint(2012, 2014)\ndb = DeepLake.from_documents(docs, embeddings, dataset_path=\"./my_deeplake/\", overwrite=True)\n./my_deeplake/ loaded successfully.\nEvaluating ingest: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1/1 [00:04<00:00\nDataset(path='./my_deeplake/', tensors=['embedding', 'ids', 'metadata', 'text'])", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-4", "text": "tensor htype shape dtype compression\n ------- ------- ------- ------- ------- \n embedding generic (4, 1536) float32 None \n ids text (4, 1) str None \n metadata json (4, 1) str None \n text text (4, 1) str None \ndb.similarity_search('What did the president say about Ketanji Brown Jackson', filter={'year': 2013})\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4/4 [00:00<00:00, 1080.24it/s]\n[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}),", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-5", "text": "Document(page_content='And for our LGBTQ+ Americans, let\u2019s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \\n\\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \\n\\nWhile it often appears that we never agree, that isn\u2019t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \\n\\nAnd soon, we\u2019ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \\n\\nSo tonight I\u2019m offering a Unity Agenda for the Nation. Four big things we can do together. \\n\\nFirst, beat the opioid epidemic.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013})]\nChoosing distance function#\nDistance function L2 for Euclidean, L1 for Nuclear, Max l-infinity distnace, cos for cosine similarity, dot for dot product\ndb.similarity_search('What did the president say about Ketanji Brown Jackson?', distance_metric='cos')", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-6", "text": "[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}),", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-7", "text": "Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she\u2019s been nominated, she\u2019s received a broad range of support\u2014from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \\n\\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \\n\\nWe can do both. At our border, we\u2019ve installed new technology like cutting-edge scanners to better detect drug smuggling. \\n\\nWe\u2019ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \\n\\nWe\u2019re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \\n\\nWe\u2019re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012}),", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-8", "text": "Document(page_content='And for our LGBTQ+ Americans, let\u2019s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \\n\\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \\n\\nWhile it often appears that we never agree, that isn\u2019t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \\n\\nAnd soon, we\u2019ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \\n\\nSo tonight I\u2019m offering a Unity Agenda for the Nation. Four big things we can do together. \\n\\nFirst, beat the opioid epidemic.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}),", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-9", "text": "Document(page_content='Tonight, I\u2019m announcing a crackdown on these companies overcharging American businesses and consumers. \\n\\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \\n\\nThat ends on my watch. \\n\\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \\n\\nWe\u2019ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \\n\\nLet\u2019s pass the Paycheck Fairness Act and paid leave. \\n\\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \\n\\nLet\u2019s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill\u2014our First Lady who teaches full-time\u2014calls America\u2019s best-kept secret: community colleges.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012})]\nMaximal Marginal relevance#\nUsing maximal marginal relevance\ndb.max_marginal_relevance_search('What did the president say about Ketanji Brown Jackson?')", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-10", "text": "[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}),", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-11", "text": "Document(page_content='Tonight, I\u2019m announcing a crackdown on these companies overcharging American businesses and consumers. \\n\\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \\n\\nThat ends on my watch. \\n\\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \\n\\nWe\u2019ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \\n\\nLet\u2019s pass the Paycheck Fairness Act and paid leave. \\n\\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \\n\\nLet\u2019s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill\u2014our First Lady who teaches full-time\u2014calls America\u2019s best-kept secret: community colleges.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012}),", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-12", "text": "Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she\u2019s been nominated, she\u2019s received a broad range of support\u2014from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \\n\\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \\n\\nWe can do both. At our border, we\u2019ve installed new technology like cutting-edge scanners to better detect drug smuggling. \\n\\nWe\u2019ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \\n\\nWe\u2019re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \\n\\nWe\u2019re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012}),", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-13", "text": "Document(page_content='And for our LGBTQ+ Americans, let\u2019s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \\n\\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \\n\\nWhile it often appears that we never agree, that isn\u2019t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \\n\\nAnd soon, we\u2019ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \\n\\nSo tonight I\u2019m offering a Unity Agenda for the Nation. Four big things we can do together. \\n\\nFirst, beat the opioid epidemic.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013})]\nDelete dataset#\ndb.delete_dataset()\nand if delete fails you can also force delete\nDeepLake.force_delete_by_path(\"./my_deeplake\")\nDeep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or in memory#\nBy default deep lake datasets are stored locally, in case you want to store them in memory, in the Deep Lake Managed DB, or in any object storage, you can provide the corresponding path to the dataset. You can retrieve your user token from app.activeloop.ai\nos.environ['ACTIVELOOP_TOKEN'] = getpass.getpass('Activeloop Token:')\n# Embed and store the texts\nusername = \"\" # your username on app.activeloop.ai", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-14", "text": "username = \"\" # your username on app.activeloop.ai \ndataset_path = f\"hub://{username}/langchain_test\" # could be also ./local/path (much faster locally), s3://bucket/path/to/dataset, gcs://path/to/dataset, etc.\nembedding = OpenAIEmbeddings()\ndb = DeepLake(dataset_path=dataset_path, embedding_function=embeddings, overwrite=True)\ndb.add_documents(docs)\nYour Deep Lake dataset has been successfully created!\nThe dataset is private so make sure you are logged in!\nThis dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test\nhub://davitbun/langchain_test loaded successfully.\nEvaluating ingest: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1/1 [00:14<00:00\n \nDataset(path='hub://davitbun/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text'])\n tensor htype shape dtype compression\n ------- ------- ------- ------- ------- \n embedding generic (4, 1536) float32 None \n ids text (4, 1) str None \n metadata json (4, 1) str None \n text text (4, 1) str None \n['d6d6ccb4-e187-11ed-b66d-41c5f7b85421',\n 'd6d6ccb5-e187-11ed-b66d-41c5f7b85421',\n 'd6d6ccb6-e187-11ed-b66d-41c5f7b85421',", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-15", "text": "'d6d6ccb7-e187-11ed-b66d-41c5f7b85421']\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = db.similarity_search(query)\nprint(docs[0].page_content)\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nCreating dataset on AWS S3#\ndataset_path = f\"s3://BUCKET/langchain_test\" # could be also ./local/path (much faster locally), hub://bucket/path/to/dataset, gcs://path/to/dataset, etc.\nembedding = OpenAIEmbeddings()\ndb = DeepLake.from_documents(docs, dataset_path=dataset_path, embedding=embeddings, overwrite=True, creds = {\n 'aws_access_key_id': os.environ['AWS_ACCESS_KEY_ID'], \n 'aws_secret_access_key': os.environ['AWS_SECRET_ACCESS_KEY'], \n 'aws_session_token': os.environ['AWS_SESSION_TOKEN'], # Optional\n})\ns3://hub-2.0-datasets-n/langchain_test loaded successfully.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-16", "text": "})\ns3://hub-2.0-datasets-n/langchain_test loaded successfully.\nEvaluating ingest: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1/1 [00:10<00:00\n\\\nDataset(path='s3://hub-2.0-datasets-n/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text'])\n tensor htype shape dtype compression\n ------- ------- ------- ------- ------- \n embedding generic (4, 1536) float32 None \n ids text (4, 1) str None \n metadata json (4, 1) str None \n text text (4, 1) str None \n \nDeep Lake API#\nyou can access the Deep Lake dataset at db.ds\n# get structure of the dataset\ndb.ds.summary()\nDataset(path='hub://davitbun/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text'])\n tensor htype shape dtype compression\n ------- ------- ------- ------- ------- \n embedding generic (4, 1536) float32 None \n ids text (4, 1) str None \n metadata json (4, 1) str None \n text text (4, 1) str None \n# get embeddings numpy array\nembeds = db.ds.embedding.numpy()\nTransfer local dataset to cloud#\nCopy already created dataset to the cloud. You can also transfer from cloud to local.\nimport deeplake\nusername = \"davitbun\" # your username on app.activeloop.ai", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-17", "text": "username = \"davitbun\" # your username on app.activeloop.ai \nsource = f\"hub://{username}/langchain_test\" # could be local, s3, gcs, etc.\ndestination = f\"hub://{username}/langchain_test_copy\" # could be local, s3, gcs, etc.\ndeeplake.deepcopy(src=source, dest=destination, overwrite=True)\nCopying dataset: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 56/56 [00:38<00:00\nThis dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy\nYour Deep Lake dataset has been successfully created!\nThe dataset is private so make sure you are logged in!\nDataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])\ndb = DeepLake(dataset_path=destination, embedding_function=embeddings)\ndb.add_documents(docs)\n \nThis dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy\n/\nhub://davitbun/langchain_test_copy loaded successfully.\nDeep Lake Dataset in hub://davitbun/langchain_test_copy already exists, loading from the storage\nDataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])\n tensor htype shape dtype compression\n ------- ------- ------- ------- ------- \n embedding generic (4, 1536) float32 None \n ids text (4, 1) str None \n metadata json (4, 1) str None", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "711d468a4096-18", "text": "metadata json (4, 1) str None \n text text (4, 1) str None \nEvaluating ingest: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1/1 [00:31<00:00\n-\nDataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])\n tensor htype shape dtype compression\n ------- ------- ------- ------- ------- \n embedding generic (8, 1536) float32 None \n ids text (8, 1) str None \n metadata json (8, 1) str None \n text text (8, 1) str None \n \n['ad42f3fe-e188-11ed-b66d-41c5f7b85421',\n 'ad42f3ff-e188-11ed-b66d-41c5f7b85421',\n 'ad42f400-e188-11ed-b66d-41c5f7b85421',\n 'ad42f401-e188-11ed-b66d-41c5f7b85421']\nprevious\nClickHouse Vector Search\nnext\nDocArrayHnswSearch\n Contents\n \nRetrieval Question/Answering\nAttribute based filtering in metadata\nChoosing distance function\nMaximal Marginal relevance\nDelete dataset\nDeep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or in memory\nCreating dataset on AWS S3\nDeep Lake API\nTransfer local dataset to cloud\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"}
+{"id": "1a8b4e3f2da2-0", "text": ".ipynb\n.pdf\nPGVector\n Contents \nSimilarity search with score\nSimilarity Search with Euclidean Distance (Default)\nWorking with vectorstore in PG\nUploading a vectorstore in PG\nRetrieving a vectorstore in PG\nPGVector#\nPGVector is an open-source vector similarity search for Postgres\nIt supports:\nexact and approximate nearest neighbor search\nL2 distance, inner product, and cosine distance\nThis notebook shows how to use the Postgres vector database (PGVector).\nSee the installation instruction.\n# Pip install necessary package\n!pip install pgvector\n!pip install openai\n!pip install psycopg2-binary\n!pip install tiktoken\nRequirement already satisfied: pgvector in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (0.1.8)\nRequirement already satisfied: numpy in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from pgvector) (1.24.3)\nRequirement already satisfied: openai in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (0.27.7)\nRequirement already satisfied: requests>=2.20 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from openai) (2.28.2)\nRequirement already satisfied: tqdm in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from openai) (4.65.0)\nRequirement already satisfied: aiohttp in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from openai) (3.8.4)", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"}
+{"id": "1a8b4e3f2da2-1", "text": "Requirement already satisfied: charset-normalizer<4,>=2 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (3.1.0)\nRequirement already satisfied: idna<4,>=2.5 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (3.4)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (1.26.15)\nRequirement already satisfied: certifi>=2017.4.17 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (2023.5.7)\nRequirement already satisfied: attrs>=17.3.0 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (23.1.0)\nRequirement already satisfied: multidict<7.0,>=4.5 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (6.0.4)\nRequirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (4.0.2)", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"}
+{"id": "1a8b4e3f2da2-2", "text": "Requirement already satisfied: yarl<2.0,>=1.0 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.9.2)\nRequirement already satisfied: frozenlist>=1.1.1 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.3.3)\nRequirement already satisfied: aiosignal>=1.1.2 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.3.1)\nRequirement already satisfied: psycopg2-binary in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (2.9.6)\nRequirement already satisfied: tiktoken in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (0.4.0)\nRequirement already satisfied: regex>=2022.1.18 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from tiktoken) (2023.5.5)\nRequirement already satisfied: requests>=2.26.0 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from tiktoken) (2.28.2)\nRequirement already satisfied: charset-normalizer<4,>=2 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.26.0->tiktoken) (3.1.0)", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"}
+{"id": "1a8b4e3f2da2-3", "text": "Requirement already satisfied: idna<4,>=2.5 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.26.0->tiktoken) (3.4)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.26.0->tiktoken) (1.26.15)\nRequirement already satisfied: certifi>=2017.4.17 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.26.0->tiktoken) (2023.5.7)\nWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\nimport os\nimport getpass\nos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\nOpenAI API Key:\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\n## Loading Environment Variables\nfrom typing import List, Tuple\nfrom dotenv import load_dotenv\nload_dotenv()\nFalse\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores.pgvector import PGVector\nfrom langchain.document_loaders import TextLoader\nfrom langchain.docstore.document import Document\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\nembeddings = OpenAIEmbeddings()\n## PGVector needs the connection string to the database.\n## We will load it from the environment variables.\nimport os", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"}
+{"id": "1a8b4e3f2da2-4", "text": "## We will load it from the environment variables.\nimport os\nCONNECTION_STRING = PGVector.connection_string_from_db_params(\n driver=os.environ.get(\"PGVECTOR_DRIVER\", \"psycopg2\"),\n host=os.environ.get(\"PGVECTOR_HOST\", \"localhost\"),\n port=int(os.environ.get(\"PGVECTOR_PORT\", \"5432\")),\n database=os.environ.get(\"PGVECTOR_DATABASE\", \"postgres\"),\n user=os.environ.get(\"PGVECTOR_USER\", \"postgres\"),\n password=os.environ.get(\"PGVECTOR_PASSWORD\", \"postgres\"),\n)\n## Example\n# postgresql+psycopg2://username:password@localhost:5432/database_name\n# ## PGVector needs the connection string to the database.\n# ## We will load it from the environment variables.\n# import os\n# CONNECTION_STRING = PGVector.connection_string_from_db_params(\n# driver=os.environ.get(\"PGVECTOR_DRIVER\", \"psycopg2\"),\n# host=os.environ.get(\"PGVECTOR_HOST\", \"localhost\"),\n# port=int(os.environ.get(\"PGVECTOR_PORT\", \"5432\")),\n# database=os.environ.get(\"PGVECTOR_DATABASE\", \"rd-embeddings\"),\n# user=os.environ.get(\"PGVECTOR_USER\", \"admin\"),\n# password=os.environ.get(\"PGVECTOR_PASSWORD\", \"password\"),\n# )\n# ## Example\n# # postgresql+psycopg2://username:password@localhost:5432/database_name\nSimilarity search with score#\nSimilarity Search with Euclidean Distance (Default)#\n# The PGVector Module will try to create a table with the name of the collection. So, make sure that the collection name is unique and the user has the \n# permission to create a table.\ndb = PGVector.from_documents(\n embedding=embeddings,\n documents=docs,\n collection_name=\"state_of_the_union\",", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"}
+{"id": "1a8b4e3f2da2-5", "text": "documents=docs,\n collection_name=\"state_of_the_union\",\n connection_string=CONNECTION_STRING,\n)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs_with_score: List[Tuple[Document, float]] = db.similarity_search_with_score(query)\nfor doc, score in docs_with_score:\n print(\"-\" * 80)\n print(\"Score: \", score)\n print(doc.page_content)\n print(\"-\" * 80)\n--------------------------------------------------------------------------------\nScore: 0.6076804864602984\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\nScore: 0.6076804864602984\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"}
+{"id": "1a8b4e3f2da2-6", "text": "Tonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\nScore: 0.659062774389974\nA former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she\u2019s been nominated, she\u2019s received a broad range of support\u2014from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \nWe can do both. At our border, we\u2019ve installed new technology like cutting-edge scanners to better detect drug smuggling. \nWe\u2019ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \nWe\u2019re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \nWe\u2019re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\nScore: 0.659062774389974\nA former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she\u2019s been nominated, she\u2019s received a broad range of support\u2014from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"}
+{"id": "1a8b4e3f2da2-7", "text": "And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \nWe can do both. At our border, we\u2019ve installed new technology like cutting-edge scanners to better detect drug smuggling. \nWe\u2019ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \nWe\u2019re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \nWe\u2019re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.\n--------------------------------------------------------------------------------\nWorking with vectorstore in PG#\nUploading a vectorstore in PG#\ndata=docs\napi_key=os.environ['OPENAI_API_KEY']\ndb = PGVector.from_documents(\n documents=docs,\n embedding=embeddings,\n collection_name=collection_name,\n connection_string=connection_string,\n distance_strategy=DistanceStrategy.COSINE,\n openai_api_key=api_key,\n pre_delete_collection=False \n)\nRetrieving a vectorstore in PG#\nconnection_string = CONNECTION_STRING \nembedding=embeddings\ncollection_name=\"state_of_the_union\"\nfrom langchain.vectorstores.pgvector import DistanceStrategy\nstore = PGVector(\n connection_string=connection_string, \n embedding_function=embedding, \n collection_name=collection_name,\n distance_strategy=DistanceStrategy.COSINE\n)\nretriever = store.as_retriever()\nprint(retriever)\nvectorstore= search_type='similarity' search_kwargs={}\n# When we have an existing PG VEctor \nDEFAULT_DISTANCE_STRATEGY = DistanceStrategy.EUCLIDEAN\ndb1 = PGVector.from_existing_index(\n embedding=embeddings,", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"}
+{"id": "1a8b4e3f2da2-8", "text": "db1 = PGVector.from_existing_index(\n embedding=embeddings,\n collection_name=\"state_of_the_union\",\n distance_strategy=DEFAULT_DISTANCE_STRATEGY,\n pre_delete_collection = False,\n connection_string=CONNECTION_STRING,\n)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs_with_score: List[Tuple[Document, float]] = db1.similarity_search_with_score(query)\nprint(docs_with_score)", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"}
+{"id": "1a8b4e3f2da2-9", "text": "[(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}), 0.6075870262188066), (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"}
+{"id": "1a8b4e3f2da2-10", "text": "Breyer\u2019s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}), 0.6075870262188066), (Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she\u2019s been nominated, she\u2019s received a broad range of support\u2014from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \\n\\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \\n\\nWe can do both. At our border, we\u2019ve installed new technology like cutting-edge scanners to better detect drug smuggling. \\n\\nWe\u2019ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \\n\\nWe\u2019re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \\n\\nWe\u2019re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt'}), 0.6589478388546668), (Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she\u2019s been nominated, she\u2019s received a broad range of support\u2014from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \\n\\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \\n\\nWe can do both. At our border, we\u2019ve installed new technology like cutting-edge scanners to better detect drug smuggling. \\n\\nWe\u2019ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \\n\\nWe\u2019re putting in place dedicated immigration judges so families fleeing persecution and violence", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"}
+{"id": "1a8b4e3f2da2-11", "text": "\\n\\nWe\u2019re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \\n\\nWe\u2019re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt'}), 0.6589478388546668)]", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"}
+{"id": "1a8b4e3f2da2-12", "text": "for doc, score in docs_with_score:\n print(\"-\" * 80)\n print(\"Score: \", score)\n print(doc.page_content)\n print(\"-\" * 80)\n--------------------------------------------------------------------------------\nScore: 0.6075870262188066\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\nScore: 0.6075870262188066\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"}
+{"id": "1a8b4e3f2da2-13", "text": "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\nScore: 0.6589478388546668\nA former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she\u2019s been nominated, she\u2019s received a broad range of support\u2014from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \nWe can do both. At our border, we\u2019ve installed new technology like cutting-edge scanners to better detect drug smuggling. \nWe\u2019ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \nWe\u2019re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \nWe\u2019re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\nScore: 0.6589478388546668\nA former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she\u2019s been nominated, she\u2019s received a broad range of support\u2014from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \nWe can do both. At our border, we\u2019ve installed new technology like cutting-edge scanners to better detect drug smuggling. \nWe\u2019ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"}
+{"id": "1a8b4e3f2da2-14", "text": "We\u2019ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \nWe\u2019re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \nWe\u2019re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.\n--------------------------------------------------------------------------------\nprevious\nOpenSearch\nnext\nPinecone\n Contents\n \nSimilarity search with score\nSimilarity Search with Euclidean Distance (Default)\nWorking with vectorstore in PG\nUploading a vectorstore in PG\nRetrieving a vectorstore in PG\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"}
+{"id": "950bb5a9938a-0", "text": ".ipynb\n.pdf\nAwaDB\n Contents \nSimilarity search with score\nRestore the table created and added data before\nAwaDB#\nAwaDB is an AI Native database for the search and storage of embedding vectors used by LLM Applications.\nThis notebook shows how to use functionality related to the AwaDB.\n!pip install awadb\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import AwaDB\nfrom langchain.document_loaders import TextLoader\nloader = TextLoader('../../../state_of_the_union.txt')\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size= 100, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\ndb = AwaDB.from_documents(docs)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = db.similarity_search(query)\nprint(docs[0].page_content)\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\nSimilarity search with score#\nThe returned distance score is between 0-1. 0 is dissimilar, 1 is the most similar\ndocs = db.similarity_search_with_score(query)\nprint(docs[0])\n(Document(page_content=\u2019And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\u2019, metadata={\u2018source\u2019: \u2018../../../state_of_the_union.txt\u2019}), 0.561813814013747)\nRestore the table created and added data before#\nAwaDB automatically persists added document data", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/awadb.html"}
+{"id": "950bb5a9938a-1", "text": "Restore the table created and added data before#\nAwaDB automatically persists added document data\nIf you can restore the table you created and added before, you can just do this as below:\nawadb_client = awadb.Client()\nret = awadb_client.Load('langchain_awadb')\nif ret : print('awadb load table success')\nelse:\n print('awadb load table failed')\nawadb load table success\nprevious\nAtlas\nnext\nChroma\n Contents\n \nSimilarity search with score\nRestore the table created and added data before\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/awadb.html"}
+{"id": "2858e3ae8241-0", "text": ".ipynb\n.pdf\nAnnoy\n Contents \nCreate VectorStore from texts\nCreate VectorStore from docs\nCreate VectorStore via existing embeddings\nSearch via embeddings\nSearch via docstore id\nSave and load\nConstruct from scratch\nAnnoy#\nAnnoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data.\nThis notebook shows how to use functionality related to the Annoy vector database.\nNote\nNOTE: Annoy is read-only - once the index is built you cannot add any more emebddings!\nIf you want to progressively add new entries to your VectorStore then better choose an alternative!\n#!pip install annoy\nCreate VectorStore from texts#\nfrom langchain.embeddings import HuggingFaceEmbeddings\nfrom langchain.vectorstores import Annoy\nembeddings_func = HuggingFaceEmbeddings()\ntexts = [\"pizza is great\", \"I love salad\", \"my car\", \"a dog\"]\n# default metric is angular\nvector_store = Annoy.from_texts(texts, embeddings_func)\n# allows for custom annoy parameters, defaults are n_trees=100, n_jobs=-1, metric=\"angular\"\nvector_store_v2 = Annoy.from_texts(\n texts, embeddings_func, metric=\"dot\", n_trees=100, n_jobs=1\n)\nvector_store.similarity_search(\"food\", k=3)\n[Document(page_content='pizza is great', metadata={}),\n Document(page_content='I love salad', metadata={}),\n Document(page_content='my car', metadata={})]\n# the score is a distance metric, so lower is better", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html"}
+{"id": "2858e3ae8241-1", "text": "# the score is a distance metric, so lower is better\nvector_store.similarity_search_with_score(\"food\", k=3)\n[(Document(page_content='pizza is great', metadata={}), 1.0944390296936035),\n (Document(page_content='I love salad', metadata={}), 1.1273186206817627),\n (Document(page_content='my car', metadata={}), 1.1580758094787598)]\nCreate VectorStore from docs#\nfrom langchain.document_loaders import TextLoader\nfrom langchain.text_splitter import CharacterTextSplitter\nloader = TextLoader(\"../../../state_of_the_union.txt\")\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\ndocs[:5]", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html"}
+{"id": "2858e3ae8241-2", "text": "docs = text_splitter.split_documents(documents)\ndocs[:5]\n[Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\nWith a duty to one another to the American people to the Constitution. \\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \\n\\nSix days ago, Russia\u2019s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \\n\\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \\n\\nHe met the Ukrainian people. \\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.', metadata={'source': '../../../state_of_the_union.txt'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html"}
+{"id": "2858e3ae8241-3", "text": "Document(page_content='Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \\n\\nIn this struggle as President Zelenskyy said in his speech to the European Parliament \u201cLight will win over darkness.\u201d The Ukrainian Ambassador to the United States is here tonight. \\n\\nLet each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \\n\\nPlease rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \\n\\nThroughout our history we\u2019ve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. \\n\\nThey keep moving. \\n\\nAnd the costs and the threats to America and the world keep rising. \\n\\nThat\u2019s why the NATO Alliance was created to secure peace and stability in Europe after World War 2. \\n\\nThe United States is a member along with 29 other nations. \\n\\nIt matters. American diplomacy matters. American resolve matters.', metadata={'source': '../../../state_of_the_union.txt'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html"}
+{"id": "2858e3ae8241-4", "text": "Document(page_content='Putin\u2019s latest attack on Ukraine was premeditated and unprovoked. \\n\\nHe rejected repeated efforts at diplomacy. \\n\\nHe thought the West and NATO wouldn\u2019t respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \\n\\nWe prepared extensively and carefully. \\n\\nWe spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin. \\n\\nI spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression. \\n\\nWe countered Russia\u2019s lies with truth. \\n\\nAnd now that he has acted the free world is holding him accountable. \\n\\nAlong with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.', metadata={'source': '../../../state_of_the_union.txt'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html"}
+{"id": "2858e3ae8241-5", "text": "Document(page_content='We are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \\n\\nTogether with our allies \u2013we are right now enforcing powerful economic sanctions. \\n\\nWe are cutting off Russia\u2019s largest banks from the international financial system. \\n\\nPreventing Russia\u2019s central bank from defending the Russian Ruble making Putin\u2019s $630 Billion \u201cwar fund\u201d worthless. \\n\\nWe are choking off Russia\u2019s access to technology that will sap its economic strength and weaken its military for years to come. \\n\\nTonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \\n\\nThe U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \\n\\nWe are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.', metadata={'source': '../../../state_of_the_union.txt'}),", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html"}
+{"id": "2858e3ae8241-6", "text": "Document(page_content='And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights \u2013 further isolating Russia \u2013 and adding an additional squeeze \u2013on their economy. The Ruble has lost 30% of its value. \\n\\nThe Russian stock market has lost 40% of its value and trading remains suspended. Russia\u2019s economy is reeling and Putin alone is to blame. \\n\\nTogether with our allies we are providing support to the Ukrainians in their fight for freedom. Military assistance. Economic assistance. Humanitarian assistance. \\n\\nWe are giving more than $1 Billion in direct assistance to Ukraine. \\n\\nAnd we will continue to aid the Ukrainian people as they defend their country and to help ease their suffering. \\n\\nLet me be clear, our forces are not engaged and will not engage in conflict with Russian forces in Ukraine. \\n\\nOur forces are not going to Europe to fight in Ukraine, but to defend our NATO Allies \u2013 in the event that Putin decides to keep moving west.', metadata={'source': '../../../state_of_the_union.txt'})]\nvector_store_from_docs = Annoy.from_documents(docs, embeddings_func)\nquery = \"What did the president say about Ketanji Brown Jackson\"\ndocs = vector_store_from_docs.similarity_search(query)\nprint(docs[0].page_content[:100])\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Ac\nCreate VectorStore via existing embeddings#\nembs = embeddings_func.embed_documents(texts)\ndata = list(zip(texts, embs))\nvector_store_from_embeddings = Annoy.from_embeddings(data, embeddings_func)\nvector_store_from_embeddings.similarity_search_with_score(\"food\", k=3)\n[(Document(page_content='pizza is great', metadata={}), 1.0944390296936035),", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html"}
+{"id": "2858e3ae8241-7", "text": "(Document(page_content='I love salad', metadata={}), 1.1273186206817627),\n (Document(page_content='my car', metadata={}), 1.1580758094787598)]\nSearch via embeddings#\nmotorbike_emb = embeddings_func.embed_query(\"motorbike\")\nvector_store.similarity_search_by_vector(motorbike_emb, k=3)\n[Document(page_content='my car', metadata={}),\n Document(page_content='a dog', metadata={}),\n Document(page_content='pizza is great', metadata={})]\nvector_store.similarity_search_with_score_by_vector(motorbike_emb, k=3)\n[(Document(page_content='my car', metadata={}), 1.0870471000671387),\n (Document(page_content='a dog', metadata={}), 1.2095637321472168),\n (Document(page_content='pizza is great', metadata={}), 1.3254905939102173)]\nSearch via docstore id#\nvector_store.index_to_docstore_id\n{0: '2d1498a8-a37c-4798-acb9-0016504ed798',\n 1: '2d30aecc-88e0-4469-9d51-0ef7e9858e6d',\n 2: '927f1120-985b-4691-b577-ad5cb42e011c',\n 3: '3056ddcf-a62f-48c8-bd98-b9e57a3dfcae'}\nsome_docstore_id = 0 # texts[0]\nvector_store.docstore._dict[vector_store.index_to_docstore_id[some_docstore_id]]\nDocument(page_content='pizza is great', metadata={})\n# same document has distance 0", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html"}
+{"id": "2858e3ae8241-8", "text": "Document(page_content='pizza is great', metadata={})\n# same document has distance 0\nvector_store.similarity_search_with_score_by_index(some_docstore_id, k=3)\n[(Document(page_content='pizza is great', metadata={}), 0.0),\n (Document(page_content='I love salad', metadata={}), 1.0734446048736572),\n (Document(page_content='my car', metadata={}), 1.2895267009735107)]\nSave and load#\nvector_store.save_local(\"my_annoy_index_and_docstore\")\nsaving config\nloaded_vector_store = Annoy.load_local(\n \"my_annoy_index_and_docstore\", embeddings=embeddings_func\n)\n# same document has distance 0\nloaded_vector_store.similarity_search_with_score_by_index(some_docstore_id, k=3)\n[(Document(page_content='pizza is great', metadata={}), 0.0),\n (Document(page_content='I love salad', metadata={}), 1.0734446048736572),\n (Document(page_content='my car', metadata={}), 1.2895267009735107)]\nConstruct from scratch#\nimport uuid\nfrom annoy import AnnoyIndex\nfrom langchain.docstore.document import Document\nfrom langchain.docstore.in_memory import InMemoryDocstore\nmetadatas = [{\"x\": \"food\"}, {\"x\": \"food\"}, {\"x\": \"stuff\"}, {\"x\": \"animal\"}]\n# embeddings\nembeddings = embeddings_func.embed_documents(texts)\n# embedding dim\nf = len(embeddings[0])\n# index\nmetric = \"angular\"\nindex = AnnoyIndex(f, metric=metric)\nfor i, emb in enumerate(embeddings):\n index.add_item(i, emb)\nindex.build(10)\n# docstore\ndocuments = []", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html"}
+{"id": "2858e3ae8241-9", "text": "index.build(10)\n# docstore\ndocuments = []\nfor i, text in enumerate(texts):\n metadata = metadatas[i] if metadatas else {}\n documents.append(Document(page_content=text, metadata=metadata))\nindex_to_docstore_id = {i: str(uuid.uuid4()) for i in range(len(documents))}\ndocstore = InMemoryDocstore(\n {index_to_docstore_id[i]: doc for i, doc in enumerate(documents)}\n)\ndb_manually = Annoy(\n embeddings_func.embed_query, index, metric, docstore, index_to_docstore_id\n)\ndb_manually.similarity_search_with_score(\"eating!\", k=3)\n[(Document(page_content='pizza is great', metadata={'x': 'food'}),\n 1.1314140558242798),\n (Document(page_content='I love salad', metadata={'x': 'food'}),\n 1.1668788194656372),\n (Document(page_content='my car', metadata={'x': 'stuff'}), 1.226445198059082)]\nprevious\nAnalyticDB\nnext\nAtlas\n Contents\n \nCreate VectorStore from texts\nCreate VectorStore from docs\nCreate VectorStore via existing embeddings\nSearch via embeddings\nSearch via docstore id\nSave and load\nConstruct from scratch\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html"}
+{"id": "6dd313444ec9-0", "text": ".ipynb\n.pdf\nChat Prompt Templates\n Contents \nFormat output\nDifferent types of MessagePromptTemplate\nChat Prompt Templates#\nChat Models take a list of chat messages as input - this list commonly referred to as a prompt.\nThese chat messages differ from raw string (which you would pass into a LLM model) in that every message is associated with a role.\nFor example, in OpenAI Chat Completion API, a chat message can be associated with the AI, human or system role. The model is supposed to follow instruction from system chat message more closely.\nLangChain provides several prompt templates to make constructing and working with prompts easily. You are encouraged to use these chat related prompt templates instead of PromptTemplate when querying chat models to fully exploit the potential of underlying chat model.\nfrom langchain.prompts import (\n ChatPromptTemplate,\n PromptTemplate,\n SystemMessagePromptTemplate,\n AIMessagePromptTemplate,\n HumanMessagePromptTemplate,\n)\nfrom langchain.schema import (\n AIMessage,\n HumanMessage,\n SystemMessage\n)\nTo create a message template associated with a role, you use MessagePromptTemplate.\nFor convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:\ntemplate=\"You are a helpful assistant that translates {input_language} to {output_language}.\"\nsystem_message_prompt = SystemMessagePromptTemplate.from_template(template)\nhuman_template=\"{text}\"\nhuman_message_prompt = HumanMessagePromptTemplate.from_template(human_template)\nIf you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate outside and then pass it in, eg:\nprompt=PromptTemplate(\n template=\"You are a helpful assistant that translates {input_language} to {output_language}.\",\n input_variables=[\"input_language\", \"output_language\"],\n)", "source": "https://python.langchain.com/en/latest/modules/prompts/chat_prompt_template.html"}
+{"id": "6dd313444ec9-1", "text": "input_variables=[\"input_language\", \"output_language\"],\n)\nsystem_message_prompt_2 = SystemMessagePromptTemplate(prompt=prompt)\nassert system_message_prompt == system_message_prompt_2\nAfter that, you can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate\u2019s format_prompt \u2013 this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.\nchat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])\n# get a chat completion from the formatted messages\nchat_prompt.format_prompt(input_language=\"English\", output_language=\"French\", text=\"I love programming.\").to_messages()\n[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}),\n HumanMessage(content='I love programming.', additional_kwargs={})]\nFormat output#\nThe output of the format method is available as string, list of messages and ChatPromptValue\nAs string:\noutput = chat_prompt.format(input_language=\"English\", output_language=\"French\", text=\"I love programming.\")\noutput\n'System: You are a helpful assistant that translates English to French.\\nHuman: I love programming.'\n# or alternatively \noutput_2 = chat_prompt.format_prompt(input_language=\"English\", output_language=\"French\", text=\"I love programming.\").to_string()\nassert output == output_2\nAs ChatPromptValue\nchat_prompt.format_prompt(input_language=\"English\", output_language=\"French\", text=\"I love programming.\")\nChatPromptValue(messages=[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}), HumanMessage(content='I love programming.', additional_kwargs={})])\nAs list of Message objects\nchat_prompt.format_prompt(input_language=\"English\", output_language=\"French\", text=\"I love programming.\").to_messages()", "source": "https://python.langchain.com/en/latest/modules/prompts/chat_prompt_template.html"}
+{"id": "6dd313444ec9-2", "text": "[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}),\n HumanMessage(content='I love programming.', additional_kwargs={})]\nDifferent types of MessagePromptTemplate#\nLangChain provides different types of MessagePromptTemplate. The most commonly used are AIMessagePromptTemplate, SystemMessagePromptTemplate and HumanMessagePromptTemplate, which create an AI message, system message and human message respectively.\nHowever, in cases where the chat model supports taking chat message with arbitrary role, you can use ChatMessagePromptTemplate, which allows user to specify the role name.\nfrom langchain.prompts import ChatMessagePromptTemplate\nprompt = \"May the {subject} be with you\"\nchat_message_prompt = ChatMessagePromptTemplate.from_template(role=\"Jedi\", template=prompt)\nchat_message_prompt.format(subject=\"force\")\nChatMessage(content='May the force be with you', additional_kwargs={}, role='Jedi')\nLangChain also provides MessagesPlaceholder, which gives you full control of what messages to be rendered during formatting. This can be useful when you are uncertain of what role you should be using for your message prompt templates or when you wish to insert a list of messages during formatting.\nfrom langchain.prompts import MessagesPlaceholder\nhuman_prompt = \"Summarize our conversation so far in {word_count} words.\"\nhuman_message_template = HumanMessagePromptTemplate.from_template(human_prompt)\nchat_prompt = ChatPromptTemplate.from_messages([MessagesPlaceholder(variable_name=\"conversation\"), human_message_template])\nhuman_message = HumanMessage(content=\"What is the best way to learn programming?\")\nai_message = AIMessage(content=\"\"\"\\\n1. Choose a programming language: Decide on a programming language that you want to learn. \n2. Start with the basics: Familiarize yourself with the basic programming concepts such as variables, data types and control structures.", "source": "https://python.langchain.com/en/latest/modules/prompts/chat_prompt_template.html"}
+{"id": "6dd313444ec9-3", "text": "3. Practice, practice, practice: The best way to learn programming is through hands-on experience\\\n\"\"\")\nchat_prompt.format_prompt(conversation=[human_message, ai_message], word_count=\"10\").to_messages()\n[HumanMessage(content='What is the best way to learn programming?', additional_kwargs={}),\n AIMessage(content='1. Choose a programming language: Decide on a programming language that you want to learn. \\n\\n2. Start with the basics: Familiarize yourself with the basic programming concepts such as variables, data types and control structures.\\n\\n3. Practice, practice, practice: The best way to learn programming is through hands-on experience', additional_kwargs={}),\n HumanMessage(content='Summarize our conversation so far in 10 words.', additional_kwargs={})]\nprevious\nOutput Parsers\nnext\nExample Selectors\n Contents\n \nFormat output\nDifferent types of MessagePromptTemplate\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/chat_prompt_template.html"}
+{"id": "8a504f9d2a07-0", "text": ".rst\n.pdf\nExample Selectors\nExample Selectors#\nNote\nConceptual Guide\nIf you have a large number of examples, you may need to select which ones to include in the prompt. The ExampleSelector is the class responsible for doing so.\nThe base interface is defined as below:\nclass BaseExampleSelector(ABC):\n \"\"\"Interface for selecting examples to include in prompts.\"\"\"\n @abstractmethod\n def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n \"\"\"Select which examples to use based on the inputs.\"\"\"\nThe only method it needs to expose is a select_examples method. This takes in the input variables and then returns a list of examples. It is up to each specific implementation as to how those examples are selected. Let\u2019s take a look at some below.\nSee below for a list of example selectors.\nHow to create a custom example selector\nLengthBased ExampleSelector\nMaximal Marginal Relevance ExampleSelector\nNGram Overlap ExampleSelector\nSimilarity ExampleSelector\nprevious\nChat Prompt Templates\nnext\nHow to create a custom example selector\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/example_selectors.html"}
+{"id": "7ac1a1514c64-0", "text": ".ipynb\n.pdf\nGetting Started\n Contents \nPromptTemplates\nto_string\nto_messages\nGetting Started#\nThis section contains everything related to prompts. A prompt is the value passed into the Language Model. This value can either be a string (for LLMs) or a list of messages (for Chat Models).\nThe data types of these prompts are rather simple, but their construction is anything but. Value props of LangChain here include:\nA standard interface for string prompts and message prompts\nA standard (to get started) interface for string prompt templates and message prompt templates\nExample Selectors: methods for inserting examples into the prompt for the language model to follow\nOutputParsers: methods for inserting instructions into the prompt as the format in which the language model should output information, as well as methods for then parsing that string output into a format.\nWe have in depth documentation for specific types of string prompts, specific types of chat prompts, example selectors, and output parsers.\nHere, we cover a quick-start for a standard interface for getting started with simple prompts.\nPromptTemplates#\nPromptTemplates are responsible for constructing a prompt value. These PromptTemplates can do things like formatting, example selection, and more. At a high level, these are basically objects that expose a format_prompt method for constructing a prompt. Under the hood, ANYTHING can happen.\nfrom langchain.prompts import PromptTemplate, ChatPromptTemplate\nstring_prompt = PromptTemplate.from_template(\"tell me a joke about {subject}\")\nchat_prompt = ChatPromptTemplate.from_template(\"tell me a joke about {subject}\")\nstring_prompt_value = string_prompt.format_prompt(subject=\"soccer\")\nchat_prompt_value = chat_prompt.format_prompt(subject=\"soccer\")\nto_string#\nThis is what is called when passing to an LLM (which expects raw text)\nstring_prompt_value.to_string()\n'tell me a joke about soccer'", "source": "https://python.langchain.com/en/latest/modules/prompts/getting_started.html"}
+{"id": "7ac1a1514c64-1", "text": "string_prompt_value.to_string()\n'tell me a joke about soccer'\nchat_prompt_value.to_string()\n'Human: tell me a joke about soccer'\nto_messages#\nThis is what is called when passing to ChatModel (which expects a list of messages)\nstring_prompt_value.to_messages()\n[HumanMessage(content='tell me a joke about soccer', additional_kwargs={}, example=False)]\nchat_prompt_value.to_messages()\n[HumanMessage(content='tell me a joke about soccer', additional_kwargs={}, example=False)]\nprevious\nPrompts\nnext\nPrompt Templates\n Contents\n \nPromptTemplates\nto_string\nto_messages\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/getting_started.html"}
+{"id": "759c58d1b6a4-0", "text": ".rst\n.pdf\nOutput Parsers\nOutput Parsers#\nNote\nConceptual Guide\nLanguage models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.\nOutput parsers are classes that help structure language model responses. There are two main methods an output parser must implement:\nget_format_instructions() -> str: A method which returns a string containing instructions for how the output of a language model should be formatted.\nparse(str) -> Any: A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.\nAnd then one optional one:\nparse_with_prompt(str) -> Any: A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.\nTo start, we recommend familiarizing yourself with the Getting Started section\nOutput Parsers\nAfter that, we provide deep dives on all the different types of output parsers.\nCommaSeparatedListOutputParser\nDatetime\nEnum Output Parser\nOutputFixingParser\nPydanticOutputParser\nRetryOutputParser\nStructured Output Parser\nprevious\nSimilarity ExampleSelector\nnext\nOutput Parsers\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/output_parsers.html"}
+{"id": "a0ffbad3b183-0", "text": ".rst\n.pdf\nPrompt Templates\nPrompt Templates#\nNote\nConceptual Guide\nLanguage models take text as input - that text is commonly referred to as a prompt.\nTypically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input.\nLangChain provides several classes and functions to make constructing and working with prompts easy.\nThe following sections of documentation are provided:\nGetting Started: An overview of all the functionality LangChain provides for working with and constructing prompts.\nHow-To Guides: A collection of how-to guides. These highlight how to accomplish various objectives with our prompt class.\nReference: API reference documentation for all prompt classes.\nprevious\nGetting Started\nnext\nGetting Started\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates.html"}
+{"id": "e49b3b610a7a-0", "text": ".ipynb\n.pdf\nOutput Parsers\nOutput Parsers#\nLanguage models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.\nOutput parsers are classes that help structure language model responses. There are two main methods an output parser must implement:\nget_format_instructions() -> str: A method which returns a string containing instructions for how the output of a language model should be formatted.\nparse(str) -> Any: A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.\nAnd then one optional one:\nparse_with_prompt(str, PromptValue) -> Any: A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.\nBelow we go over the main type of output parser, the PydanticOutputParser. See the examples folder for other options.\nfrom langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate\nfrom langchain.llms import OpenAI\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.output_parsers import PydanticOutputParser\nfrom pydantic import BaseModel, Field, validator\nfrom typing import List\nmodel_name = 'text-davinci-003'\ntemperature = 0.0\nmodel = OpenAI(model_name=model_name, temperature=temperature)\n# Define your desired data structure.\nclass Joke(BaseModel):\n setup: str = Field(description=\"question to set up a joke\")\n punchline: str = Field(description=\"answer to resolve the joke\")", "source": "https://python.langchain.com/en/latest/modules/prompts/output_parsers/getting_started.html"}
+{"id": "e49b3b610a7a-1", "text": "punchline: str = Field(description=\"answer to resolve the joke\")\n \n # You can add custom validation logic easily with Pydantic.\n @validator('setup')\n def question_ends_with_question_mark(cls, field):\n if field[-1] != '?':\n raise ValueError(\"Badly formed question!\")\n return field\n# Set up a parser + inject instructions into the prompt template.\nparser = PydanticOutputParser(pydantic_object=Joke)\nprompt = PromptTemplate(\n template=\"Answer the user query.\\n{format_instructions}\\n{query}\\n\",\n input_variables=[\"query\"],\n partial_variables={\"format_instructions\": parser.get_format_instructions()}\n)\n# And a query intented to prompt a language model to populate the data structure.\njoke_query = \"Tell me a joke.\"\n_input = prompt.format_prompt(query=joke_query)\noutput = model(_input.to_string())\nparser.parse(output)\nJoke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')\nprevious\nOutput Parsers\nnext\nCommaSeparatedListOutputParser\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/output_parsers/getting_started.html"}
+{"id": "4d3b4a7b6afb-0", "text": ".ipynb\n.pdf\nRetryOutputParser\nRetryOutputParser#\nWhile in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it can\u2019t. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example.\nfrom langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate\nfrom langchain.llms import OpenAI\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.output_parsers import PydanticOutputParser, OutputFixingParser, RetryOutputParser\nfrom pydantic import BaseModel, Field, validator\nfrom typing import List\ntemplate = \"\"\"Based on the user question, provide an Action and Action Input for what step should be taken.\n{format_instructions}\nQuestion: {query}\nResponse:\"\"\"\nclass Action(BaseModel):\n action: str = Field(description=\"action to take\")\n action_input: str = Field(description=\"input to the action\")\n \nparser = PydanticOutputParser(pydantic_object=Action)\nprompt = PromptTemplate(\n template=\"Answer the user query.\\n{format_instructions}\\n{query}\\n\",\n input_variables=[\"query\"],\n partial_variables={\"format_instructions\": parser.get_format_instructions()}\n)\nprompt_value = prompt.format_prompt(query=\"who is leo di caprios gf?\")\nbad_response = '{\"action\": \"search\"}'\nIf we try to parse this response as is, we will get an error\nparser.parse(bad_response)\n---------------------------------------------------------------------------\nValidationError Traceback (most recent call last)\nFile ~/workplace/langchain/langchain/output_parsers/pydantic.py:24, in PydanticOutputParser.parse(self, text)\n 23 json_object = json.loads(json_str)", "source": "https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/retry.html"}
+{"id": "4d3b4a7b6afb-1", "text": "23 json_object = json.loads(json_str)\n---> 24 return self.pydantic_object.parse_obj(json_object)\n 26 except (json.JSONDecodeError, ValidationError) as e:\nFile ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:527, in pydantic.main.BaseModel.parse_obj()\nFile ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:342, in pydantic.main.BaseModel.__init__()\nValidationError: 1 validation error for Action\naction_input\n field required (type=value_error.missing)\nDuring handling of the above exception, another exception occurred:\nOutputParserException Traceback (most recent call last)\nCell In[6], line 1\n----> 1 parser.parse(bad_response)\nFile ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text)\n 27 name = self.pydantic_object.__name__\n 28 msg = f\"Failed to parse {name} from completion {text}. Got: {e}\"\n---> 29 raise OutputParserException(msg)\nOutputParserException: Failed to parse Action from completion {\"action\": \"search\"}. Got: 1 validation error for Action\naction_input\n field required (type=value_error.missing)\nIf we try to use the OutputFixingParser to fix this error, it will be confused - namely, it doesn\u2019t know what to actually put for action input.\nfix_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())\nfix_parser.parse(bad_response)\nAction(action='search', action_input='')", "source": "https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/retry.html"}
+{"id": "4d3b4a7b6afb-2", "text": "fix_parser.parse(bad_response)\nAction(action='search', action_input='')\nInstead, we can use the RetryOutputParser, which passes in the prompt (as well as the original output) to try again to get a better response.\nfrom langchain.output_parsers import RetryWithErrorOutputParser\nretry_parser = RetryWithErrorOutputParser.from_llm(parser=parser, llm=OpenAI(temperature=0))\nretry_parser.parse_with_prompt(bad_response, prompt_value)\nAction(action='search', action_input='who is leo di caprios gf?')\nprevious\nPydanticOutputParser\nnext\nStructured Output Parser\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/retry.html"}
+{"id": "3b48b7d29560-0", "text": ".ipynb\n.pdf\nPydanticOutputParser\nPydanticOutputParser#\nThis output parser allows users to specify an arbitrary JSON schema and query LLMs for JSON outputs that conform to that schema.\nKeep in mind that large language models are leaky abstractions! You\u2019ll have to use an LLM with sufficient capacity to generate well-formed JSON. In the OpenAI family, DaVinci can do reliably but Curie\u2019s ability already drops off dramatically.\nUse Pydantic to declare your data model. Pydantic\u2019s BaseModel like a Python dataclass, but with actual type checking + coercion.\nfrom langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate\nfrom langchain.llms import OpenAI\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.output_parsers import PydanticOutputParser\nfrom pydantic import BaseModel, Field, validator\nfrom typing import List\nmodel_name = 'text-davinci-003'\ntemperature = 0.0\nmodel = OpenAI(model_name=model_name, temperature=temperature)\n# Define your desired data structure.\nclass Joke(BaseModel):\n setup: str = Field(description=\"question to set up a joke\")\n punchline: str = Field(description=\"answer to resolve the joke\")\n \n # You can add custom validation logic easily with Pydantic.\n @validator('setup')\n def question_ends_with_question_mark(cls, field):\n if field[-1] != '?':\n raise ValueError(\"Badly formed question!\")\n return field\n# And a query intented to prompt a language model to populate the data structure.\njoke_query = \"Tell me a joke.\"\n# Set up a parser + inject instructions into the prompt template.\nparser = PydanticOutputParser(pydantic_object=Joke)\nprompt = PromptTemplate(", "source": "https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/pydantic.html"}
+{"id": "3b48b7d29560-1", "text": "prompt = PromptTemplate(\n template=\"Answer the user query.\\n{format_instructions}\\n{query}\\n\",\n input_variables=[\"query\"],\n partial_variables={\"format_instructions\": parser.get_format_instructions()}\n)\n_input = prompt.format_prompt(query=joke_query)\noutput = model(_input.to_string())\nparser.parse(output)\nJoke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')\n# Here's another example, but with a compound typed field.\nclass Actor(BaseModel):\n name: str = Field(description=\"name of an actor\")\n film_names: List[str] = Field(description=\"list of names of films they starred in\")\n \nactor_query = \"Generate the filmography for a random actor.\"\nparser = PydanticOutputParser(pydantic_object=Actor)\nprompt = PromptTemplate(\n template=\"Answer the user query.\\n{format_instructions}\\n{query}\\n\",\n input_variables=[\"query\"],\n partial_variables={\"format_instructions\": parser.get_format_instructions()}\n)\n_input = prompt.format_prompt(query=actor_query)\noutput = model(_input.to_string())\nparser.parse(output)\nActor(name='Tom Hanks', film_names=['Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Cast Away', 'Toy Story'])\nprevious\nOutputFixingParser\nnext\nRetryOutputParser\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/pydantic.html"}
+{"id": "b31d9f978c78-0", "text": ".ipynb\n.pdf\nDatetime\nDatetime#\nThis OutputParser shows out to parse LLM output into datetime format.\nfrom langchain.prompts import PromptTemplate\nfrom langchain.output_parsers import DatetimeOutputParser\nfrom langchain.chains import LLMChain\nfrom langchain.llms import OpenAI\noutput_parser = DatetimeOutputParser()\ntemplate = \"\"\"Answer the users question:\n{question}\n{format_instructions}\"\"\"\nprompt = PromptTemplate.from_template(template, partial_variables={\"format_instructions\": output_parser.get_format_instructions()})\nchain = LLMChain(prompt=prompt, llm=OpenAI())\noutput = chain.run(\"around when was bitcoin founded?\")\noutput\n'\\n\\n2008-01-03T18:15:05.000000Z'\noutput_parser.parse(output)\ndatetime.datetime(2008, 1, 3, 18, 15, 5)\nprevious\nCommaSeparatedListOutputParser\nnext\nEnum Output Parser\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/datetime.html"}
+{"id": "d5c947ef6cfe-0", "text": ".ipynb\n.pdf\nEnum Output Parser\nEnum Output Parser#\nThis notebook shows how to use an Enum output parser\nfrom langchain.output_parsers.enum import EnumOutputParser\nfrom enum import Enum\nclass Colors(Enum):\n RED = \"red\"\n GREEN = \"green\"\n BLUE = \"blue\"\nparser = EnumOutputParser(enum=Colors)\nparser.parse(\"red\")\n\n# Can handle spaces\nparser.parse(\" green\")\n\n# And new lines\nparser.parse(\"blue\\n\")\n\n# And raises errors when appropriate\nparser.parse(\"yellow\")\n---------------------------------------------------------------------------\nValueError Traceback (most recent call last)\nFile ~/workplace/langchain/langchain/output_parsers/enum.py:25, in EnumOutputParser.parse(self, response)\n 24 try:\n---> 25 return self.enum(response.strip())\n 26 except ValueError:\nFile ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:315, in EnumMeta.__call__(cls, value, names, module, qualname, type, start)\n 314 if names is None: # simple value lookup\n--> 315 return cls.__new__(cls, value)\n 316 # otherwise, functional API: we're creating a new Enum type\nFile ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:611, in Enum.__new__(cls, value)\n 610 if result is None and exc is None:\n--> 611 raise ve_exc\n 612 elif exc is None:\nValueError: 'yellow' is not a valid Colors\nDuring handling of the above exception, another exception occurred:", "source": "https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/enum.html"}
+{"id": "d5c947ef6cfe-1", "text": "During handling of the above exception, another exception occurred:\nOutputParserException Traceback (most recent call last)\nCell In[8], line 2\n 1 # And raises errors when appropriate\n----> 2 parser.parse(\"yellow\")\nFile ~/workplace/langchain/langchain/output_parsers/enum.py:27, in EnumOutputParser.parse(self, response)\n 25 return self.enum(response.strip())\n 26 except ValueError:\n---> 27 raise OutputParserException(\n 28 f\"Response '{response}' is not one of the \"\n 29 f\"expected values: {self._valid_values}\"\n 30 )\nOutputParserException: Response 'yellow' is not one of the expected values: ['red', 'green', 'blue']\nprevious\nDatetime\nnext\nOutputFixingParser\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/enum.html"}
+{"id": "6dc1e2c8f2da-0", "text": ".ipynb\n.pdf\nStructured Output Parser\nStructured Output Parser#\nWhile the Pydantic/JSON parser is more powerful, we initially experimented data structures having text fields only.\nfrom langchain.output_parsers import StructuredOutputParser, ResponseSchema\nfrom langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate\nfrom langchain.llms import OpenAI\nfrom langchain.chat_models import ChatOpenAI\nHere we define the response schema we want to receive.\nresponse_schemas = [\n ResponseSchema(name=\"answer\", description=\"answer to the user's question\"),\n ResponseSchema(name=\"source\", description=\"source used to answer the user's question, should be a website.\")\n]\noutput_parser = StructuredOutputParser.from_response_schemas(response_schemas)\nWe now get a string that contains instructions for how the response should be formatted, and we then insert that into our prompt.\nformat_instructions = output_parser.get_format_instructions()\nprompt = PromptTemplate(\n template=\"answer the users question as best as possible.\\n{format_instructions}\\n{question}\",\n input_variables=[\"question\"],\n partial_variables={\"format_instructions\": format_instructions}\n)\nWe can now use this to format a prompt to send to the language model, and then parse the returned result.\nmodel = OpenAI(temperature=0)\n_input = prompt.format_prompt(question=\"what's the capital of france?\")\noutput = model(_input.to_string())\noutput_parser.parse(output)\n{'answer': 'Paris',\n 'source': 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html'}\nAnd here\u2019s an example of using this in a chat model\nchat_model = ChatOpenAI(temperature=0)\nprompt = ChatPromptTemplate(\n messages=[", "source": "https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/structured.html"}
+{"id": "6dc1e2c8f2da-1", "text": "prompt = ChatPromptTemplate(\n messages=[\n HumanMessagePromptTemplate.from_template(\"answer the users question as best as possible.\\n{format_instructions}\\n{question}\") \n ],\n input_variables=[\"question\"],\n partial_variables={\"format_instructions\": format_instructions}\n)\n_input = prompt.format_prompt(question=\"what's the capital of france?\")\noutput = chat_model(_input.to_messages())\noutput_parser.parse(output.content)\n{'answer': 'Paris', 'source': 'https://en.wikipedia.org/wiki/Paris'}\nprevious\nRetryOutputParser\nnext\nMemory\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/structured.html"}
+{"id": "64a47a5e554a-0", "text": ".ipynb\n.pdf\nOutputFixingParser\nOutputFixingParser#\nThis output parser wraps another output parser and tries to fix any mistakes\nThe Pydantic guardrail simply tries to parse the LLM response. If it does not parse correctly, then it errors.\nBut we can do other things besides throw errors. Specifically, we can pass the misformatted output, along with the formatted instructions, to the model and ask it to fix it.\nFor this example, we\u2019ll use the above OutputParser. Here\u2019s what happens if we pass it a result that does not comply with the schema:\nfrom langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate\nfrom langchain.llms import OpenAI\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.output_parsers import PydanticOutputParser\nfrom pydantic import BaseModel, Field, validator\nfrom typing import List\nclass Actor(BaseModel):\n name: str = Field(description=\"name of an actor\")\n film_names: List[str] = Field(description=\"list of names of films they starred in\")\n \nactor_query = \"Generate the filmography for a random actor.\"\nparser = PydanticOutputParser(pydantic_object=Actor)\nmisformatted = \"{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}\"\nparser.parse(misformatted)\n---------------------------------------------------------------------------\nJSONDecodeError Traceback (most recent call last)\nFile ~/workplace/langchain/langchain/output_parsers/pydantic.py:23, in PydanticOutputParser.parse(self, text)\n 22 json_str = match.group()\n---> 23 json_object = json.loads(json_str)\n 24 return self.pydantic_object.parse_obj(json_object)", "source": "https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html"}
+{"id": "64a47a5e554a-1", "text": "24 return self.pydantic_object.parse_obj(json_object)\nFile ~/.pyenv/versions/3.9.1/lib/python3.9/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)\n 343 if (cls is None and object_hook is None and\n 344 parse_int is None and parse_float is None and\n 345 parse_constant is None and object_pairs_hook is None and not kw):\n--> 346 return _default_decoder.decode(s)\n 347 if cls is None:\nFile ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)\n 333 \"\"\"Return the Python representation of ``s`` (a ``str`` instance\n 334 containing a JSON document).\n 335 \n 336 \"\"\"\n--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n 338 end = _w(s, end).end()\nFile ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:353, in JSONDecoder.raw_decode(self, s, idx)\n 352 try:\n--> 353 obj, end = self.scan_once(s, idx)\n 354 except StopIteration as err:\nJSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)\nDuring handling of the above exception, another exception occurred:\nOutputParserException Traceback (most recent call last)\nCell In[6], line 1\n----> 1 parser.parse(misformatted)", "source": "https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html"}
+{"id": "64a47a5e554a-2", "text": "Cell In[6], line 1\n----> 1 parser.parse(misformatted)\nFile ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text)\n 27 name = self.pydantic_object.__name__\n 28 msg = f\"Failed to parse {name} from completion {text}. Got: {e}\"\n---> 29 raise OutputParserException(msg)\nOutputParserException: Failed to parse Actor from completion {'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}. Got: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)\nNow we can construct and use a OutputFixingParser. This output parser takes as an argument another output parser but also an LLM with which to try to correct any formatting mistakes.\nfrom langchain.output_parsers import OutputFixingParser\nnew_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())\nnew_parser.parse(misformatted)\nActor(name='Tom Hanks', film_names=['Forrest Gump'])\nprevious\nEnum Output Parser\nnext\nPydanticOutputParser\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html"}
+{"id": "1dd9b75c08a2-0", "text": ".ipynb\n.pdf\nCommaSeparatedListOutputParser\nCommaSeparatedListOutputParser#\nHere\u2019s another parser strictly less powerful than Pydantic/JSON parsing.\nfrom langchain.output_parsers import CommaSeparatedListOutputParser\nfrom langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate\nfrom langchain.llms import OpenAI\nfrom langchain.chat_models import ChatOpenAI\noutput_parser = CommaSeparatedListOutputParser()\nformat_instructions = output_parser.get_format_instructions()\nprompt = PromptTemplate(\n template=\"List five {subject}.\\n{format_instructions}\",\n input_variables=[\"subject\"],\n partial_variables={\"format_instructions\": format_instructions}\n)\nmodel = OpenAI(temperature=0)\n_input = prompt.format(subject=\"ice cream flavors\")\noutput = model(_input)\noutput_parser.parse(output)\n['Vanilla',\n 'Chocolate',\n 'Strawberry',\n 'Mint Chocolate Chip',\n 'Cookies and Cream']\nprevious\nOutput Parsers\nnext\nDatetime\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/comma_separated.html"}
+{"id": "b70414e18f65-0", "text": ".rst\n.pdf\nHow-To Guides\nHow-To Guides#\nIf you\u2019re new to the library, you may want to start with the Quickstart.\nThe user guide here shows more advanced workflows and how to use the library in different ways.\nConnecting to a Feature Store\nHow to create a custom prompt template\nHow to create a prompt template that uses few shot examples\nHow to work with partial Prompt Templates\nPrompt Composition\nHow to serialize prompts\nprevious\nGetting Started\nnext\nConnecting to a Feature Store\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/how_to_guides.html"}
+{"id": "c4daaf11019d-0", "text": ".md\n.pdf\nGetting Started\n Contents \nWhat is a prompt template?\nCreate a prompt template\nTemplate formats\nValidate template\nSerialize prompt template\nPass few shot examples to a prompt template\nSelect examples for a prompt template\nGetting Started#\nIn this tutorial, we will learn about:\nwhat a prompt template is, and why it is needed,\nhow to create a prompt template,\nhow to pass few shot examples to a prompt template,\nhow to select examples for a prompt template.\nWhat is a prompt template?#\nA prompt template refers to a reproducible way to generate a prompt. It contains a text string (\u201cthe template\u201d), that can take in a set of parameters from the end user and generate a prompt.\nThe prompt template may contain:\ninstructions to the language model,\na set of few shot examples to help the language model generate a better response,\na question to the language model.\nThe following code snippet contains an example of a prompt template:\nfrom langchain import PromptTemplate\ntemplate = \"\"\"\nI want you to act as a naming consultant for new companies.\nWhat is a good name for a company that makes {product}?\n\"\"\"\nprompt = PromptTemplate(\n input_variables=[\"product\"],\n template=template,\n)\nprompt.format(product=\"colorful socks\")\n# -> I want you to act as a naming consultant for new companies.\n# -> What is a good name for a company that makes colorful socks?\nCreate a prompt template#\nYou can create simple hardcoded prompts using the PromptTemplate class. Prompt templates can take any number of input variables, and can be formatted to generate a prompt.\nfrom langchain import PromptTemplate\n# An example prompt with no input variables\nno_input_prompt = PromptTemplate(input_variables=[], template=\"Tell me a joke.\")\nno_input_prompt.format()\n# -> \"Tell me a joke.\"", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/getting_started.html"}
+{"id": "c4daaf11019d-1", "text": "no_input_prompt.format()\n# -> \"Tell me a joke.\"\n# An example prompt with one input variable\none_input_prompt = PromptTemplate(input_variables=[\"adjective\"], template=\"Tell me a {adjective} joke.\")\none_input_prompt.format(adjective=\"funny\")\n# -> \"Tell me a funny joke.\"\n# An example prompt with multiple input variables\nmultiple_input_prompt = PromptTemplate(\n input_variables=[\"adjective\", \"content\"], \n template=\"Tell me a {adjective} joke about {content}.\"\n)\nmultiple_input_prompt.format(adjective=\"funny\", content=\"chickens\")\n# -> \"Tell me a funny joke about chickens.\"\nIf you do not wish to specify input_variables manually, you can also create a PromptTemplate using from_template class method. langchain will automatically infer the input_variables based on the template passed.\ntemplate = \"Tell me a {adjective} joke about {content}.\"\nprompt_template = PromptTemplate.from_template(template)\nprompt_template.input_variables\n# -> ['adjective', 'content']\nprompt_template.format(adjective=\"funny\", content=\"chickens\")\n# -> Tell me a funny joke about chickens.\nYou can create custom prompt templates that format the prompt in any way you want. For more information, see Custom Prompt Templates.\nTemplate formats#\nBy default, PromptTemplate will treat the provided template as a Python f-string. You can specify other template format through template_format argument:\n# Make sure jinja2 is installed before running this\njinja2_template = \"Tell me a {{ adjective }} joke about {{ content }}\"\nprompt_template = PromptTemplate.from_template(template=jinja2_template, template_format=\"jinja2\")\nprompt_template.format(adjective=\"funny\", content=\"chickens\")\n# -> Tell me a funny joke about chickens.", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/getting_started.html"}
+{"id": "c4daaf11019d-2", "text": "# -> Tell me a funny joke about chickens.\nCurrently, PromptTemplate only supports jinja2 and f-string templating format. If there is any other templating format that you would like to use, feel free to open an issue in the Github page.\nValidate template#\nBy default, PromptTemplate will validate the template string by checking whether the input_variables match the variables defined in template. You can disable this behavior by setting validate_template to False\ntemplate = \"I am learning langchain because {reason}.\"\nprompt_template = PromptTemplate(template=template, \n input_variables=[\"reason\", \"foo\"]) # ValueError due to extra variables\nprompt_template = PromptTemplate(template=template, \n input_variables=[\"reason\", \"foo\"], \n validate_template=False) # No error\nSerialize prompt template#\nYou can save your PromptTemplate into a file in your local filesystem. langchain will automatically infer the file format through the file extension name. Currently, langchain supports saving template to YAML and JSON file.\nprompt_template.save(\"awesome_prompt.json\") # Save to JSON file\nfrom langchain.prompts import load_prompt\nloaded_prompt = load_prompt(\"awesome_prompt.json\")\nassert prompt_template == loaded_prompt\nlangchain also supports loading prompt template from LangChainHub, which contains a collection of useful prompts you can use in your project. You can read more about LangChainHub and the prompts available with it here.\nfrom langchain.prompts import load_prompt\nprompt = load_prompt(\"lc://prompts/conversation/prompt.json\")\nprompt.format(history=\"\", input=\"What is 1 + 1?\")\nYou can learn more about serializing prompt template in How to serialize prompts.\nPass few shot examples to a prompt template#\nFew shot examples are a set of examples that can be used to help the language model generate a better response.", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/getting_started.html"}
+{"id": "c4daaf11019d-3", "text": "To generate a prompt with few shot examples, you can use the FewShotPromptTemplate. This class takes in a PromptTemplate and a list of few shot examples. It then formats the prompt template with the few shot examples.\nIn this example, we\u2019ll create a prompt to generate word antonyms.\nfrom langchain import PromptTemplate, FewShotPromptTemplate\n# First, create the list of few shot examples.\nexamples = [\n {\"word\": \"happy\", \"antonym\": \"sad\"},\n {\"word\": \"tall\", \"antonym\": \"short\"},\n]\n# Next, we specify the template to format the examples we have provided.\n# We use the `PromptTemplate` class for this.\nexample_formatter_template = \"\"\"Word: {word}\nAntonym: {antonym}\n\"\"\"\nexample_prompt = PromptTemplate(\n input_variables=[\"word\", \"antonym\"],\n template=example_formatter_template,\n)\n# Finally, we create the `FewShotPromptTemplate` object.\nfew_shot_prompt = FewShotPromptTemplate(\n # These are the examples we want to insert into the prompt.\n examples=examples,\n # This is how we want to format the examples when we insert them into the prompt.\n example_prompt=example_prompt,\n # The prefix is some text that goes before the examples in the prompt.\n # Usually, this consists of intructions.\n prefix=\"Give the antonym of every input\\n\",\n # The suffix is some text that goes after the examples in the prompt.\n # Usually, this is where the user input will go\n suffix=\"Word: {input}\\nAntonym: \",\n # The input variables are the variables that the overall prompt expects.\n input_variables=[\"input\"],", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/getting_started.html"}
+{"id": "c4daaf11019d-4", "text": "input_variables=[\"input\"],\n # The example_separator is the string we will use to join the prefix, examples, and suffix together with.\n example_separator=\"\\n\",\n)\n# We can now generate a prompt using the `format` method.\nprint(few_shot_prompt.format(input=\"big\"))\n# -> Give the antonym of every input\n# -> \n# -> Word: happy\n# -> Antonym: sad\n# ->\n# -> Word: tall\n# -> Antonym: short\n# ->\n# -> Word: big\n# -> Antonym: \nSelect examples for a prompt template#\nIf you have a large number of examples, you can use the ExampleSelector to select a subset of examples that will be most informative for the Language Model. This will help you generate a prompt that is more likely to generate a good response.\nBelow, we\u2019ll use the LengthBasedExampleSelector, which selects examples based on the length of the input. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.\nWe\u2019ll continue with the example from the previous section, but this time we\u2019ll use the LengthBasedExampleSelector to select the examples.\nfrom langchain.prompts.example_selector import LengthBasedExampleSelector\n# These are a lot of examples of a pretend task of creating antonyms.\nexamples = [\n {\"word\": \"happy\", \"antonym\": \"sad\"},\n {\"word\": \"tall\", \"antonym\": \"short\"},\n {\"word\": \"energetic\", \"antonym\": \"lethargic\"},\n {\"word\": \"sunny\", \"antonym\": \"gloomy\"},\n {\"word\": \"windy\", \"antonym\": \"calm\"},\n]", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/getting_started.html"}
+{"id": "c4daaf11019d-5", "text": "{\"word\": \"windy\", \"antonym\": \"calm\"},\n]\n# We'll use the `LengthBasedExampleSelector` to select the examples.\nexample_selector = LengthBasedExampleSelector(\n # These are the examples is has available to choose from.\n examples=examples, \n # This is the PromptTemplate being used to format the examples.\n example_prompt=example_prompt, \n # This is the maximum length that the formatted examples should be.\n # Length is measured by the get_text_length function below.\n max_length=25\n # This is the function used to get the length of a string, which is used\n # to determine which examples to include. It is commented out because\n # it is provided as a default value if none is specified.\n # get_text_length: Callable[[str], int] = lambda x: len(re.split(\"\\n| \", x))\n)\n# We can now use the `example_selector` to create a `FewShotPromptTemplate`.\ndynamic_prompt = FewShotPromptTemplate(\n # We provide an ExampleSelector instead of examples.\n example_selector=example_selector,\n example_prompt=example_prompt,\n prefix=\"Give the antonym of every input\",\n suffix=\"Word: {input}\\nAntonym:\",\n input_variables=[\"input\"],\n example_separator=\"\\n\\n\",\n)\n# We can now generate a prompt using the `format` method.\nprint(dynamic_prompt.format(input=\"big\"))\n# -> Give the antonym of every input\n# ->\n# -> Word: happy\n# -> Antonym: sad\n# ->\n# -> Word: tall\n# -> Antonym: short\n# ->\n# -> Word: energetic\n# -> Antonym: lethargic\n# ->\n# -> Word: sunny", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/getting_started.html"}
+{"id": "c4daaf11019d-6", "text": "# -> Antonym: lethargic\n# ->\n# -> Word: sunny\n# -> Antonym: gloomy\n# ->\n# -> Word: windy\n# -> Antonym: calm\n# ->\n# -> Word: big\n# -> Antonym:\nIn contrast, if we provide a very long input, the LengthBasedExampleSelector will select fewer examples to include in the prompt.\nlong_string = \"big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else\"\nprint(dynamic_prompt.format(input=long_string))\n# -> Give the antonym of every input\n# -> Word: happy\n# -> Antonym: sad\n# ->\n# -> Word: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else\n# -> Antonym:\nLangChain comes with a few example selectors that you can use. For more details on how to use them, see Example Selectors.\nYou can create custom example selectors that select examples based on any criteria you want. For more details on how to do this, see Creating a custom example selector.\nprevious\nPrompt Templates\nnext\nHow-To Guides\n Contents\n \nWhat is a prompt template?\nCreate a prompt template\nTemplate formats\nValidate template\nSerialize prompt template\nPass few shot examples to a prompt template\nSelect examples for a prompt template\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/getting_started.html"}
+{"id": "2811086c268e-0", "text": ".ipynb\n.pdf\nPrompt Composition\nPrompt Composition#\nThis notebook goes over how to compose multiple prompts together. This can be useful when you want to reuse parts of prompts. This can be done with a PipelinePrompt. A PipelinePrompt consists of two main parts:\nfinal_prompt: This is the final prompt that is returned\npipeline_prompts: This is a list of tuples, consisting of a string (name) and a Prompt Template. Each PromptTemplate will be formatted and then passed to future prompt templates as a variable with the same name as name\nfrom langchain.prompts.pipeline import PipelinePromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nfull_template = \"\"\"{introduction}\n{example}\n{start}\"\"\"\nfull_prompt = PromptTemplate.from_template(full_template)\nintroduction_template = \"\"\"You are impersonating {person}.\"\"\"\nintroduction_prompt = PromptTemplate.from_template(introduction_template)\nexample_template = \"\"\"Here's an example of an interaction: \nQ: {example_q}\nA: {example_a}\"\"\"\nexample_prompt = PromptTemplate.from_template(example_template)\nstart_template = \"\"\"Now, do this for real!\nQ: {input}\nA:\"\"\"\nstart_prompt = PromptTemplate.from_template(start_template)\ninput_prompts = [\n (\"introduction\", introduction_prompt),\n (\"example\", example_prompt),\n (\"start\", start_prompt)\n]\npipeline_prompt = PipelinePromptTemplate(final_prompt=full_prompt, pipeline_prompts=input_prompts)\npipeline_prompt.input_variables\n['example_a', 'person', 'example_q', 'input']\nprint(pipeline_prompt.format(\n person=\"Elon Musk\",\n example_q=\"What's your favorite car?\",\n example_a=\"Telsa\",\n input=\"What's your favorite social media site?\"\n))\nYou are impersonating Elon Musk.", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_composition.html"}
+{"id": "2811086c268e-1", "text": "))\nYou are impersonating Elon Musk.\nHere's an example of an interaction: \nQ: What's your favorite car?\nA: Telsa\nNow, do this for real!\nQ: What's your favorite social media site?\nA:\nprevious\nHow to work with partial Prompt Templates\nnext\nHow to serialize prompts\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_composition.html"}
+{"id": "c7fe63509ff3-0", "text": ".ipynb\n.pdf\nHow to create a custom prompt template\n Contents \nWhy are custom prompt templates needed?\nCreating a Custom Prompt Template\nUse the custom prompt template\nHow to create a custom prompt template#\nLet\u2019s suppose we want the LLM to generate English language explanations of a function given its name. To achieve this task, we will create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function.\nWhy are custom prompt templates needed?#\nLangChain provides a set of default prompt templates that can be used to generate prompts for a variety of tasks. However, there may be cases where the default prompt templates do not meet your needs. For example, you may want to create a prompt template with specific dynamic instructions for your language model. In such cases, you can create a custom prompt template.\nTake a look at the current set of default prompt templates here.\nCreating a Custom Prompt Template#\nThere are essentially two distinct prompt templates available - string prompt templates and chat prompt templates. String prompt templates provides a simple prompt in string format, while chat prompt templates produces a more structured prompt to be used with a chat API.\nIn this guide, we will create a custom prompt using a string prompt template.\nTo create a custom string prompt template, there are two requirements:\nIt has an input_variables attribute that exposes what input variables the prompt template expects.\nIt exposes a format method that takes in keyword arguments corresponding to the expected input_variables and returns the formatted prompt.\nWe will create a custom prompt template that takes in the function name as input and formats the prompt to provide the source code of the function. To achieve this, let\u2019s first create a function that will return the source code of a function given its name.\nimport inspect\ndef get_source_code(function_name):\n # Get the source code of the function", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/custom_prompt_template.html"}
+{"id": "c7fe63509ff3-1", "text": "def get_source_code(function_name):\n # Get the source code of the function\n return inspect.getsource(function_name)\nNext, we\u2019ll create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function.\nfrom langchain.prompts import StringPromptTemplate\nfrom pydantic import BaseModel, validator\nclass FunctionExplainerPromptTemplate(StringPromptTemplate, BaseModel):\n \"\"\" A custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function. \"\"\"\n @validator(\"input_variables\")\n def validate_input_variables(cls, v):\n \"\"\" Validate that the input variables are correct. \"\"\"\n if len(v) != 1 or \"function_name\" not in v:\n raise ValueError(\"function_name must be the only input_variable.\")\n return v\n def format(self, **kwargs) -> str:\n # Get the source code of the function\n source_code = get_source_code(kwargs[\"function_name\"])\n # Generate the prompt to be sent to the language model\n prompt = f\"\"\"\n Given the function name and source code, generate an English language explanation of the function.\n Function Name: {kwargs[\"function_name\"].__name__}\n Source Code:\n {source_code}\n Explanation:\n \"\"\"\n return prompt\n \n def _prompt_type(self):\n return \"function-explainer\"\nUse the custom prompt template#\nNow that we have created a custom prompt template, we can use it to generate prompts for our task.\nfn_explainer = FunctionExplainerPromptTemplate(input_variables=[\"function_name\"])\n# Generate a prompt for the function \"get_source_code\"\nprompt = fn_explainer.format(function_name=get_source_code)\nprint(prompt)", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/custom_prompt_template.html"}
+{"id": "c7fe63509ff3-2", "text": "prompt = fn_explainer.format(function_name=get_source_code)\nprint(prompt)\n Given the function name and source code, generate an English language explanation of the function.\n Function Name: get_source_code\n Source Code:\n def get_source_code(function_name):\n # Get the source code of the function\n return inspect.getsource(function_name)\n Explanation:\n \nprevious\nConnecting to a Feature Store\nnext\nHow to create a prompt template that uses few shot examples\n Contents\n \nWhy are custom prompt templates needed?\nCreating a Custom Prompt Template\nUse the custom prompt template\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/custom_prompt_template.html"}
+{"id": "f4f538a1cf96-0", "text": ".ipynb\n.pdf\nHow to create a prompt template that uses few shot examples\n Contents \nUse Case\nUsing an example set\nCreate the example set\nCreate a formatter for the few shot examples\nFeed examples and formatter to FewShotPromptTemplate\nUsing an example selector\nFeed examples into ExampleSelector\nFeed example selector into FewShotPromptTemplate\nHow to create a prompt template that uses few shot examples#\nIn this tutorial, we\u2019ll learn how to create a prompt template that uses few shot examples.\nWe\u2019ll use the FewShotPromptTemplate class to create a prompt template that uses few shot examples. This class either takes in a set of examples, or an ExampleSelector object. In this tutorial, we\u2019ll go over both options.\nUse Case#\nIn this tutorial, we\u2019ll configure few shot examples for self-ask with search.\nUsing an example set#\nCreate the example set#\nTo get started, create a list of few shot examples. Each example should be a dictionary with the keys being the input variables and the values being the values for those input variables.\nfrom langchain.prompts.few_shot import FewShotPromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nexamples = [\n {\n \"question\": \"Who lived longer, Muhammad Ali or Alan Turing?\",\n \"answer\": \n\"\"\"\nAre follow up questions needed here: Yes.\nFollow up: How old was Muhammad Ali when he died?\nIntermediate answer: Muhammad Ali was 74 years old when he died.\nFollow up: How old was Alan Turing when he died?\nIntermediate answer: Alan Turing was 41 years old when he died.\nSo the final answer is: Muhammad Ali\n\"\"\"\n },\n {\n \"question\": \"When was the founder of craigslist born?\",\n \"answer\": \n\"\"\"\nAre follow up questions needed here: Yes.", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html"}
+{"id": "f4f538a1cf96-1", "text": "\"answer\": \n\"\"\"\nAre follow up questions needed here: Yes.\nFollow up: Who was the founder of craigslist?\nIntermediate answer: Craigslist was founded by Craig Newmark.\nFollow up: When was Craig Newmark born?\nIntermediate answer: Craig Newmark was born on December 6, 1952.\nSo the final answer is: December 6, 1952\n\"\"\"\n },\n {\n \"question\": \"Who was the maternal grandfather of George Washington?\",\n \"answer\":\n\"\"\"\nAre follow up questions needed here: Yes.\nFollow up: Who was the mother of George Washington?\nIntermediate answer: The mother of George Washington was Mary Ball Washington.\nFollow up: Who was the father of Mary Ball Washington?\nIntermediate answer: The father of Mary Ball Washington was Joseph Ball.\nSo the final answer is: Joseph Ball\n\"\"\"\n },\n {\n \"question\": \"Are both the directors of Jaws and Casino Royale from the same country?\",\n \"answer\":\n\"\"\"\nAre follow up questions needed here: Yes.\nFollow up: Who is the director of Jaws?\nIntermediate Answer: The director of Jaws is Steven Spielberg.\nFollow up: Where is Steven Spielberg from?\nIntermediate Answer: The United States.\nFollow up: Who is the director of Casino Royale?\nIntermediate Answer: The director of Casino Royale is Martin Campbell.\nFollow up: Where is Martin Campbell from?\nIntermediate Answer: New Zealand.\nSo the final answer is: No\n\"\"\"\n }\n]\nCreate a formatter for the few shot examples#\nConfigure a formatter that will format the few shot examples into a string. This formatter should be a PromptTemplate object.\nexample_prompt = PromptTemplate(input_variables=[\"question\", \"answer\"], template=\"Question: {question}\\n{answer}\")", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html"}
+{"id": "f4f538a1cf96-2", "text": "print(example_prompt.format(**examples[0]))\nQuestion: Who lived longer, Muhammad Ali or Alan Turing?\nAre follow up questions needed here: Yes.\nFollow up: How old was Muhammad Ali when he died?\nIntermediate answer: Muhammad Ali was 74 years old when he died.\nFollow up: How old was Alan Turing when he died?\nIntermediate answer: Alan Turing was 41 years old when he died.\nSo the final answer is: Muhammad Ali\nFeed examples and formatter to FewShotPromptTemplate#\nFinally, create a FewShotPromptTemplate object. This object takes in the few shot examples and the formatter for the few shot examples.\nprompt = FewShotPromptTemplate(\n examples=examples, \n example_prompt=example_prompt, \n suffix=\"Question: {input}\", \n input_variables=[\"input\"]\n)\nprint(prompt.format(input=\"Who was the father of Mary Ball Washington?\"))\nQuestion: Who lived longer, Muhammad Ali or Alan Turing?\nAre follow up questions needed here: Yes.\nFollow up: How old was Muhammad Ali when he died?\nIntermediate answer: Muhammad Ali was 74 years old when he died.\nFollow up: How old was Alan Turing when he died?\nIntermediate answer: Alan Turing was 41 years old when he died.\nSo the final answer is: Muhammad Ali\nQuestion: When was the founder of craigslist born?\nAre follow up questions needed here: Yes.\nFollow up: Who was the founder of craigslist?\nIntermediate answer: Craigslist was founded by Craig Newmark.\nFollow up: When was Craig Newmark born?\nIntermediate answer: Craig Newmark was born on December 6, 1952.\nSo the final answer is: December 6, 1952\nQuestion: Who was the maternal grandfather of George Washington?\nAre follow up questions needed here: Yes.", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html"}
+{"id": "f4f538a1cf96-3", "text": "Are follow up questions needed here: Yes.\nFollow up: Who was the mother of George Washington?\nIntermediate answer: The mother of George Washington was Mary Ball Washington.\nFollow up: Who was the father of Mary Ball Washington?\nIntermediate answer: The father of Mary Ball Washington was Joseph Ball.\nSo the final answer is: Joseph Ball\nQuestion: Are both the directors of Jaws and Casino Royale from the same country?\nAre follow up questions needed here: Yes.\nFollow up: Who is the director of Jaws?\nIntermediate Answer: The director of Jaws is Steven Spielberg.\nFollow up: Where is Steven Spielberg from?\nIntermediate Answer: The United States.\nFollow up: Who is the director of Casino Royale?\nIntermediate Answer: The director of Casino Royale is Martin Campbell.\nFollow up: Where is Martin Campbell from?\nIntermediate Answer: New Zealand.\nSo the final answer is: No\nQuestion: Who was the father of Mary Ball Washington?\nUsing an example selector#\nFeed examples into ExampleSelector#\nWe will reuse the example set and the formatter from the previous section. However, instead of feeding the examples directly into the FewShotPromptTemplate object, we will feed them into an ExampleSelector object.\nIn this tutorial, we will use the SemanticSimilarityExampleSelector class. This class selects few shot examples based on their similarity to the input. It uses an embedding model to compute the similarity between the input and the few shot examples, as well as a vector store to perform the nearest neighbor search.\nfrom langchain.prompts.example_selector import SemanticSimilarityExampleSelector\nfrom langchain.vectorstores import Chroma\nfrom langchain.embeddings import OpenAIEmbeddings\nexample_selector = SemanticSimilarityExampleSelector.from_examples(\n # This is the list of examples available to select from.\n examples,", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html"}
+{"id": "f4f538a1cf96-4", "text": "# This is the list of examples available to select from.\n examples,\n # This is the embedding class used to produce embeddings which are used to measure semantic similarity.\n OpenAIEmbeddings(),\n # This is the VectorStore class that is used to store the embeddings and do a similarity search over.\n Chroma,\n # This is the number of examples to produce.\n k=1\n)\n# Select the most similar example to the input.\nquestion = \"Who was the father of Mary Ball Washington?\"\nselected_examples = example_selector.select_examples({\"question\": question})\nprint(f\"Examples most similar to the input: {question}\")\nfor example in selected_examples:\n print(\"\\n\")\n for k, v in example.items():\n print(f\"{k}: {v}\")\nRunning Chroma using direct local API.\nUsing DuckDB in-memory for database. Data will be transient.\nExamples most similar to the input: Who was the father of Mary Ball Washington?\nquestion: Who was the maternal grandfather of George Washington?\nanswer: \nAre follow up questions needed here: Yes.\nFollow up: Who was the mother of George Washington?\nIntermediate answer: The mother of George Washington was Mary Ball Washington.\nFollow up: Who was the father of Mary Ball Washington?\nIntermediate answer: The father of Mary Ball Washington was Joseph Ball.\nSo the final answer is: Joseph Ball\nFeed example selector into FewShotPromptTemplate#\nFinally, create a FewShotPromptTemplate object. This object takes in the example selector and the formatter for the few shot examples.\nprompt = FewShotPromptTemplate(\n example_selector=example_selector, \n example_prompt=example_prompt, \n suffix=\"Question: {input}\", \n input_variables=[\"input\"]\n)", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html"}
+{"id": "f4f538a1cf96-5", "text": "suffix=\"Question: {input}\", \n input_variables=[\"input\"]\n)\nprint(prompt.format(input=\"Who was the father of Mary Ball Washington?\"))\nQuestion: Who was the maternal grandfather of George Washington?\nAre follow up questions needed here: Yes.\nFollow up: Who was the mother of George Washington?\nIntermediate answer: The mother of George Washington was Mary Ball Washington.\nFollow up: Who was the father of Mary Ball Washington?\nIntermediate answer: The father of Mary Ball Washington was Joseph Ball.\nSo the final answer is: Joseph Ball\nQuestion: Who was the father of Mary Ball Washington?\nprevious\nHow to create a custom prompt template\nnext\nHow to work with partial Prompt Templates\n Contents\n \nUse Case\nUsing an example set\nCreate the example set\nCreate a formatter for the few shot examples\nFeed examples and formatter to FewShotPromptTemplate\nUsing an example selector\nFeed examples into ExampleSelector\nFeed example selector into FewShotPromptTemplate\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html"}
+{"id": "3093c5c7403c-0", "text": ".ipynb\n.pdf\nHow to work with partial Prompt Templates\n Contents \nPartial With Strings\nPartial With Functions\nHow to work with partial Prompt Templates#\nA prompt template is a class with a .format method which takes in a key-value map and returns a string (a prompt) to pass to the language model. Like other methods, it can make sense to \u201cpartial\u201d a prompt template - eg pass in a subset of the required values, as to create a new prompt template which expects only the remaining subset of values.\nLangChain supports this in two ways: we allow for partially formatted prompts (1) with string values, (2) with functions that return string values. These two different ways support different use cases. In the documentation below we go over the motivations for both use cases as well as how to do it in LangChain.\nPartial With Strings#\nOne common use case for wanting to partial a prompt template is if you get some of the variables before others. For example, suppose you have a prompt template that requires two variables, foo and baz. If you get the foo value early on in the chain, but the baz value later, it can be annoying to wait until you have both variables in the same place to pass them to the prompt template. Instead, you can partial the prompt template with the foo value, and then pass the partialed prompt template along and just use that. Below is an example of doing this:\nfrom langchain.prompts import PromptTemplate\nprompt = PromptTemplate(template=\"{foo}{bar}\", input_variables=[\"foo\", \"bar\"])\npartial_prompt = prompt.partial(foo=\"foo\");\nprint(partial_prompt.format(bar=\"baz\"))\nfoobaz\nYou can also just initialize the prompt with the partialed variables.\nprompt = PromptTemplate(template=\"{foo}{bar}\", input_variables=[\"bar\"], partial_variables={\"foo\": \"foo\"})\nprint(prompt.format(bar=\"baz\"))\nfoobaz", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/partial.html"}
+{"id": "3093c5c7403c-1", "text": "print(prompt.format(bar=\"baz\"))\nfoobaz\nPartial With Functions#\nThe other common use is to partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can\u2019t hard code it in the prompt, and passing it along with the other input variables is a bit annoying. In this case, it\u2019s very handy to be able to partial the prompt with a function that always returns the current date.\nfrom datetime import datetime\ndef _get_datetime():\n now = datetime.now()\n return now.strftime(\"%m/%d/%Y, %H:%M:%S\")\nprompt = PromptTemplate(\n template=\"Tell me a {adjective} joke about the day {date}\", \n input_variables=[\"adjective\", \"date\"]\n);\npartial_prompt = prompt.partial(date=_get_datetime)\nprint(partial_prompt.format(adjective=\"funny\"))\nTell me a funny joke about the day 02/27/2023, 22:15:16\nYou can also just initialize the prompt with the partialed variables, which often makes more sense in this workflow.\nprompt = PromptTemplate(\n template=\"Tell me a {adjective} joke about the day {date}\", \n input_variables=[\"adjective\"],\n partial_variables={\"date\": _get_datetime}\n);\nprint(prompt.format(adjective=\"funny\"))\nTell me a funny joke about the day 02/27/2023, 22:15:16\nprevious\nHow to create a prompt template that uses few shot examples\nnext\nPrompt Composition\n Contents\n \nPartial With Strings\nPartial With Functions\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/partial.html"}
+{"id": "3093c5c7403c-2", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/partial.html"}
+{"id": "e766dc68a8c4-0", "text": ".ipynb\n.pdf\nHow to serialize prompts\n Contents \nPromptTemplate\nLoading from YAML\nLoading from JSON\nLoading Template from a File\nFewShotPromptTemplate\nExamples\nLoading from YAML\nLoading from JSON\nExamples in the Config\nExample Prompt from a File\nPromptTempalte with OutputParser\nHow to serialize prompts#\nIt is often preferrable to store prompts not as python code but as files. This can make it easy to share, store, and version prompts. This notebook covers how to do that in LangChain, walking through all the different types of prompts and the different serialization options.\nAt a high level, the following design principles are applied to serialization:\nBoth JSON and YAML are supported. We want to support serialization methods that are human readable on disk, and YAML and JSON are two of the most popular methods for that. Note that this rule applies to prompts. For other assets, like Examples, different serialization methods may be supported.\nWe support specifying everything in one file, or storing different components (templates, examples, etc) in different files and referencing them. For some cases, storing everything in file makes the most sense, but for others it is preferrable to split up some of the assets (long templates, large examples, reusable components). LangChain supports both.\nThere is also a single entry point to load prompts from disk, making it easy to load any type of prompt.\n# All prompts are loaded through the `load_prompt` function.\nfrom langchain.prompts import load_prompt\nPromptTemplate#\nThis section covers examples for loading a PromptTemplate.\nLoading from YAML#\nThis shows an example of loading a PromptTemplate from YAML.\n!cat simple_prompt.yaml\n_type: prompt\ninput_variables:\n [\"adjective\", \"content\"]\ntemplate: \n Tell me a {adjective} joke about {content}.\nprompt = load_prompt(\"simple_prompt.yaml\")", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html"}
+{"id": "e766dc68a8c4-1", "text": "prompt = load_prompt(\"simple_prompt.yaml\")\nprint(prompt.format(adjective=\"funny\", content=\"chickens\"))\nTell me a funny joke about chickens.\nLoading from JSON#\nThis shows an example of loading a PromptTemplate from JSON.\n!cat simple_prompt.json\n{\n \"_type\": \"prompt\",\n \"input_variables\": [\"adjective\", \"content\"],\n \"template\": \"Tell me a {adjective} joke about {content}.\"\n}\nprompt = load_prompt(\"simple_prompt.json\")\nprint(prompt.format(adjective=\"funny\", content=\"chickens\"))\nTell me a funny joke about chickens.\nLoading Template from a File#\nThis shows an example of storing the template in a separate file and then referencing it in the config. Notice that the key changes from template to template_path.\n!cat simple_template.txt\nTell me a {adjective} joke about {content}.\n!cat simple_prompt_with_template_file.json\n{\n \"_type\": \"prompt\",\n \"input_variables\": [\"adjective\", \"content\"],\n \"template_path\": \"simple_template.txt\"\n}\nprompt = load_prompt(\"simple_prompt_with_template_file.json\")\nprint(prompt.format(adjective=\"funny\", content=\"chickens\"))\nTell me a funny joke about chickens.\nFewShotPromptTemplate#\nThis section covers examples for loading few shot prompt templates.\nExamples#\nThis shows an example of what examples stored as json might look like.\n!cat examples.json\n[\n {\"input\": \"happy\", \"output\": \"sad\"},\n {\"input\": \"tall\", \"output\": \"short\"}\n]\nAnd here is what the same examples stored as yaml might look like.\n!cat examples.yaml\n- input: happy\n output: sad\n- input: tall\n output: short\nLoading from YAML#", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html"}
+{"id": "e766dc68a8c4-2", "text": "output: sad\n- input: tall\n output: short\nLoading from YAML#\nThis shows an example of loading a few shot example from YAML.\n!cat few_shot_prompt.yaml\n_type: few_shot\ninput_variables:\n [\"adjective\"]\nprefix: \n Write antonyms for the following words.\nexample_prompt:\n _type: prompt\n input_variables:\n [\"input\", \"output\"]\n template:\n \"Input: {input}\\nOutput: {output}\"\nexamples:\n examples.json\nsuffix:\n \"Input: {adjective}\\nOutput:\"\nprompt = load_prompt(\"few_shot_prompt.yaml\")\nprint(prompt.format(adjective=\"funny\"))\nWrite antonyms for the following words.\nInput: happy\nOutput: sad\nInput: tall\nOutput: short\nInput: funny\nOutput:\nThe same would work if you loaded examples from the yaml file.\n!cat few_shot_prompt_yaml_examples.yaml\n_type: few_shot\ninput_variables:\n [\"adjective\"]\nprefix: \n Write antonyms for the following words.\nexample_prompt:\n _type: prompt\n input_variables:\n [\"input\", \"output\"]\n template:\n \"Input: {input}\\nOutput: {output}\"\nexamples:\n examples.yaml\nsuffix:\n \"Input: {adjective}\\nOutput:\"\nprompt = load_prompt(\"few_shot_prompt_yaml_examples.yaml\")\nprint(prompt.format(adjective=\"funny\"))\nWrite antonyms for the following words.\nInput: happy\nOutput: sad\nInput: tall\nOutput: short\nInput: funny\nOutput:\nLoading from JSON#\nThis shows an example of loading a few shot example from JSON.\n!cat few_shot_prompt.json\n{\n \"_type\": \"few_shot\",", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html"}
+{"id": "e766dc68a8c4-3", "text": "!cat few_shot_prompt.json\n{\n \"_type\": \"few_shot\",\n \"input_variables\": [\"adjective\"],\n \"prefix\": \"Write antonyms for the following words.\",\n \"example_prompt\": {\n \"_type\": \"prompt\",\n \"input_variables\": [\"input\", \"output\"],\n \"template\": \"Input: {input}\\nOutput: {output}\"\n },\n \"examples\": \"examples.json\",\n \"suffix\": \"Input: {adjective}\\nOutput:\"\n} \nprompt = load_prompt(\"few_shot_prompt.json\")\nprint(prompt.format(adjective=\"funny\"))\nWrite antonyms for the following words.\nInput: happy\nOutput: sad\nInput: tall\nOutput: short\nInput: funny\nOutput:\nExamples in the Config#\nThis shows an example of referencing the examples directly in the config.\n!cat few_shot_prompt_examples_in.json\n{\n \"_type\": \"few_shot\",\n \"input_variables\": [\"adjective\"],\n \"prefix\": \"Write antonyms for the following words.\",\n \"example_prompt\": {\n \"_type\": \"prompt\",\n \"input_variables\": [\"input\", \"output\"],\n \"template\": \"Input: {input}\\nOutput: {output}\"\n },\n \"examples\": [\n {\"input\": \"happy\", \"output\": \"sad\"},\n {\"input\": \"tall\", \"output\": \"short\"}\n ],\n \"suffix\": \"Input: {adjective}\\nOutput:\"\n} \nprompt = load_prompt(\"few_shot_prompt_examples_in.json\")\nprint(prompt.format(adjective=\"funny\"))\nWrite antonyms for the following words.\nInput: happy\nOutput: sad\nInput: tall\nOutput: short\nInput: funny\nOutput:\nExample Prompt from a File#", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html"}
+{"id": "e766dc68a8c4-4", "text": "Output: short\nInput: funny\nOutput:\nExample Prompt from a File#\nThis shows an example of loading the PromptTemplate that is used to format the examples from a separate file. Note that the key changes from example_prompt to example_prompt_path.\n!cat example_prompt.json\n{\n \"_type\": \"prompt\",\n \"input_variables\": [\"input\", \"output\"],\n \"template\": \"Input: {input}\\nOutput: {output}\" \n}\n!cat few_shot_prompt_example_prompt.json \n{\n \"_type\": \"few_shot\",\n \"input_variables\": [\"adjective\"],\n \"prefix\": \"Write antonyms for the following words.\",\n \"example_prompt_path\": \"example_prompt.json\",\n \"examples\": \"examples.json\",\n \"suffix\": \"Input: {adjective}\\nOutput:\"\n} \nprompt = load_prompt(\"few_shot_prompt_example_prompt.json\")\nprint(prompt.format(adjective=\"funny\"))\nWrite antonyms for the following words.\nInput: happy\nOutput: sad\nInput: tall\nOutput: short\nInput: funny\nOutput:\nPromptTempalte with OutputParser#\nThis shows an example of loading a prompt along with an OutputParser from a file.\n! cat prompt_with_output_parser.json\n{\n \"input_variables\": [\n \"question\",\n \"student_answer\"\n ],\n \"output_parser\": {\n \"regex\": \"(.*?)\\\\nScore: (.*)\",\n \"output_keys\": [\n \"answer\",\n \"score\"\n ],\n \"default_output_key\": null,\n \"_type\": \"regex_parser\"\n },\n \"partial_variables\": {},", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html"}
+{"id": "e766dc68a8c4-5", "text": "\"_type\": \"regex_parser\"\n },\n \"partial_variables\": {},\n \"template\": \"Given the following question and student answer, provide a correct answer and score the student answer.\\nQuestion: {question}\\nStudent Answer: {student_answer}\\nCorrect Answer:\",\n \"template_format\": \"f-string\",\n \"validate_template\": true,\n \"_type\": \"prompt\"\n}\nprompt = load_prompt(\"prompt_with_output_parser.json\")\nprompt.output_parser.parse(\"George Washington was born in 1732 and died in 1799.\\nScore: 1/2\")\n{'answer': 'George Washington was born in 1732 and died in 1799.',\n 'score': '1/2'}\nprevious\nPrompt Composition\nnext\nPrompts\n Contents\n \nPromptTemplate\nLoading from YAML\nLoading from JSON\nLoading Template from a File\nFewShotPromptTemplate\nExamples\nLoading from YAML\nLoading from JSON\nExamples in the Config\nExample Prompt from a File\nPromptTempalte with OutputParser\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html"}
+{"id": "d20e2d3f9f23-0", "text": ".ipynb\n.pdf\nConnecting to a Feature Store\n Contents \nFeast\nLoad Feast Store\nPrompts\nUse in a chain\nTecton\nPrerequisites\nDefine and Load Features\nPrompts\nUse in a chain\nFeatureform\nInitialize Featureform\nPrompts\nUse in a chain\nConnecting to a Feature Store#\nFeature stores are a concept from traditional machine learning that make sure data fed into models is up-to-date and relevant. For more on this, see here.\nThis concept is extremely relevant when considering putting LLM applications in production. In order to personalize LLM applications, you may want to combine LLMs with up-to-date information about particular users. Feature stores can be a great way to keep that data fresh, and LangChain provides an easy way to combine that data with LLMs.\nIn this notebook we will show how to connect prompt templates to feature stores. The basic idea is to call a feature store from inside a prompt template to retrieve values that are then formatted into the prompt.\nFeast#\nTo start, we will use the popular open source feature store framework Feast.\nThis assumes you have already run the steps in the README around getting started. We will build of off that example in getting started, and create and LLMChain to write a note to a specific driver regarding their up-to-date statistics.\nLoad Feast Store#\nAgain, this should be set up according to the instructions in the Feast README\nfrom feast import FeatureStore\n# You may need to update the path depending on where you stored it\nfeast_repo_path = \"../../../../../my_feature_repo/feature_repo/\"\nstore = FeatureStore(repo_path=feast_repo_path)\nPrompts#\nHere we will set up a custom FeastPromptTemplate. This prompt template will take in a driver id, look up their stats, and format those stats into a prompt.", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html"}
+{"id": "d20e2d3f9f23-1", "text": "Note that the input to this prompt template is just driver_id, since that is the only user defined piece (all other variables are looked up inside the prompt template).\nfrom langchain.prompts import PromptTemplate, StringPromptTemplate\ntemplate = \"\"\"Given the driver's up to date stats, write them note relaying those stats to them.\nIf they have a conversation rate above .5, give them a compliment. Otherwise, make a silly joke about chickens at the end to make them feel better\nHere are the drivers stats:\nConversation rate: {conv_rate}\nAcceptance rate: {acc_rate}\nAverage Daily Trips: {avg_daily_trips}\nYour response:\"\"\"\nprompt = PromptTemplate.from_template(template)\nclass FeastPromptTemplate(StringPromptTemplate):\n \n def format(self, **kwargs) -> str:\n driver_id = kwargs.pop(\"driver_id\")\n feature_vector = store.get_online_features(\n features=[\n 'driver_hourly_stats:conv_rate',\n 'driver_hourly_stats:acc_rate',\n 'driver_hourly_stats:avg_daily_trips'\n ],\n entity_rows=[{\"driver_id\": driver_id}]\n ).to_dict()\n kwargs[\"conv_rate\"] = feature_vector[\"conv_rate\"][0]\n kwargs[\"acc_rate\"] = feature_vector[\"acc_rate\"][0]\n kwargs[\"avg_daily_trips\"] = feature_vector[\"avg_daily_trips\"][0]\n return prompt.format(**kwargs)\nprompt_template = FeastPromptTemplate(input_variables=[\"driver_id\"])\nprint(prompt_template.format(driver_id=1001))\nGiven the driver's up to date stats, write them note relaying those stats to them.\nIf they have a conversation rate above .5, give them a compliment. Otherwise, make a silly joke about chickens at the end to make them feel better\nHere are the drivers stats:", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html"}
+{"id": "d20e2d3f9f23-2", "text": "Here are the drivers stats:\nConversation rate: 0.4745151400566101\nAcceptance rate: 0.055561766028404236\nAverage Daily Trips: 936\nYour response:\nUse in a chain#\nWe can now use this in a chain, successfully creating a chain that achieves personalization backed by a feature store\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.chains import LLMChain\nchain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)\nchain.run(1001)\n\"Hi there! I wanted to update you on your current stats. Your acceptance rate is 0.055561766028404236 and your average daily trips are 936. While your conversation rate is currently 0.4745151400566101, I have no doubt that with a little extra effort, you'll be able to exceed that .5 mark! Keep up the great work! And remember, even chickens can't always cross the road, but they still give it their best shot.\"\nTecton#\nAbove, we showed how you could use Feast, a popular open source and self-managed feature store, with LangChain. Our examples below will show a similar integration using Tecton. Tecton is a fully managed feature platform built to orchestrate the complete ML feature lifecycle, from transformation to online serving, with enterprise-grade SLAs.\nPrerequisites#\nTecton Deployment (sign up at https://tecton.ai)\nTECTON_API_KEY environment variable set to a valid Service Account key\nDefine and Load Features#\nWe will use the user_transaction_counts Feature View from the Tecton tutorial as part of a Feature Service. For simplicity, we are only using a single Feature View; however, more sophisticated applications may require more feature views to retrieve the features needed for its prompt.\nuser_transaction_metrics = FeatureService(", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html"}
+{"id": "d20e2d3f9f23-3", "text": "user_transaction_metrics = FeatureService(\n name = \"user_transaction_metrics\",\n features = [user_transaction_counts]\n)\nThe above Feature Service is expected to be applied to a live workspace. For this example, we will be using the \u201cprod\u201d workspace.\nimport tecton\nworkspace = tecton.get_workspace(\"prod\")\nfeature_service = workspace.get_feature_service(\"user_transaction_metrics\")\nPrompts#\nHere we will set up a custom TectonPromptTemplate. This prompt template will take in a user_id , look up their stats, and format those stats into a prompt.\nNote that the input to this prompt template is just user_id, since that is the only user defined piece (all other variables are looked up inside the prompt template).\nfrom langchain.prompts import PromptTemplate, StringPromptTemplate\ntemplate = \"\"\"Given the vendor's up to date transaction stats, write them a note based on the following rules:\n1. If they had a transaction in the last day, write a short congratulations message on their recent sales\n2. If no transaction in the last day, but they had a transaction in the last 30 days, playfully encourage them to sell more.\n3. Always add a silly joke about chickens at the end\nHere are the vendor's stats:\nNumber of Transactions Last Day: {transaction_count_1d}\nNumber of Transactions Last 30 Days: {transaction_count_30d}\nYour response:\"\"\"\nprompt = PromptTemplate.from_template(template)\nclass TectonPromptTemplate(StringPromptTemplate):\n \n def format(self, **kwargs) -> str:\n user_id = kwargs.pop(\"user_id\")\n feature_vector = feature_service.get_online_features(join_keys={\"user_id\": user_id}).to_dict()\n kwargs[\"transaction_count_1d\"] = feature_vector[\"user_transaction_counts.transaction_count_1d_1d\"]", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html"}
+{"id": "d20e2d3f9f23-4", "text": "kwargs[\"transaction_count_30d\"] = feature_vector[\"user_transaction_counts.transaction_count_30d_1d\"]\n return prompt.format(**kwargs)\nprompt_template = TectonPromptTemplate(input_variables=[\"user_id\"])\nprint(prompt_template.format(user_id=\"user_469998441571\"))\nGiven the vendor's up to date transaction stats, write them a note based on the following rules:\n1. If they had a transaction in the last day, write a short congratulations message on their recent sales\n2. If no transaction in the last day, but they had a transaction in the last 30 days, playfully encourage them to sell more.\n3. Always add a silly joke about chickens at the end\nHere are the vendor's stats:\nNumber of Transactions Last Day: 657\nNumber of Transactions Last 30 Days: 20326\nYour response:\nUse in a chain#\nWe can now use this in a chain, successfully creating a chain that achieves personalization backed by the Tecton Feature Platform\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.chains import LLMChain\nchain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)\nchain.run(\"user_469998441571\")\n'Wow, congratulations on your recent sales! Your business is really soaring like a chicken on a hot air balloon! Keep up the great work!'\nFeatureform#\nFinally, we will use Featureform an open-source and enterprise-grade feature store to run the same example. Featureform allows you to work with your infrastructure like Spark or locally to define your feature transformations.\nInitialize Featureform#\nYou can follow in the instructions in the README to initialize your transformations and features in Featureform.\nimport featureform as ff\nclient = ff.Client(host=\"demo.featureform.com\")\nPrompts#", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html"}
+{"id": "d20e2d3f9f23-5", "text": "client = ff.Client(host=\"demo.featureform.com\")\nPrompts#\nHere we will set up a custom FeatureformPromptTemplate. This prompt template will take in the average amount a user pays per transactions.\nNote that the input to this prompt template is just avg_transaction, since that is the only user defined piece (all other variables are looked up inside the prompt template).\nfrom langchain.prompts import PromptTemplate, StringPromptTemplate\ntemplate = \"\"\"Given the amount a user spends on average per transaction, let them know if they are a high roller. Otherwise, make a silly joke about chickens at the end to make them feel better\nHere are the user's stats:\nAverage Amount per Transaction: ${avg_transcation}\nYour response:\"\"\"\nprompt = PromptTemplate.from_template(template)\nclass FeatureformPromptTemplate(StringPromptTemplate):\n \n def format(self, **kwargs) -> str:\n user_id = kwargs.pop(\"user_id\")\n fpf = client.features([(\"avg_transactions\", \"quickstart\")], {\"user\": user_id})\n return prompt.format(**kwargs)\nprompt_template = FeatureformPrompTemplate(input_variables=[\"user_id\"])\nprint(prompt_template.format(user_id=\"C1410926\"))\nUse in a chain#\nWe can now use this in a chain, successfully creating a chain that achieves personalization backed by the Featureform Feature Platform\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.chains import LLMChain\nchain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)\nchain.run(\"C1410926\")\nprevious\nHow-To Guides\nnext\nHow to create a custom prompt template\n Contents\n \nFeast\nLoad Feast Store\nPrompts\nUse in a chain\nTecton\nPrerequisites\nDefine and Load Features\nPrompts\nUse in a chain\nFeatureform\nInitialize Featureform", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html"}
+{"id": "d20e2d3f9f23-6", "text": "Define and Load Features\nPrompts\nUse in a chain\nFeatureform\nInitialize Featureform\nPrompts\nUse in a chain\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html"}
+{"id": "ee830c0ee723-0", "text": ".ipynb\n.pdf\nMaximal Marginal Relevance ExampleSelector\nMaximal Marginal Relevance ExampleSelector#\nThe MaxMarginalRelevanceExampleSelector selects examples based on a combination of which examples are most similar to the inputs, while also optimizing for diversity. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs, and then iteratively adding them while penalizing them for closeness to already selected examples.\nfrom langchain.prompts.example_selector import MaxMarginalRelevanceExampleSelector, SemanticSimilarityExampleSelector\nfrom langchain.vectorstores import FAISS\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.prompts import FewShotPromptTemplate, PromptTemplate\nexample_prompt = PromptTemplate(\n input_variables=[\"input\", \"output\"],\n template=\"Input: {input}\\nOutput: {output}\",\n)\n# These are a lot of examples of a pretend task of creating antonyms.\nexamples = [\n {\"input\": \"happy\", \"output\": \"sad\"},\n {\"input\": \"tall\", \"output\": \"short\"},\n {\"input\": \"energetic\", \"output\": \"lethargic\"},\n {\"input\": \"sunny\", \"output\": \"gloomy\"},\n {\"input\": \"windy\", \"output\": \"calm\"},\n]\nexample_selector = MaxMarginalRelevanceExampleSelector.from_examples(\n # This is the list of examples available to select from.\n examples, \n # This is the embedding class used to produce embeddings which are used to measure semantic similarity.\n OpenAIEmbeddings(), \n # This is the VectorStore class that is used to store the embeddings and do a similarity search over.\n FAISS, \n # This is the number of examples to produce.\n k=2\n)", "source": "https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/mmr.html"}
+{"id": "ee830c0ee723-1", "text": "# This is the number of examples to produce.\n k=2\n)\nmmr_prompt = FewShotPromptTemplate(\n # We provide an ExampleSelector instead of examples.\n example_selector=example_selector,\n example_prompt=example_prompt,\n prefix=\"Give the antonym of every input\",\n suffix=\"Input: {adjective}\\nOutput:\", \n input_variables=[\"adjective\"],\n)\n# Input is a feeling, so should select the happy/sad example as the first one\nprint(mmr_prompt.format(adjective=\"worried\"))\nGive the antonym of every input\nInput: happy\nOutput: sad\nInput: windy\nOutput: calm\nInput: worried\nOutput:\n# Let's compare this to what we would just get if we went solely off of similarity,\n# by using SemanticSimilarityExampleSelector instead of MaxMarginalRelevanceExampleSelector.\nexample_selector = SemanticSimilarityExampleSelector.from_examples(\n # This is the list of examples available to select from.\n examples, \n # This is the embedding class used to produce embeddings which are used to measure semantic similarity.\n OpenAIEmbeddings(), \n # This is the VectorStore class that is used to store the embeddings and do a similarity search over.\n FAISS, \n # This is the number of examples to produce.\n k=2\n)\nsimilar_prompt = FewShotPromptTemplate(\n # We provide an ExampleSelector instead of examples.\n example_selector=example_selector,\n example_prompt=example_prompt,\n prefix=\"Give the antonym of every input\",\n suffix=\"Input: {adjective}\\nOutput:\", \n input_variables=[\"adjective\"],\n)\nprint(similar_prompt.format(adjective=\"worried\"))\nGive the antonym of every input\nInput: happy", "source": "https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/mmr.html"}
+{"id": "ee830c0ee723-2", "text": "Give the antonym of every input\nInput: happy\nOutput: sad\nInput: sunny\nOutput: gloomy\nInput: worried\nOutput:\nprevious\nLengthBased ExampleSelector\nnext\nNGram Overlap ExampleSelector\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/mmr.html"}
+{"id": "f49cf0de5acc-0", "text": ".ipynb\n.pdf\nLengthBased ExampleSelector\nLengthBased ExampleSelector#\nThis ExampleSelector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.\nfrom langchain.prompts import PromptTemplate\nfrom langchain.prompts import FewShotPromptTemplate\nfrom langchain.prompts.example_selector import LengthBasedExampleSelector\n# These are a lot of examples of a pretend task of creating antonyms.\nexamples = [\n {\"input\": \"happy\", \"output\": \"sad\"},\n {\"input\": \"tall\", \"output\": \"short\"},\n {\"input\": \"energetic\", \"output\": \"lethargic\"},\n {\"input\": \"sunny\", \"output\": \"gloomy\"},\n {\"input\": \"windy\", \"output\": \"calm\"},\n]\nexample_prompt = PromptTemplate(\n input_variables=[\"input\", \"output\"],\n template=\"Input: {input}\\nOutput: {output}\",\n)\nexample_selector = LengthBasedExampleSelector(\n # These are the examples it has available to choose from.\n examples=examples, \n # This is the PromptTemplate being used to format the examples.\n example_prompt=example_prompt, \n # This is the maximum length that the formatted examples should be.\n # Length is measured by the get_text_length function below.\n max_length=25,\n # This is the function used to get the length of a string, which is used\n # to determine which examples to include. It is commented out because\n # it is provided as a default value if none is specified.", "source": "https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/length_based.html"}
+{"id": "f49cf0de5acc-1", "text": "# it is provided as a default value if none is specified.\n # get_text_length: Callable[[str], int] = lambda x: len(re.split(\"\\n| \", x))\n)\ndynamic_prompt = FewShotPromptTemplate(\n # We provide an ExampleSelector instead of examples.\n example_selector=example_selector,\n example_prompt=example_prompt,\n prefix=\"Give the antonym of every input\",\n suffix=\"Input: {adjective}\\nOutput:\", \n input_variables=[\"adjective\"],\n)\n# An example with small input, so it selects all examples.\nprint(dynamic_prompt.format(adjective=\"big\"))\nGive the antonym of every input\nInput: happy\nOutput: sad\nInput: tall\nOutput: short\nInput: energetic\nOutput: lethargic\nInput: sunny\nOutput: gloomy\nInput: windy\nOutput: calm\nInput: big\nOutput:\n# An example with long input, so it selects only one example.\nlong_string = \"big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else\"\nprint(dynamic_prompt.format(adjective=long_string))\nGive the antonym of every input\nInput: happy\nOutput: sad\nInput: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else\nOutput:\n# You can add an example to an example selector as well.\nnew_example = {\"input\": \"big\", \"output\": \"small\"}\ndynamic_prompt.example_selector.add_example(new_example)\nprint(dynamic_prompt.format(adjective=\"enthusiastic\"))\nGive the antonym of every input\nInput: happy\nOutput: sad\nInput: tall\nOutput: short\nInput: energetic\nOutput: lethargic\nInput: sunny\nOutput: gloomy\nInput: windy\nOutput: calm", "source": "https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/length_based.html"}
+{"id": "f49cf0de5acc-2", "text": "Input: sunny\nOutput: gloomy\nInput: windy\nOutput: calm\nInput: big\nOutput: small\nInput: enthusiastic\nOutput:\nprevious\nHow to create a custom example selector\nnext\nMaximal Marginal Relevance ExampleSelector\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/length_based.html"}
+{"id": "4bd6513c1b49-0", "text": ".ipynb\n.pdf\nNGram Overlap ExampleSelector\nNGram Overlap ExampleSelector#\nThe NGramOverlapExampleSelector selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive.\nThe selector allows for a threshold score to be set. Examples with an ngram overlap score less than or equal to the threshold are excluded. The threshold is set to -1.0, by default, so will not exclude any examples, only reorder them. Setting the threshold to 0.0 will exclude examples that have no ngram overlaps with the input.\nfrom langchain.prompts import PromptTemplate\nfrom langchain.prompts.example_selector.ngram_overlap import NGramOverlapExampleSelector\nfrom langchain.prompts import FewShotPromptTemplate, PromptTemplate\nexample_prompt = PromptTemplate(\n input_variables=[\"input\", \"output\"],\n template=\"Input: {input}\\nOutput: {output}\",\n)\n# These are a lot of examples of a pretend task of creating antonyms.\nexamples = [\n {\"input\": \"happy\", \"output\": \"sad\"},\n {\"input\": \"tall\", \"output\": \"short\"},\n {\"input\": \"energetic\", \"output\": \"lethargic\"},\n {\"input\": \"sunny\", \"output\": \"gloomy\"},\n {\"input\": \"windy\", \"output\": \"calm\"},\n]\n# These are examples of a fictional translation task.\nexamples = [\n {\"input\": \"See Spot run.\", \"output\": \"Ver correr a Spot.\"},\n {\"input\": \"My dog barks.\", \"output\": \"Mi perro ladra.\"},\n {\"input\": \"Spot can run.\", \"output\": \"Spot puede correr.\"},", "source": "https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html"}
+{"id": "4bd6513c1b49-1", "text": "{\"input\": \"Spot can run.\", \"output\": \"Spot puede correr.\"},\n]\nexample_prompt = PromptTemplate(\n input_variables=[\"input\", \"output\"],\n template=\"Input: {input}\\nOutput: {output}\",\n)\nexample_selector = NGramOverlapExampleSelector(\n # These are the examples it has available to choose from.\n examples=examples, \n # This is the PromptTemplate being used to format the examples.\n example_prompt=example_prompt, \n # This is the threshold, at which selector stops.\n # It is set to -1.0 by default.\n threshold=-1.0,\n # For negative threshold:\n # Selector sorts examples by ngram overlap score, and excludes none.\n # For threshold greater than 1.0:\n # Selector excludes all examples, and returns an empty list.\n # For threshold equal to 0.0:\n # Selector sorts examples by ngram overlap score,\n # and excludes those with no ngram overlap with input.\n)\ndynamic_prompt = FewShotPromptTemplate(\n # We provide an ExampleSelector instead of examples.\n example_selector=example_selector,\n example_prompt=example_prompt,\n prefix=\"Give the Spanish translation of every input\",\n suffix=\"Input: {sentence}\\nOutput:\", \n input_variables=[\"sentence\"],\n)\n# An example input with large ngram overlap with \"Spot can run.\"\n# and no overlap with \"My dog barks.\"\nprint(dynamic_prompt.format(sentence=\"Spot can run fast.\"))\nGive the Spanish translation of every input\nInput: Spot can run.\nOutput: Spot puede correr.\nInput: See Spot run.\nOutput: Ver correr a Spot.\nInput: My dog barks.", "source": "https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html"}
+{"id": "4bd6513c1b49-2", "text": "Output: Ver correr a Spot.\nInput: My dog barks.\nOutput: Mi perro ladra.\nInput: Spot can run fast.\nOutput:\n# You can add examples to NGramOverlapExampleSelector as well.\nnew_example = {\"input\": \"Spot plays fetch.\", \"output\": \"Spot juega a buscar.\"}\nexample_selector.add_example(new_example)\nprint(dynamic_prompt.format(sentence=\"Spot can run fast.\"))\nGive the Spanish translation of every input\nInput: Spot can run.\nOutput: Spot puede correr.\nInput: See Spot run.\nOutput: Ver correr a Spot.\nInput: Spot plays fetch.\nOutput: Spot juega a buscar.\nInput: My dog barks.\nOutput: Mi perro ladra.\nInput: Spot can run fast.\nOutput:\n# You can set a threshold at which examples are excluded.\n# For example, setting threshold equal to 0.0\n# excludes examples with no ngram overlaps with input.\n# Since \"My dog barks.\" has no ngram overlaps with \"Spot can run fast.\"\n# it is excluded.\nexample_selector.threshold=0.0\nprint(dynamic_prompt.format(sentence=\"Spot can run fast.\"))\nGive the Spanish translation of every input\nInput: Spot can run.\nOutput: Spot puede correr.\nInput: See Spot run.\nOutput: Ver correr a Spot.\nInput: Spot plays fetch.\nOutput: Spot juega a buscar.\nInput: Spot can run fast.\nOutput:\n# Setting small nonzero threshold\nexample_selector.threshold=0.09\nprint(dynamic_prompt.format(sentence=\"Spot can play fetch.\"))\nGive the Spanish translation of every input\nInput: Spot can run.\nOutput: Spot puede correr.\nInput: Spot plays fetch.\nOutput: Spot juega a buscar.", "source": "https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html"}
+{"id": "4bd6513c1b49-3", "text": "Input: Spot plays fetch.\nOutput: Spot juega a buscar.\nInput: Spot can play fetch.\nOutput:\n# Setting threshold greater than 1.0\nexample_selector.threshold=1.0+1e-9\nprint(dynamic_prompt.format(sentence=\"Spot can play fetch.\"))\nGive the Spanish translation of every input\nInput: Spot can play fetch.\nOutput:\nprevious\nMaximal Marginal Relevance ExampleSelector\nnext\nSimilarity ExampleSelector\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html"}
+{"id": "e671198a353e-0", "text": ".ipynb\n.pdf\nSimilarity ExampleSelector\nSimilarity ExampleSelector#\nThe SemanticSimilarityExampleSelector selects examples based on which examples are most similar to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs.\nfrom langchain.prompts.example_selector import SemanticSimilarityExampleSelector\nfrom langchain.vectorstores import Chroma\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.prompts import FewShotPromptTemplate, PromptTemplate\nexample_prompt = PromptTemplate(\n input_variables=[\"input\", \"output\"],\n template=\"Input: {input}\\nOutput: {output}\",\n)\n# These are a lot of examples of a pretend task of creating antonyms.\nexamples = [\n {\"input\": \"happy\", \"output\": \"sad\"},\n {\"input\": \"tall\", \"output\": \"short\"},\n {\"input\": \"energetic\", \"output\": \"lethargic\"},\n {\"input\": \"sunny\", \"output\": \"gloomy\"},\n {\"input\": \"windy\", \"output\": \"calm\"},\n]\nexample_selector = SemanticSimilarityExampleSelector.from_examples(\n # This is the list of examples available to select from.\n examples, \n # This is the embedding class used to produce embeddings which are used to measure semantic similarity.\n OpenAIEmbeddings(), \n # This is the VectorStore class that is used to store the embeddings and do a similarity search over.\n Chroma, \n # This is the number of examples to produce.\n k=1\n)\nsimilar_prompt = FewShotPromptTemplate(\n # We provide an ExampleSelector instead of examples.\n example_selector=example_selector,\n example_prompt=example_prompt,\n prefix=\"Give the antonym of every input\",", "source": "https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/similarity.html"}
+{"id": "e671198a353e-1", "text": "example_prompt=example_prompt,\n prefix=\"Give the antonym of every input\",\n suffix=\"Input: {adjective}\\nOutput:\", \n input_variables=[\"adjective\"],\n)\nRunning Chroma using direct local API.\nUsing DuckDB in-memory for database. Data will be transient.\n# Input is a feeling, so should select the happy/sad example\nprint(similar_prompt.format(adjective=\"worried\"))\nGive the antonym of every input\nInput: happy\nOutput: sad\nInput: worried\nOutput:\n# Input is a measurement, so should select the tall/short example\nprint(similar_prompt.format(adjective=\"fat\"))\nGive the antonym of every input\nInput: happy\nOutput: sad\nInput: fat\nOutput:\n# You can add new examples to the SemanticSimilarityExampleSelector as well\nsimilar_prompt.example_selector.add_example({\"input\": \"enthusiastic\", \"output\": \"apathetic\"})\nprint(similar_prompt.format(adjective=\"joyful\"))\nGive the antonym of every input\nInput: happy\nOutput: sad\nInput: joyful\nOutput:\nprevious\nNGram Overlap ExampleSelector\nnext\nOutput Parsers\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/similarity.html"}
+{"id": "c6749396e008-0", "text": ".md\n.pdf\nHow to create a custom example selector\n Contents \nImplement custom example selector\nUse custom example selector\nHow to create a custom example selector#\nIn this tutorial, we\u2019ll create a custom example selector that selects every alternate example from a given list of examples.\nAn ExampleSelector must implement two methods:\nAn add_example method which takes in an example and adds it into the ExampleSelector\nA select_examples method which takes in input variables (which are meant to be user input) and returns a list of examples to use in the few shot prompt.\nLet\u2019s implement a custom ExampleSelector that just selects two examples at random.\nNote\nTake a look at the current set of example selector implementations supported in LangChain here.\nImplement custom example selector#\nfrom langchain.prompts.example_selector.base import BaseExampleSelector\nfrom typing import Dict, List\nimport numpy as np\nclass CustomExampleSelector(BaseExampleSelector):\n \n def __init__(self, examples: List[Dict[str, str]]):\n self.examples = examples\n \n def add_example(self, example: Dict[str, str]) -> None:\n \"\"\"Add new example to store for a key.\"\"\"\n self.examples.append(example)\n def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n \"\"\"Select which examples to use based on the inputs.\"\"\"\n return np.random.choice(self.examples, size=2, replace=False)\nUse custom example selector#\nexamples = [\n {\"foo\": \"1\"},\n {\"foo\": \"2\"},\n {\"foo\": \"3\"}\n]\n# Initialize example selector.\nexample_selector = CustomExampleSelector(examples)\n# Select examples\nexample_selector.select_examples({\"foo\": \"foo\"})\n# -> array([{'foo': '2'}, {'foo': '3'}], dtype=object)", "source": "https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html"}
+{"id": "c6749396e008-1", "text": "# Add new example to the set of examples\nexample_selector.add_example({\"foo\": \"4\"})\nexample_selector.examples\n# -> [{'foo': '1'}, {'foo': '2'}, {'foo': '3'}, {'foo': '4'}]\n# Select examples\nexample_selector.select_examples({\"foo\": \"foo\"})\n# -> array([{'foo': '1'}, {'foo': '4'}], dtype=object)\nprevious\nExample Selectors\nnext\nLengthBased ExampleSelector\n Contents\n \nImplement custom example selector\nUse custom example selector\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html"}
+{"id": "798cf4aac6ff-0", "text": ".ipynb\n.pdf\nPlan and Execute\n Contents \nPlan and Execute\nImports\nTools\nPlanner, Executor, and Agent\nRun Example\nPlan and Execute#\nPlan and execute agents accomplish an objective by first planning what to do, then executing the sub tasks. This idea is largely inspired by BabyAGI and then the \u201cPlan-and-Solve\u201d paper.\nThe planning is almost always done by an LLM.\nThe execution is usually done by a separate agent (equipped with tools).\nImports#\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.experimental.plan_and_execute import PlanAndExecute, load_agent_executor, load_chat_planner\nfrom langchain.llms import OpenAI\nfrom langchain import SerpAPIWrapper\nfrom langchain.agents.tools import Tool\nfrom langchain import LLMMathChain\nTools#\nsearch = SerpAPIWrapper()\nllm = OpenAI(temperature=0)\nllm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)\ntools = [\n Tool(\n name = \"Search\",\n func=search.run,\n description=\"useful for when you need to answer questions about current events\"\n ),\n Tool(\n name=\"Calculator\",\n func=llm_math_chain.run,\n description=\"useful for when you need to answer questions about math\"\n ),\n]\nPlanner, Executor, and Agent#\nmodel = ChatOpenAI(temperature=0)\nplanner = load_chat_planner(model)\nexecutor = load_agent_executor(model, tools, verbose=True)\nagent = PlanAndExecute(planner=planner, executor=executor, verbose=True)\nRun Example#\nagent.run(\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\")", "source": "https://python.langchain.com/en/latest/modules/agents/plan_and_execute.html"}
+{"id": "798cf4aac6ff-1", "text": "> Entering new PlanAndExecute chain...\nsteps=[Step(value=\"Search for Leo DiCaprio's girlfriend on the internet.\"), Step(value='Find her current age.'), Step(value='Raise her current age to the 0.43 power using a calculator or programming language.'), Step(value='Output the result.'), Step(value=\"Given the above steps taken, respond to the user's original question.\\n\\n\")]\n> Entering new AgentExecutor chain...\nAction:\n```\n{\n \"action\": \"Search\",\n \"action_input\": \"Who is Leo DiCaprio's girlfriend?\"\n}\n``` \nObservation: DiCaprio broke up with girlfriend Camila Morrone, 25, in the summer of 2022, after dating for four years. He's since been linked to another famous supermodel \u2013 Gigi Hadid. The power couple were first supposedly an item in September after being spotted getting cozy during a party at New York Fashion Week.\nThought:Based on the previous observation, I can provide the answer to the current objective. \nAction:\n```\n{\n \"action\": \"Final Answer\",\n \"action_input\": \"Leo DiCaprio is currently linked to Gigi Hadid.\"\n}\n```\n> Finished chain.\n*****\nStep: Search for Leo DiCaprio's girlfriend on the internet.\nResponse: Leo DiCaprio is currently linked to Gigi Hadid.\n> Entering new AgentExecutor chain...\nAction:\n```\n{\n \"action\": \"Search\",\n \"action_input\": \"What is Gigi Hadid's current age?\"\n}\n```\nObservation: 28 years\nThought:Previous steps: steps=[(Step(value=\"Search for Leo DiCaprio's girlfriend on the internet.\"), StepResponse(response='Leo DiCaprio is currently linked to Gigi Hadid.'))]", "source": "https://python.langchain.com/en/latest/modules/agents/plan_and_execute.html"}
+{"id": "798cf4aac6ff-2", "text": "Current objective: value='Find her current age.'\nAction:\n```\n{\n \"action\": \"Search\",\n \"action_input\": \"What is Gigi Hadid's current age?\"\n}\n```\nObservation: 28 years\nThought:Previous steps: steps=[(Step(value=\"Search for Leo DiCaprio's girlfriend on the internet.\"), StepResponse(response='Leo DiCaprio is currently linked to Gigi Hadid.')), (Step(value='Find her current age.'), StepResponse(response='28 years'))]\nCurrent objective: None\nAction:\n```\n{\n \"action\": \"Final Answer\",\n \"action_input\": \"Gigi Hadid's current age is 28 years.\"\n}\n```\n> Finished chain.\n*****\nStep: Find her current age.\nResponse: Gigi Hadid's current age is 28 years.\n> Entering new AgentExecutor chain...\nAction:\n```\n{\n \"action\": \"Calculator\",\n \"action_input\": \"28 ** 0.43\"\n}\n```\n> Entering new LLMMathChain chain...\n28 ** 0.43\n```text\n28 ** 0.43\n```\n...numexpr.evaluate(\"28 ** 0.43\")...\nAnswer: 4.1906168361987195\n> Finished chain.\nObservation: Answer: 4.1906168361987195\nThought:The next step is to provide the answer to the user's question.\nAction:\n```\n{\n \"action\": \"Final Answer\",\n \"action_input\": \"Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.\"\n}\n```\n> Finished chain.\n*****\nStep: Raise her current age to the 0.43 power using a calculator or programming language.", "source": "https://python.langchain.com/en/latest/modules/agents/plan_and_execute.html"}
+{"id": "798cf4aac6ff-3", "text": "Response: Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.\n> Entering new AgentExecutor chain...\nAction:\n```\n{\n \"action\": \"Final Answer\",\n \"action_input\": \"The result is approximately 4.19.\"\n}\n```\n> Finished chain.\n*****\nStep: Output the result.\nResponse: The result is approximately 4.19.\n> Entering new AgentExecutor chain...\nAction:\n```\n{\n \"action\": \"Final Answer\",\n \"action_input\": \"Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.\"\n}\n```\n> Finished chain.\n*****\nStep: Given the above steps taken, respond to the user's original question.\nResponse: Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.\n> Finished chain.\n\"Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.\"\nprevious\nHow to add SharedMemory to an Agent and its Tools\nnext\nCallbacks\n Contents\n \nPlan and Execute\nImports\nTools\nPlanner, Executor, and Agent\nRun Example\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/plan_and_execute.html"}
+{"id": "79eaf758522b-0", "text": ".rst\n.pdf\nAgent Executors\nAgent Executors#\nNote\nConceptual Guide\nAgent executors take an agent and tools and use the agent to decide which tools to call and in what order.\nIn this part of the documentation we cover other related functionality to agent executors\nHow to combine agents and vectorstores\nHow to use the async API for Agents\nHow to create ChatGPT Clone\nHandle Parsing Errors\nHow to access intermediate steps\nHow to cap the max number of iterations\nHow to use a timeout for the agent\nHow to add SharedMemory to an Agent and its Tools\nprevious\nVectorstore Agent\nnext\nHow to combine agents and vectorstores\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/agent_executors.html"}
+{"id": "eb05357a424e-0", "text": ".ipynb\n.pdf\nGetting Started\nGetting Started#\nAgents use an LLM to determine which actions to take and in what order.\nAn action can either be using a tool and observing its output, or returning to the user.\nWhen used correctly agents can be extremely powerful. The purpose of this notebook is to show you how to easily use agents through the simplest, highest level API.\nIn order to load agents, you should understand the following concepts:\nTool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. The interface for a tool is currently a function that is expected to have a string as an input, with a string as an output.\nLLM: The language model powering the agent.\nAgent: The agent to use. This should be a string that references a support agent class. Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see the documentation for custom agents.\nAgents: For a list of supported agents and their specifications, see here.\nTools: For a list of predefined tools and their specifications, see here.\nfrom langchain.agents import load_tools\nfrom langchain.agents import initialize_agent\nfrom langchain.agents import AgentType\nfrom langchain.llms import OpenAI\nFirst, let\u2019s load the language model we\u2019re going to use to control the agent.\nllm = OpenAI(temperature=0)\nNext, let\u2019s load some tools to use. Note that the llm-math tool uses an LLM, so we need to pass that in.\ntools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\nFinally, let\u2019s initialize an agent with the tools, the language model, and the type of agent we want to use.", "source": "https://python.langchain.com/en/latest/modules/agents/getting_started.html"}
+{"id": "eb05357a424e-1", "text": "agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\nNow let\u2019s test it out!\nagent.run(\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\")\n> Entering new AgentExecutor chain...\n I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.\nAction: Search\nAction Input: \"Leo DiCaprio girlfriend\"\nObservation: Camila Morrone\nThought: I need to find out Camila Morrone's age\nAction: Search\nAction Input: \"Camila Morrone age\"\nObservation: 25 years\nThought: I need to calculate 25 raised to the 0.43 power\nAction: Calculator\nAction Input: 25^0.43\nObservation: Answer: 3.991298452658078\nThought: I now know the final answer\nFinal Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.\n> Finished chain.\n\"Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.\"\nprevious\nAgents\nnext\nTools\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/getting_started.html"}
+{"id": "a22e0a0e8373-0", "text": ".rst\n.pdf\nToolkits\nToolkits#\nNote\nConceptual Guide\nThis section of documentation covers agents with toolkits - eg an agent applied to a particular use case.\nSee below for a full list of agent toolkits\nAzure Cognitive Services Toolkit\nCSV Agent\nGmail Toolkit\nJira\nJSON Agent\nOpenAPI agents\nNatural Language APIs\nPandas Dataframe Agent\nPlayWright Browser Toolkit\nPowerBI Dataset Agent\nPython Agent\nSpark Dataframe Agent\nSpark SQL Agent\nSQL Database Agent\nVectorstore Agent\nprevious\nStructured Tool Chat Agent\nnext\nAzure Cognitive Services Toolkit\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/toolkits.html"}
+{"id": "a7be8239db86-0", "text": ".rst\n.pdf\nAgents\nAgents#\nNote\nConceptual Guide\nIn this part of the documentation we cover the different types of agents, disregarding which specific tools they are used with.\nFor a high level overview of the different types of agents, see the below documentation.\nAgent Types\nFor documentation on how to create a custom agent, see the below.\nCustom Agent\nCustom LLM Agent\nCustom LLM Agent (with a ChatModel)\nCustom MRKL Agent\nCustom MultiAction Agent\nCustom Agent with Tool Retrieval\nWe also have documentation for an in-depth dive into each agent type.\nConversation Agent (for Chat Models)\nConversation Agent\nMRKL\nMRKL Chat\nReAct\nSelf Ask With Search\nStructured Tool Chat Agent\nprevious\nZapier Natural Language Actions API\nnext\nAgent Types\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/agents.html"}
+{"id": "0af0fd2c3d47-0", "text": ".rst\n.pdf\nTools\nTools#\nNote\nConceptual Guide\nTools are ways that an agent can use to interact with the outside world.\nFor an overview of what a tool is, how to use them, and a full list of examples, please see the getting started documentation\nGetting Started\nNext, we have some examples of customizing and generically working with tools\nDefining Custom Tools\nMulti-Input Tools\nTool Input Schema\nIn this documentation we cover generic tooling functionality (eg how to create your own)\nas well as examples of tools and how to use them.\nApify\nArXiv API Tool\nAWS Lambda API\nShell Tool\nBing Search\nBrave Search\nChatGPT Plugins\nDuckDuckGo Search\nFile System Tools\nGoogle Places\nGoogle Search\nGoogle Serper API\nGradio Tools\nGraphQL tool\nHuggingFace Tools\nHuman as a tool\nIFTTT WebHooks\nMetaphor Search\nCall the API\nUse Metaphor as a tool\nOpenWeatherMap API\nPubMed Tool\nPython REPL\nRequests\nSceneXplain\nSearch Tools\nSearxNG Search API\nSerpAPI\nTwilio\nWikipedia\nWolfram Alpha\nYouTubeSearchTool\nZapier Natural Language Actions API\nExample with SimpleSequentialChain\nprevious\nGetting Started\nnext\nGetting Started\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools.html"}
+{"id": "a65ba9180d73-0", "text": ".md\n.pdf\nGetting Started\n Contents \nList of Tools\nGetting Started#\nTools are functions that agents can use to interact with the world.\nThese tools can be generic utilities (e.g. search), other chains, or even other agents.\nCurrently, tools can be loaded with the following snippet:\nfrom langchain.agents import load_tools\ntool_names = [...]\ntools = load_tools(tool_names)\nSome tools (e.g. chains, agents) may require a base LLM to use to initialize them.\nIn that case, you can pass in an LLM as well:\nfrom langchain.agents import load_tools\ntool_names = [...]\nllm = ...\ntools = load_tools(tool_names, llm=llm)\nBelow is a list of all supported tools and relevant information:\nTool Name: The name the LLM refers to the tool by.\nTool Description: The description of the tool that is passed to the LLM.\nNotes: Notes about the tool that are NOT passed to the LLM.\nRequires LLM: Whether this tool requires an LLM to be initialized.\n(Optional) Extra Parameters: What extra parameters are required to initialize this tool.\nList of Tools#\npython_repl\nTool Name: Python REPL\nTool Description: A Python shell. Use this to execute python commands. Input should be a valid python command. If you expect output it should be printed out.\nNotes: Maintains state.\nRequires LLM: No\nserpapi\nTool Name: Search\nTool Description: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\nNotes: Calls the Serp API and then parses results.\nRequires LLM: No\nwolfram-alpha\nTool Name: Wolfram Alpha", "source": "https://python.langchain.com/en/latest/modules/agents/tools/getting_started.html"}
+{"id": "a65ba9180d73-1", "text": "Requires LLM: No\nwolfram-alpha\nTool Name: Wolfram Alpha\nTool Description: A wolfram alpha search engine. Useful for when you need to answer questions about Math, Science, Technology, Culture, Society and Everyday Life. Input should be a search query.\nNotes: Calls the Wolfram Alpha API and then parses results.\nRequires LLM: No\nExtra Parameters: wolfram_alpha_appid: The Wolfram Alpha app id.\nrequests\nTool Name: Requests\nTool Description: A portal to the internet. Use this when you need to get specific content from a site. Input should be a specific url, and the output will be all the text on that page.\nNotes: Uses the Python requests module.\nRequires LLM: No\nterminal\nTool Name: Terminal\nTool Description: Executes commands in a terminal. Input should be valid commands, and the output will be any output from running that command.\nNotes: Executes commands with subprocess.\nRequires LLM: No\npal-math\nTool Name: PAL-MATH\nTool Description: A language model that is excellent at solving complex word math problems. Input should be a fully worded hard word math problem.\nNotes: Based on this paper.\nRequires LLM: Yes\npal-colored-objects\nTool Name: PAL-COLOR-OBJ\nTool Description: A language model that is wonderful at reasoning about position and the color attributes of objects. Input should be a fully worded hard reasoning problem. Make sure to include all information about the objects AND the final question you want to answer.\nNotes: Based on this paper.\nRequires LLM: Yes\nllm-math\nTool Name: Calculator\nTool Description: Useful for when you need to answer questions about math.\nNotes: An instance of the LLMMath chain.\nRequires LLM: Yes\nopen-meteo-api\nTool Name: Open Meteo API", "source": "https://python.langchain.com/en/latest/modules/agents/tools/getting_started.html"}
+{"id": "a65ba9180d73-2", "text": "Requires LLM: Yes\nopen-meteo-api\nTool Name: Open Meteo API\nTool Description: Useful for when you want to get weather information from the OpenMeteo API. The input should be a question in natural language that this API can answer.\nNotes: A natural language connection to the Open Meteo API (https://api.open-meteo.com/), specifically the /v1/forecast endpoint.\nRequires LLM: Yes\nnews-api\nTool Name: News API\nTool Description: Use this when you want to get information about the top headlines of current news stories. The input should be a question in natural language that this API can answer.\nNotes: A natural language connection to the News API (https://newsapi.org), specifically the /v2/top-headlines endpoint.\nRequires LLM: Yes\nExtra Parameters: news_api_key (your API key to access this endpoint)\ntmdb-api\nTool Name: TMDB API\nTool Description: Useful for when you want to get information from The Movie Database. The input should be a question in natural language that this API can answer.\nNotes: A natural language connection to the TMDB API (https://api.themoviedb.org/3), specifically the /search/movie endpoint.\nRequires LLM: Yes\nExtra Parameters: tmdb_bearer_token (your Bearer Token to access this endpoint - note that this is different from the API key)\ngoogle-search\nTool Name: Search\nTool Description: A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query.\nNotes: Uses the Google Custom Search API\nRequires LLM: No\nExtra Parameters: google_api_key, google_cse_id\nFor more information on this, see this page\nsearx-search\nTool Name: Search", "source": "https://python.langchain.com/en/latest/modules/agents/tools/getting_started.html"}
+{"id": "a65ba9180d73-3", "text": "For more information on this, see this page\nsearx-search\nTool Name: Search\nTool Description: A wrapper around SearxNG meta search engine. Input should be a search query.\nNotes: SearxNG is easy to deploy self-hosted. It is a good privacy friendly alternative to Google Search. Uses the SearxNG API.\nRequires LLM: No\nExtra Parameters: searx_host\ngoogle-serper\nTool Name: Search\nTool Description: A low-cost Google Search API. Useful for when you need to answer questions about current events. Input should be a search query.\nNotes: Calls the serper.dev Google Search API and then parses results.\nRequires LLM: No\nExtra Parameters: serper_api_key\nFor more information on this, see this page\nwikipedia\nTool Name: Wikipedia\nTool Description: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query.\nNotes: Uses the wikipedia Python package to call the MediaWiki API and then parses results.\nRequires LLM: No\nExtra Parameters: top_k_results\npodcast-api\nTool Name: Podcast API\nTool Description: Use the Listen Notes Podcast API to search all podcasts or episodes. The input should be a question in natural language that this API can answer.\nNotes: A natural language connection to the Listen Notes Podcast API (https://www.PodcastAPI.com), specifically the /search/ endpoint.\nRequires LLM: Yes\nExtra Parameters: listen_api_key (your api key to access this endpoint)\nopenweathermap-api\nTool Name: OpenWeatherMap\nTool Description: A wrapper around OpenWeatherMap API. Useful for fetching current weather information for a specified location. Input should be a location string (e.g. London,GB).", "source": "https://python.langchain.com/en/latest/modules/agents/tools/getting_started.html"}
+{"id": "a65ba9180d73-4", "text": "Notes: A connection to the OpenWeatherMap API (https://api.openweathermap.org), specifically the /data/2.5/weather endpoint.\nRequires LLM: No\nExtra Parameters: openweathermap_api_key (your API key to access this endpoint)\nsleep\nTool Name: Sleep\nTool Description: Make agent sleep for some time.\nRequires LLM: No\nprevious\nTools\nnext\nDefining Custom Tools\n Contents\n \nList of Tools\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/getting_started.html"}
+{"id": "ce85bdbcb5d7-0", "text": ".ipynb\n.pdf\nTool Input Schema\nTool Input Schema#\nBy default, tools infer the argument schema by inspecting the function signature. For more strict requirements, custom input schema can be specified, along with custom validation logic.\nfrom typing import Any, Dict\nfrom langchain.agents import AgentType, initialize_agent\nfrom langchain.llms import OpenAI\nfrom langchain.tools.requests.tool import RequestsGetTool, TextRequestsWrapper\nfrom pydantic import BaseModel, Field, root_validator\nllm = OpenAI(temperature=0)\n!pip install tldextract > /dev/null\n[notice] A new release of pip is available: 23.0.1 -> 23.1\n[notice] To update, run: pip install --upgrade pip\nimport tldextract\n_APPROVED_DOMAINS = {\n \"langchain\",\n \"wikipedia\",\n}\nclass ToolInputSchema(BaseModel):\n url: str = Field(...)\n \n @root_validator\n def validate_query(cls, values: Dict[str, Any]) -> Dict:\n url = values[\"url\"]\n domain = tldextract.extract(url).domain\n if domain not in _APPROVED_DOMAINS:\n raise ValueError(f\"Domain {domain} is not on the approved list:\"\n f\" {sorted(_APPROVED_DOMAINS)}\")\n return values\n \ntool = RequestsGetTool(args_schema=ToolInputSchema, requests_wrapper=TextRequestsWrapper())\nagent = initialize_agent([tool], llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False)\n# This will succeed, since there aren't any arguments that will be triggered during validation\nanswer = agent.run(\"What's the main title on langchain.com?\")\nprint(answer)", "source": "https://python.langchain.com/en/latest/modules/agents/tools/tool_input_validation.html"}
+{"id": "ce85bdbcb5d7-1", "text": "print(answer)\nThe main title of langchain.com is \"LANG CHAIN \ud83e\udd9c\ufe0f\ud83d\udd17 Official Home Page\"\nagent.run(\"What's the main title on google.com?\")\n---------------------------------------------------------------------------\nValidationError Traceback (most recent call last)\nCell In[7], line 1\n----> 1 agent.run(\"What's the main title on google.com?\")\nFile ~/code/lc/lckg/langchain/chains/base.py:213, in Chain.run(self, *args, **kwargs)\n 211 if len(args) != 1:\n 212 raise ValueError(\"`run` supports only one positional argument.\")\n--> 213 return self(args[0])[self.output_keys[0]]\n 215 if kwargs and not args:\n 216 return self(kwargs)[self.output_keys[0]]\nFile ~/code/lc/lckg/langchain/chains/base.py:116, in Chain.__call__(self, inputs, return_only_outputs)\n 114 except (KeyboardInterrupt, Exception) as e:\n 115 self.callback_manager.on_chain_error(e, verbose=self.verbose)\n--> 116 raise e\n 117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)\n 118 return self.prep_outputs(inputs, outputs, return_only_outputs)\nFile ~/code/lc/lckg/langchain/chains/base.py:113, in Chain.__call__(self, inputs, return_only_outputs)\n 107 self.callback_manager.on_chain_start(\n 108 {\"name\": self.__class__.__name__},\n 109 inputs,\n 110 verbose=self.verbose,\n 111 )\n 112 try:\n--> 113 outputs = self._call(inputs)\n 114 except (KeyboardInterrupt, Exception) as e:", "source": "https://python.langchain.com/en/latest/modules/agents/tools/tool_input_validation.html"}
+{"id": "ce85bdbcb5d7-2", "text": "114 except (KeyboardInterrupt, Exception) as e:\n 115 self.callback_manager.on_chain_error(e, verbose=self.verbose)\nFile ~/code/lc/lckg/langchain/agents/agent.py:792, in AgentExecutor._call(self, inputs)\n 790 # We now enter the agent loop (until it returns something).\n 791 while self._should_continue(iterations, time_elapsed):\n--> 792 next_step_output = self._take_next_step(\n 793 name_to_tool_map, color_mapping, inputs, intermediate_steps\n 794 )\n 795 if isinstance(next_step_output, AgentFinish):\n 796 return self._return(next_step_output, intermediate_steps)\nFile ~/code/lc/lckg/langchain/agents/agent.py:695, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps)\n 693 tool_run_kwargs[\"llm_prefix\"] = \"\"\n 694 # We then call the tool on the tool input to get an observation\n--> 695 observation = tool.run(\n 696 agent_action.tool_input,\n 697 verbose=self.verbose,\n 698 color=color,\n 699 **tool_run_kwargs,\n 700 )\n 701 else:\n 702 tool_run_kwargs = self.agent.tool_run_logging_kwargs()\nFile ~/code/lc/lckg/langchain/tools/base.py:110, in BaseTool.run(self, tool_input, verbose, start_color, color, **kwargs)\n 101 def run(\n 102 self,\n 103 tool_input: Union[str, Dict],\n (...)\n 107 **kwargs: Any,\n 108 ) -> str:", "source": "https://python.langchain.com/en/latest/modules/agents/tools/tool_input_validation.html"}
+{"id": "ce85bdbcb5d7-3", "text": "107 **kwargs: Any,\n 108 ) -> str:\n 109 \"\"\"Run the tool.\"\"\"\n--> 110 run_input = self._parse_input(tool_input)\n 111 if not self.verbose and verbose is not None:\n 112 verbose_ = verbose\nFile ~/code/lc/lckg/langchain/tools/base.py:71, in BaseTool._parse_input(self, tool_input)\n 69 if issubclass(input_args, BaseModel):\n 70 key_ = next(iter(input_args.__fields__.keys()))\n---> 71 input_args.parse_obj({key_: tool_input})\n 72 # Passing as a positional argument is more straightforward for\n 73 # backwards compatability\n 74 return tool_input\nFile ~/code/lc/lckg/.venv/lib/python3.11/site-packages/pydantic/main.py:526, in pydantic.main.BaseModel.parse_obj()\nFile ~/code/lc/lckg/.venv/lib/python3.11/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()\nValidationError: 1 validation error for ToolInputSchema\n__root__\n Domain google is not on the approved list: ['langchain', 'wikipedia'] (type=value_error)\nprevious\nMulti-Input Tools\nnext\nApify\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/tool_input_validation.html"}
+{"id": "b7b81808f7e4-0", "text": ".ipynb\n.pdf\nDefining Custom Tools\n Contents \nCompletely New Tools - String Input and Output\nTool dataclass\nSubclassing the BaseTool class\nUsing the tool decorator\nCustom Structured Tools\nStructuredTool dataclass\nSubclassing the BaseTool\nUsing the decorator\nModify existing tools\nDefining the priorities among Tools\nUsing tools to return directly\nHandling Tool Errors\nDefining Custom Tools#\nWhen constructing your own agent, you will need to provide it with a list of Tools that it can use. Besides the actual function that is called, the Tool consists of several components:\nname (str), is required and must be unique within a set of tools provided to an agent\ndescription (str), is optional but recommended, as it is used by an agent to determine tool use\nreturn_direct (bool), defaults to False\nargs_schema (Pydantic BaseModel), is optional but recommended, can be used to provide more information (e.g., few-shot examples) or validation for expected parameters.\nThere are two main ways to define a tool, we will cover both in the example below.\n# Import things that are needed generically\nfrom langchain import LLMMathChain, SerpAPIWrapper\nfrom langchain.agents import AgentType, initialize_agent\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.tools import BaseTool, StructuredTool, Tool, tool\nInitialize the LLM to use for the agent.\nllm = ChatOpenAI(temperature=0)\nCompletely New Tools - String Input and Output#\nThe simplest tools accept a single query string and return a string output. If your tool function requires multiple arguments, you might want to skip down to the StructuredTool section below.\nThere are two ways to do this: either by using the Tool dataclass, or by subclassing the BaseTool class.\nTool dataclass#", "source": "https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html"}
+{"id": "b7b81808f7e4-1", "text": "Tool dataclass#\nThe \u2018Tool\u2019 dataclass wraps functions that accept a single string input and returns a string output.\n# Load the tool configs that are needed.\nsearch = SerpAPIWrapper()\nllm_math_chain = LLMMathChain(llm=llm, verbose=True)\ntools = [\n Tool.from_function(\n func=search.run,\n name = \"Search\",\n description=\"useful for when you need to answer questions about current events\"\n # coroutine= ... <- you can specify an async method if desired as well\n ),\n]\n/Users/wfh/code/lc/lckg/langchain/chains/llm_math/base.py:50: UserWarning: Directly instantiating an LLMMathChain with an llm is deprecated. Please instantiate with llm_chain argument or using the from_llm class method.\n warnings.warn(\nYou can also define a custom `args_schema`` to provide more information about inputs.\nfrom pydantic import BaseModel, Field\nclass CalculatorInput(BaseModel):\n question: str = Field()\n \ntools.append(\n Tool.from_function(\n func=llm_math_chain.run,\n name=\"Calculator\",\n description=\"useful for when you need to answer questions about math\",\n args_schema=CalculatorInput\n # coroutine= ... <- you can specify an async method if desired as well\n )\n)\n# Construct the agent. We will use the default agent type here.\n# See documentation for a full list of options.\nagent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\nagent.run(\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\")\n> Entering new AgentExecutor chain...", "source": "https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html"}
+{"id": "b7b81808f7e4-2", "text": "> Entering new AgentExecutor chain...\nI need to find out Leo DiCaprio's girlfriend's name and her age\nAction: Search\nAction Input: \"Leo DiCaprio girlfriend\"\nObservation: After rumours of a romance with Gigi Hadid, the Oscar winner has seemingly moved on. First being linked to the television personality in September 2022, it appears as if his \"age bracket\" has moved up. This follows his rumoured relationship with mere 19-year-old Eden Polani.\nThought:I still need to find out his current girlfriend's name and age\nAction: Search\nAction Input: \"Leo DiCaprio current girlfriend\"\nObservation: Just Jared on Instagram: \u201cLeonardo DiCaprio & girlfriend Camila Morrone couple up for a lunch date!\nThought:Now that I know his girlfriend's name is Camila Morrone, I need to find her current age\nAction: Search\nAction Input: \"Camila Morrone age\"\nObservation: 25 years\nThought:Now that I have her age, I need to calculate her age raised to the 0.43 power\nAction: Calculator\nAction Input: 25^(0.43)\n> Entering new LLMMathChain chain...\n25^(0.43)```text\n25**(0.43)\n```\n...numexpr.evaluate(\"25**(0.43)\")...\nAnswer: 3.991298452658078\n> Finished chain.\nObservation: Answer: 3.991298452658078\nThought:I now know the final answer\nFinal Answer: Camila Morrone's current age raised to the 0.43 power is approximately 3.99.\n> Finished chain.\n\"Camila Morrone's current age raised to the 0.43 power is approximately 3.99.\"\nSubclassing the BaseTool class#", "source": "https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html"}
+{"id": "b7b81808f7e4-3", "text": "Subclassing the BaseTool class#\nYou can also directly subclass BaseTool. This is useful if you want more control over the instance variables or if you want to propagate callbacks to nested chains or other tools.\nfrom typing import Optional, Type\nfrom langchain.callbacks.manager import AsyncCallbackManagerForToolRun, CallbackManagerForToolRun\nclass CustomSearchTool(BaseTool):\n name = \"custom_search\"\n description = \"useful for when you need to answer questions about current events\"\n def _run(self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:\n \"\"\"Use the tool.\"\"\"\n return search.run(query)\n \n async def _arun(self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"custom_search does not support async\")\n \nclass CustomCalculatorTool(BaseTool):\n name = \"Calculator\"\n description = \"useful for when you need to answer questions about math\"\n args_schema: Type[BaseModel] = CalculatorInput\n def _run(self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:\n \"\"\"Use the tool.\"\"\"\n return llm_math_chain.run(query)\n \n async def _arun(self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"Calculator does not support async\")\ntools = [CustomSearchTool(), CustomCalculatorTool()]\nagent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)", "source": "https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html"}
+{"id": "b7b81808f7e4-4", "text": "agent.run(\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\")\n> Entering new AgentExecutor chain...\nI need to use custom_search to find out who Leo DiCaprio's girlfriend is, and then use the Calculator to raise her age to the 0.43 power.\nAction: custom_search\nAction Input: \"Leo DiCaprio girlfriend\"\nObservation: After rumours of a romance with Gigi Hadid, the Oscar winner has seemingly moved on. First being linked to the television personality in September 2022, it appears as if his \"age bracket\" has moved up. This follows his rumoured relationship with mere 19-year-old Eden Polani.\nThought:I need to find out the current age of Eden Polani.\nAction: custom_search\nAction Input: \"Eden Polani age\"\nObservation: 19 years old\nThought:Now I can use the Calculator to raise her age to the 0.43 power.\nAction: Calculator\nAction Input: 19 ^ 0.43\n> Entering new LLMMathChain chain...\n19 ^ 0.43```text\n19 ** 0.43\n```\n...numexpr.evaluate(\"19 ** 0.43\")...\nAnswer: 3.547023357958959\n> Finished chain.\nObservation: Answer: 3.547023357958959\nThought:I now know the final answer.\nFinal Answer: 3.547023357958959\n> Finished chain.\n'3.547023357958959'\nUsing the tool decorator#", "source": "https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html"}
+{"id": "b7b81808f7e4-5", "text": "'3.547023357958959'\nUsing the tool decorator#\nTo make it easier to define custom tools, a @tool decorator is provided. This decorator can be used to quickly create a Tool from a simple function. The decorator uses the function name as the tool name by default, but this can be overridden by passing a string as the first argument. Additionally, the decorator will use the function\u2019s docstring as the tool\u2019s description.\nfrom langchain.tools import tool\n@tool\ndef search_api(query: str) -> str:\n \"\"\"Searches the API for the query.\"\"\"\n return f\"Results for query {query}\"\nsearch_api\nYou can also provide arguments like the tool name and whether to return directly.\n@tool(\"search\", return_direct=True)\ndef search_api(query: str) -> str:\n \"\"\"Searches the API for the query.\"\"\"\n return \"Results\"\nsearch_api\nTool(name='search', description='search(query: str) -> str - Searches the API for the query.', args_schema=, return_direct=True, verbose=False, callback_manager=, func=, coroutine=None)\nYou can also provide args_schema to provide more information about the argument\nclass SearchInput(BaseModel):\n query: str = Field(description=\"should be a search query\")\n \n@tool(\"search\", return_direct=True, args_schema=SearchInput)\ndef search_api(query: str) -> str:\n \"\"\"Searches the API for the query.\"\"\"\n return \"Results\"\nsearch_api", "source": "https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html"}
+{"id": "b7b81808f7e4-6", "text": "\"\"\"Searches the API for the query.\"\"\"\n return \"Results\"\nsearch_api\nTool(name='search', description='search(query: str) -> str - Searches the API for the query.', args_schema=, return_direct=True, verbose=False, callback_manager=, func=, coroutine=None)\nCustom Structured Tools#\nIf your functions require more structured arguments, you can use the StructuredTool class directly, or still subclass the BaseTool class.\nStructuredTool dataclass#\nTo dynamically generate a structured tool from a given function, the fastest way to get started is with StructuredTool.from_function().\nimport requests\nfrom langchain.tools import StructuredTool\ndef post_message(url: str, body: dict, parameters: Optional[dict] = None) -> str:\n \"\"\"Sends a POST request to the given url with the given body and parameters.\"\"\"\n result = requests.post(url, json=body, params=parameters)\n return f\"Status: {result.status_code} - {result.text}\"\ntool = StructuredTool.from_function(post_message)\nSubclassing the BaseTool#\nThe BaseTool automatically infers the schema from the _run method\u2019s signature.\nfrom typing import Optional, Type\nfrom langchain.callbacks.manager import AsyncCallbackManagerForToolRun, CallbackManagerForToolRun\n \nclass CustomSearchTool(BaseTool):\n name = \"custom_search\"\n description = \"useful for when you need to answer questions about current events\"", "source": "https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html"}
+{"id": "b7b81808f7e4-7", "text": "description = \"useful for when you need to answer questions about current events\"\n def _run(self, query: str, engine: str = \"google\", gl: str = \"us\", hl: str = \"en\", run_manager: Optional[CallbackManagerForToolRun] = None) -> str:\n \"\"\"Use the tool.\"\"\"\n search_wrapper = SerpAPIWrapper(params={\"engine\": engine, \"gl\": gl, \"hl\": hl})\n return search_wrapper.run(query)\n \n async def _arun(self, query: str, engine: str = \"google\", gl: str = \"us\", hl: str = \"en\", run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"custom_search does not support async\")\n# You can provide a custom args schema to add descriptions or custom validation\nclass SearchSchema(BaseModel):\n query: str = Field(description=\"should be a search query\")\n engine: str = Field(description=\"should be a search engine\")\n gl: str = Field(description=\"should be a country code\")\n hl: str = Field(description=\"should be a language code\")\nclass CustomSearchTool(BaseTool):\n name = \"custom_search\"\n description = \"useful for when you need to answer questions about current events\"\n args_schema: Type[SearchSchema] = SearchSchema\n def _run(self, query: str, engine: str = \"google\", gl: str = \"us\", hl: str = \"en\", run_manager: Optional[CallbackManagerForToolRun] = None) -> str:\n \"\"\"Use the tool.\"\"\"\n search_wrapper = SerpAPIWrapper(params={\"engine\": engine, \"gl\": gl, \"hl\": hl})\n return search_wrapper.run(query)", "source": "https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html"}
+{"id": "b7b81808f7e4-8", "text": "return search_wrapper.run(query)\n \n async def _arun(self, query: str, engine: str = \"google\", gl: str = \"us\", hl: str = \"en\", run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"custom_search does not support async\")\n \n \nUsing the decorator#\nThe tool decorator creates a structured tool automatically if the signature has multiple arguments.\nimport requests\nfrom langchain.tools import tool\n@tool\ndef post_message(url: str, body: dict, parameters: Optional[dict] = None) -> str:\n \"\"\"Sends a POST request to the given url with the given body and parameters.\"\"\"\n result = requests.post(url, json=body, params=parameters)\n return f\"Status: {result.status_code} - {result.text}\"\nModify existing tools#\nNow, we show how to load existing tools and modify them directly. In the example below, we do something really simple and change the Search tool to have the name Google Search.\nfrom langchain.agents import load_tools\ntools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\ntools[0].name = \"Google Search\"\nagent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\nagent.run(\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\")\n> Entering new AgentExecutor chain...\nI need to find out Leo DiCaprio's girlfriend's name and her age.\nAction: Google Search\nAction Input: \"Leo DiCaprio girlfriend\"", "source": "https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html"}
+{"id": "b7b81808f7e4-9", "text": "Action: Google Search\nAction Input: \"Leo DiCaprio girlfriend\"\nObservation: After rumours of a romance with Gigi Hadid, the Oscar winner has seemingly moved on. First being linked to the television personality in September 2022, it appears as if his \"age bracket\" has moved up. This follows his rumoured relationship with mere 19-year-old Eden Polani.\nThought:I still need to find out his current girlfriend's name and her age.\nAction: Google Search\nAction Input: \"Leo DiCaprio current girlfriend age\"\nObservation: Leonardo DiCaprio has been linked with 19-year-old model Eden Polani, continuing the rumour that he doesn't date any women over the age of ...\nThought:I need to find out the age of Eden Polani.\nAction: Calculator\nAction Input: 19^(0.43)\nObservation: Answer: 3.547023357958959\nThought:I now know the final answer.\nFinal Answer: The age of Leo DiCaprio's girlfriend raised to the 0.43 power is approximately 3.55.\n> Finished chain.\n\"The age of Leo DiCaprio's girlfriend raised to the 0.43 power is approximately 3.55.\"\nDefining the priorities among Tools#\nWhen you made a Custom tool, you may want the Agent to use the custom tool more than normal tools.\nFor example, you made a custom tool, which gets information on music from your database. When a user wants information on songs, You want the Agent to use the custom tool more than the normal Search tool. But the Agent might prioritize a normal Search tool.\nThis can be accomplished by adding a statement such as Use this more than the normal search if the question is about Music, like 'who is the singer of yesterday?' or 'what is the most popular song in 2022?' to the description.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html"}
+{"id": "b7b81808f7e4-10", "text": "An example is below.\n# Import things that are needed generically\nfrom langchain.agents import initialize_agent, Tool\nfrom langchain.agents import AgentType\nfrom langchain.llms import OpenAI\nfrom langchain import LLMMathChain, SerpAPIWrapper\nsearch = SerpAPIWrapper()\ntools = [\n Tool(\n name = \"Search\",\n func=search.run,\n description=\"useful for when you need to answer questions about current events\"\n ),\n Tool(\n name=\"Music Search\",\n func=lambda x: \"'All I Want For Christmas Is You' by Mariah Carey.\", #Mock Function\n description=\"A Music search engine. Use this more than the normal search if the question is about Music, like 'who is the singer of yesterday?' or 'what is the most popular song in 2022?'\",\n )\n]\nagent = initialize_agent(tools, OpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\nagent.run(\"what is the most famous song of christmas\")\n> Entering new AgentExecutor chain...\n I should use a music search engine to find the answer\nAction: Music Search\nAction Input: most famous song of christmas'All I Want For Christmas Is You' by Mariah Carey. I now know the final answer\nFinal Answer: 'All I Want For Christmas Is You' by Mariah Carey.\n> Finished chain.\n\"'All I Want For Christmas Is You' by Mariah Carey.\"\nUsing tools to return directly#\nOften, it can be desirable to have a tool output returned directly to the user, if it\u2019s called. You can do this easily with LangChain by setting the return_direct flag for a tool to be True.\nllm_math_chain = LLMMathChain(llm=llm)\ntools = [", "source": "https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html"}
+{"id": "b7b81808f7e4-11", "text": "llm_math_chain = LLMMathChain(llm=llm)\ntools = [\n Tool(\n name=\"Calculator\",\n func=llm_math_chain.run,\n description=\"useful for when you need to answer questions about math\",\n return_direct=True\n )\n]\nllm = OpenAI(temperature=0)\nagent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\nagent.run(\"whats 2**.12\")\n> Entering new AgentExecutor chain...\n I need to calculate this\nAction: Calculator\nAction Input: 2**.12Answer: 1.086734862526058\n> Finished chain.\n'Answer: 1.086734862526058'\nHandling Tool Errors#\nWhen a tool encounters an error and the exception is not caught, the agent will stop executing. If you want the agent to continue execution, you can raise a ToolException and set handle_tool_error accordingly.\nWhen ToolException is thrown, the agent will not stop working, but will handle the exception according to the handle_tool_error variable of the tool, and the processing result will be returned to the agent as observation, and printed in red.\nYou can set handle_tool_error to True, set it a unified string value, or set it as a function. If it\u2019s set as a function, the function should take a ToolException as a parameter and return a str value.\nPlease note that only raising a ToolException won\u2019t be effective. You need to first set the handle_tool_error of the tool because its default value is False.\nfrom langchain.schema import ToolException\nfrom langchain import SerpAPIWrapper\nfrom langchain.agents import AgentType, initialize_agent\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.tools import Tool", "source": "https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html"}
+{"id": "b7b81808f7e4-12", "text": "from langchain.chat_models import ChatOpenAI\nfrom langchain.tools import Tool\nfrom langchain.chat_models import ChatOpenAI\ndef _handle_error(error:ToolException) -> str:\n return \"The following errors occurred during tool execution:\" + error.args[0]+ \"Please try another tool.\"\ndef search_tool1(s: str):raise ToolException(\"The search tool1 is not available.\")\ndef search_tool2(s: str):raise ToolException(\"The search tool2 is not available.\")\nsearch_tool3 = SerpAPIWrapper()\ndescription=\"useful for when you need to answer questions about current events.You should give priority to using it.\"\ntools = [\n Tool.from_function(\n func=search_tool1,\n name=\"Search_tool1\",\n description=description,\n handle_tool_error=True,\n ),\n Tool.from_function(\n func=search_tool2,\n name=\"Search_tool2\",\n description=description,\n handle_tool_error=_handle_error,\n ),\n Tool.from_function(\n func=search_tool3.run,\n name=\"Search_tool3\",\n description=\"useful for when you need to answer questions about current events\",\n ),\n]\nagent = initialize_agent(\n tools,\n ChatOpenAI(temperature=0),\n agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n verbose=True,\n)\nagent.run(\"Who is Leo DiCaprio's girlfriend?\")\n> Entering new AgentExecutor chain...\nI should use Search_tool1 to find recent news articles about Leo DiCaprio's personal life.\nAction: Search_tool1\nAction Input: \"Leo DiCaprio girlfriend\"\nObservation: The search tool1 is not available.\nThought:I should try using Search_tool2 instead.\nAction: Search_tool2", "source": "https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html"}
+{"id": "b7b81808f7e4-13", "text": "Thought:I should try using Search_tool2 instead.\nAction: Search_tool2\nAction Input: \"Leo DiCaprio girlfriend\"\nObservation: The following errors occurred during tool execution:The search tool2 is not available.Please try another tool.\nThought:I should try using Search_tool3 as a last resort.\nAction: Search_tool3\nAction Input: \"Leo DiCaprio girlfriend\"\nObservation: Leonardo DiCaprio and Gigi Hadid were recently spotted at a pre-Oscars party, sparking interest once again in their rumored romance. The Revenant actor and the model first made headlines when they were spotted together at a New York Fashion Week afterparty in September 2022.\nThought:Based on the information from Search_tool3, it seems that Gigi Hadid is currently rumored to be Leo DiCaprio's girlfriend.\nFinal Answer: Gigi Hadid is currently rumored to be Leo DiCaprio's girlfriend.\n> Finished chain.\n\"Gigi Hadid is currently rumored to be Leo DiCaprio's girlfriend.\"\nprevious\nGetting Started\nnext\nMulti-Input Tools\n Contents\n \nCompletely New Tools - String Input and Output\nTool dataclass\nSubclassing the BaseTool class\nUsing the tool decorator\nCustom Structured Tools\nStructuredTool dataclass\nSubclassing the BaseTool\nUsing the decorator\nModify existing tools\nDefining the priorities among Tools\nUsing tools to return directly\nHandling Tool Errors\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html"}
+{"id": "22070b1be88f-0", "text": ".ipynb\n.pdf\nMulti-Input Tools\n Contents \nMulti-Input Tools with a string format\nMulti-Input Tools#\nThis notebook shows how to use a tool that requires multiple inputs with an agent. The recommended way to do so is with the StructuredTool class.\nimport os\nos.environ[\"LANGCHAIN_TRACING\"] = \"true\"\nfrom langchain import OpenAI\nfrom langchain.agents import initialize_agent, AgentType\nllm = OpenAI(temperature=0)\nfrom langchain.tools import StructuredTool\ndef multiplier(a: float, b: float) -> float:\n \"\"\"Multiply the provided floats.\"\"\"\n return a * b\ntool = StructuredTool.from_function(multiplier)\n# Structured tools are compatible with the STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION agent type. \nagent_executor = initialize_agent([tool], llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\nagent_executor.run(\"What is 3 times 4\")\n> Entering new AgentExecutor chain...\nThought: I need to multiply 3 and 4\nAction:\n```\n{\n \"action\": \"multiplier\",\n \"action_input\": {\"a\": 3, \"b\": 4}\n}\n```\nObservation: 12\nThought: I know what to respond\nAction:\n```\n{\n \"action\": \"Final Answer\",\n \"action_input\": \"3 times 4 is 12\"\n}\n```\n> Finished chain.\n'3 times 4 is 12'\nMulti-Input Tools with a string format#", "source": "https://python.langchain.com/en/latest/modules/agents/tools/multi_input_tool.html"}
+{"id": "22070b1be88f-1", "text": "'3 times 4 is 12'\nMulti-Input Tools with a string format#\nAn alternative to the structured tool would be to use the regular Tool class and accept a single string. The tool would then have to handle the parsing logic to extract the relavent values from the text, which tightly couples the tool representation to the agent prompt. This is still useful if the underlying language model can\u2019t reliabl generate structured schema.\nLet\u2019s take the multiplication function as an example. In order to use this, we will tell the agent to generate the \u201cAction Input\u201d as a comma-separated list of length two. We will then write a thin wrapper that takes a string, splits it into two around a comma, and passes both parsed sides as integers to the multiplication function.\nfrom langchain.llms import OpenAI\nfrom langchain.agents import initialize_agent, Tool\nfrom langchain.agents import AgentType\nHere is the multiplication function, as well as a wrapper to parse a string as input.\ndef multiplier(a, b):\n return a * b\ndef parsing_multiplier(string):\n a, b = string.split(\",\")\n return multiplier(int(a), int(b))\nllm = OpenAI(temperature=0)\ntools = [\n Tool(\n name = \"Multiplier\",\n func=parsing_multiplier,\n description=\"useful for when you need to multiply two numbers together. The input to this tool should be a comma separated list of numbers of length two, representing the two numbers you want to multiply together. For example, `1,2` would be the input if you wanted to multiply 1 by 2.\"\n )\n]\nmrkl = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\nmrkl.run(\"What is 3 times 4\")\n> Entering new AgentExecutor chain...", "source": "https://python.langchain.com/en/latest/modules/agents/tools/multi_input_tool.html"}
+{"id": "22070b1be88f-2", "text": "> Entering new AgentExecutor chain...\n I need to multiply two numbers\nAction: Multiplier\nAction Input: 3,4\nObservation: 12\nThought: I now know the final answer\nFinal Answer: 3 times 4 is 12\n> Finished chain.\n'3 times 4 is 12'\nprevious\nDefining Custom Tools\nnext\nTool Input Schema\n Contents\n \nMulti-Input Tools with a string format\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/multi_input_tool.html"}
+{"id": "6cadad82d9a6-0", "text": ".ipynb\n.pdf\nSearxNG Search API\n Contents \nCustom Parameters\nObtaining results with metadata\nSearxNG Search API#\nThis notebook goes over how to use a self hosted SearxNG search API to search the web.\nYou can check this link for more informations about Searx API parameters.\nimport pprint\nfrom langchain.utilities import SearxSearchWrapper\nsearch = SearxSearchWrapper(searx_host=\"http://127.0.0.1:8888\")\nFor some engines, if a direct answer is available the warpper will print the answer instead of the full list of search results. You can use the results method of the wrapper if you want to obtain all the results.\nsearch.run(\"What is the capital of France\")\n'Paris is the capital of France, the largest country of Europe with 550 000 km2 (65 millions inhabitants). Paris has 2.234 million inhabitants end 2011. She is the core of Ile de France region (12 million people).'\nCustom Parameters#\nSearxNG supports up to 139 search engines. You can also customize the Searx wrapper with arbitrary named parameters that will be passed to the Searx search API . In the below example we will making a more interesting use of custom search parameters from searx search api.\nIn this example we will be using the engines parameters to query wikipedia\nsearch = SearxSearchWrapper(searx_host=\"http://127.0.0.1:8888\", k=5) # k is for max number of items\nsearch.run(\"large language model \", engines=['wiki'])", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html"}
+{"id": "6cadad82d9a6-1", "text": "search.run(\"large language model \", engines=['wiki'])\n'Large language models (LLMs) represent a major advancement in AI, with the promise of transforming domains through learned knowledge. LLM sizes have been increasing 10X every year for the last few years, and as these models grow in complexity and size, so do their capabilities.\\n\\nGPT-3 can translate language, write essays, generate computer code, and more \u2014 all with limited to no supervision. In July 2020, OpenAI unveiled GPT-3, a language model that was easily the largest known at the time. Put simply, GPT-3 is trained to predict the next word in a sentence, much like how a text message autocomplete feature works.\\n\\nA large language model, or LLM, is a deep learning algorithm that can recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive datasets. Large language models are among the most successful applications of transformer models.\\n\\nAll of today\u2019s well-known language models\u2014e.g., GPT-3 from OpenAI, PaLM or LaMDA from Google, Galactica or OPT from Meta, Megatron-Turing from Nvidia/Microsoft, Jurassic-1 from AI21 Labs\u2014are...\\n\\nLarge language models (LLMs) such as GPT-3are increasingly being used to generate text. These tools should be used with care, since they can generate content that is biased, non-verifiable, constitutes original research, or violates copyrights.'\nPassing other Searx parameters for searx like language\nsearch = SearxSearchWrapper(searx_host=\"http://127.0.0.1:8888\", k=1)\nsearch.run(\"deep learning\", language='es', engines=['wiki'])", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html"}
+{"id": "6cadad82d9a6-2", "text": "search.run(\"deep learning\", language='es', engines=['wiki'])\n'Aprendizaje profundo (en ingl\u00e9s, deep learning) es un conjunto de algoritmos de aprendizaje autom\u00e1tico (en ingl\u00e9s, machine learning) que intenta modelar abstracciones de alto nivel en datos usando arquitecturas computacionales que admiten transformaciones no lineales m\u00faltiples e iterativas de datos expresados en forma matricial o tensorial. 1'\nObtaining results with metadata#\nIn this example we will be looking for scientific paper using the categories parameter and limiting the results to a time_range (not all engines support the time range option).\nWe also would like to obtain the results in a structured way including metadata. For this we will be using the results method of the wrapper.\nsearch = SearxSearchWrapper(searx_host=\"http://127.0.0.1:8888\")\nresults = search.results(\"Large Language Model prompt\", num_results=5, categories='science', time_range='year')\npprint.pp(results)\n[{'snippet': '\u2026 on natural language instructions, large language models (\u2026 the '\n 'prompt used to steer the model, and most effective prompts \u2026 to '\n 'prompt engineering, we propose Automatic Prompt \u2026',\n 'title': 'Large language models are human-level prompt engineers',\n 'link': 'https://arxiv.org/abs/2211.01910',\n 'engines': ['google scholar'],\n 'category': 'science'},\n {'snippet': '\u2026 Large language models (LLMs) have introduced new possibilities '\n 'for prototyping with AI [18]. Pre-trained on a large amount of '\n 'text data, models \u2026 language instructions called prompts. \u2026',\n 'title': 'Promptchainer: Chaining large language model prompts through '", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html"}
+{"id": "6cadad82d9a6-3", "text": "'title': 'Promptchainer: Chaining large language model prompts through '\n 'visual programming',\n 'link': 'https://dl.acm.org/doi/abs/10.1145/3491101.3519729',\n 'engines': ['google scholar'],\n 'category': 'science'},\n {'snippet': '\u2026 can introspect the large prompt model. We derive the view '\n '\u03d50(X) and the model h0 from T01. However, instead of fully '\n 'fine-tuning T0 during co-training, we focus on soft prompt '\n 'tuning, \u2026',\n 'title': 'Co-training improves prompt-based learning for large language '\n 'models',\n 'link': 'https://proceedings.mlr.press/v162/lang22a.html',\n 'engines': ['google scholar'],\n 'category': 'science'},\n {'snippet': '\u2026 With the success of large language models (LLMs) of code and '\n 'their use as \u2026 prompt design process become important. In this '\n 'work, we propose a framework called Repo-Level Prompt \u2026',\n 'title': 'Repository-level prompt generation for large language models of '\n 'code',\n 'link': 'https://arxiv.org/abs/2206.12839',\n 'engines': ['google scholar'],\n 'category': 'science'},\n {'snippet': '\u2026 Figure 2 | The benefits of different components of a prompt '\n 'for the largest language model (Gopher), as estimated from '\n 'hierarchical logistic regression. Each point estimates the '\n 'unique \u2026',\n 'title': 'Can language models learn from explanations in context?',\n 'link': 'https://arxiv.org/abs/2204.02329',", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html"}
+{"id": "6cadad82d9a6-4", "text": "'link': 'https://arxiv.org/abs/2204.02329',\n 'engines': ['google scholar'],\n 'category': 'science'}]\nGet papers from arxiv\nresults = search.results(\"Large Language Model prompt\", num_results=5, engines=['arxiv'])\npprint.pp(results)\n[{'snippet': 'Thanks to the advanced improvement of large pre-trained language '\n 'models, prompt-based fine-tuning is shown to be effective on a '\n 'variety of downstream tasks. Though many prompting methods have '\n 'been investigated, it remains unknown which type of prompts are '\n 'the most effective among three types of prompts (i.e., '\n 'human-designed prompts, schema prompts and null prompts). In '\n 'this work, we empirically compare the three types of prompts '\n 'under both few-shot and fully-supervised settings. Our '\n 'experimental results show that schema prompts are the most '\n 'effective in general. Besides, the performance gaps tend to '\n 'diminish when the scale of training data grows large.',\n 'title': 'Do Prompts Solve NLP Tasks Using Natural Language?',\n 'link': 'http://arxiv.org/abs/2203.00902v1',\n 'engines': ['arxiv'],\n 'category': 'science'},\n {'snippet': 'Cross-prompt automated essay scoring (AES) requires the system '\n 'to use non target-prompt essays to award scores to a '\n 'target-prompt essay. Since obtaining a large quantity of '\n 'pre-graded essays to a particular prompt is often difficult and '\n 'unrealistic, the task of cross-prompt AES is vital for the '\n 'development of real-world AES systems, yet it remains an '", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html"}
+{"id": "6cadad82d9a6-5", "text": "'development of real-world AES systems, yet it remains an '\n 'under-explored area of research. Models designed for '\n 'prompt-specific AES rely heavily on prompt-specific knowledge '\n 'and perform poorly in the cross-prompt setting, whereas current '\n 'approaches to cross-prompt AES either require a certain quantity '\n 'of labelled target-prompt essays or require a large quantity of '\n 'unlabelled target-prompt essays to perform transfer learning in '\n 'a multi-step manner. To address these issues, we introduce '\n 'Prompt Agnostic Essay Scorer (PAES) for cross-prompt AES. Our '\n 'method requires no access to labelled or unlabelled '\n 'target-prompt data during training and is a single-stage '\n 'approach. PAES is easy to apply in practice and achieves '\n 'state-of-the-art performance on the Automated Student Assessment '\n 'Prize (ASAP) dataset.',\n 'title': 'Prompt Agnostic Essay Scorer: A Domain Generalization Approach to '\n 'Cross-prompt Automated Essay Scoring',\n 'link': 'http://arxiv.org/abs/2008.01441v1',\n 'engines': ['arxiv'],\n 'category': 'science'},\n {'snippet': 'Research on prompting has shown excellent performance with '\n 'little or even no supervised training across many tasks. '\n 'However, prompting for machine translation is still '\n 'under-explored in the literature. We fill this gap by offering a '\n 'systematic study on prompting strategies for translation, '\n 'examining various factors for prompt template and demonstration '\n 'example selection. We further explore the use of monolingual '", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html"}
+{"id": "6cadad82d9a6-6", "text": "'example selection. We further explore the use of monolingual '\n 'data and the feasibility of cross-lingual, cross-domain, and '\n 'sentence-to-document transfer learning in prompting. Extensive '\n 'experiments with GLM-130B (Zeng et al., 2022) as the testbed '\n 'show that 1) the number and the quality of prompt examples '\n 'matter, where using suboptimal examples degenerates translation; '\n '2) several features of prompt examples, such as semantic '\n 'similarity, show significant Spearman correlation with their '\n 'prompting performance; yet, none of the correlations are strong '\n 'enough; 3) using pseudo parallel prompt examples constructed '\n 'from monolingual data via zero-shot prompting could improve '\n 'translation; and 4) improved performance is achievable by '\n 'transferring knowledge from prompt examples selected in other '\n 'settings. We finally provide an analysis on the model outputs '\n 'and discuss several problems that prompting still suffers from.',\n 'title': 'Prompting Large Language Model for Machine Translation: A Case '\n 'Study',\n 'link': 'http://arxiv.org/abs/2301.07069v2',\n 'engines': ['arxiv'],\n 'category': 'science'},\n {'snippet': 'Large language models can perform new tasks in a zero-shot '\n 'fashion, given natural language prompts that specify the desired '\n 'behavior. Such prompts are typically hand engineered, but can '\n 'also be learned with gradient-based methods from labeled data. '\n 'However, it is underexplored what factors make the prompts '\n 'effective, especially when the prompts are natural language. In '", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html"}
+{"id": "6cadad82d9a6-7", "text": "'effective, especially when the prompts are natural language. In '\n 'this paper, we investigate common attributes shared by effective '\n 'prompts. We first propose a human readable prompt tuning method '\n '(F LUENT P ROMPT) based on Langevin dynamics that incorporates a '\n 'fluency constraint to find a diverse distribution of effective '\n 'and fluent prompts. Our analysis reveals that effective prompts '\n 'are topically related to the task domain and calibrate the prior '\n 'probability of label words. Based on these findings, we also '\n 'propose a method for generating prompts using only unlabeled '\n 'data, outperforming strong baselines by an average of 7.0% '\n 'accuracy across three tasks.',\n 'title': \"Toward Human Readable Prompt Tuning: Kubrick's The Shining is a \"\n 'good movie, and a good prompt too?',\n 'link': 'http://arxiv.org/abs/2212.10539v1',\n 'engines': ['arxiv'],\n 'category': 'science'},\n {'snippet': 'Prevailing methods for mapping large generative language models '\n \"to supervised tasks may fail to sufficiently probe models' novel \"\n 'capabilities. Using GPT-3 as a case study, we show that 0-shot '\n 'prompts can significantly outperform few-shot prompts. We '\n 'suggest that the function of few-shot examples in these cases is '\n 'better described as locating an already learned task rather than '\n 'meta-learning. This analysis motivates rethinking the role of '\n 'prompts in controlling and evaluating powerful language models. '\n 'In this work, we discuss methods of prompt programming, '", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html"}
+{"id": "6cadad82d9a6-8", "text": "'In this work, we discuss methods of prompt programming, '\n 'emphasizing the usefulness of considering prompts through the '\n 'lens of natural language. We explore techniques for exploiting '\n 'the capacity of narratives and cultural anchors to encode '\n 'nuanced intentions and techniques for encouraging deconstruction '\n 'of a problem into components before producing a verdict. '\n 'Informed by this more encompassing theory of prompt programming, '\n 'we also introduce the idea of a metaprompt that seeds the model '\n 'to generate its own natural language prompts for a range of '\n 'tasks. Finally, we discuss how these more general methods of '\n 'interacting with language models can be incorporated into '\n 'existing and future benchmarks and practical applications.',\n 'title': 'Prompt Programming for Large Language Models: Beyond the Few-Shot '\n 'Paradigm',\n 'link': 'http://arxiv.org/abs/2102.07350v1',\n 'engines': ['arxiv'],\n 'category': 'science'}]\nIn this example we query for large language models under the it category. We then filter the results that come from github.\nresults = search.results(\"large language model\", num_results = 20, categories='it')\npprint.pp(list(filter(lambda r: r['engines'][0] == 'github', results)))\n[{'snippet': 'Guide to using pre-trained large language models of source code',\n 'title': 'Code-LMs',\n 'link': 'https://github.com/VHellendoorn/Code-LMs',\n 'engines': ['github'],\n 'category': 'it'},\n {'snippet': 'Dramatron uses large language models to generate coherent '\n 'scripts and screenplays.',", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html"}
+{"id": "6cadad82d9a6-9", "text": "'scripts and screenplays.',\n 'title': 'dramatron',\n 'link': 'https://github.com/deepmind/dramatron',\n 'engines': ['github'],\n 'category': 'it'}]\nWe could also directly query for results from github and other source forges.\nresults = search.results(\"large language model\", num_results = 20, engines=['github', 'gitlab'])\npprint.pp(results)\n[{'snippet': \"Implementation of 'A Watermark for Large Language Models' paper \"\n 'by Kirchenbauer & Geiping et. al.',\n 'title': 'Peutlefaire / LMWatermark',\n 'link': 'https://gitlab.com/BrianPulfer/LMWatermark',\n 'engines': ['gitlab'],\n 'category': 'it'},\n {'snippet': 'Guide to using pre-trained large language models of source code',\n 'title': 'Code-LMs',\n 'link': 'https://github.com/VHellendoorn/Code-LMs',\n 'engines': ['github'],\n 'category': 'it'},\n {'snippet': '',\n 'title': 'Simen Burud / Large-scale Language Models for Conversational '\n 'Speech Recognition',\n 'link': 'https://gitlab.com/BrianPulfer',\n 'engines': ['gitlab'],\n 'category': 'it'},\n {'snippet': 'Dramatron uses large language models to generate coherent '\n 'scripts and screenplays.',\n 'title': 'dramatron',\n 'link': 'https://github.com/deepmind/dramatron',\n 'engines': ['github'],\n 'category': 'it'},", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html"}
+{"id": "6cadad82d9a6-10", "text": "'engines': ['github'],\n 'category': 'it'},\n {'snippet': 'Code for loralib, an implementation of \"LoRA: Low-Rank '\n 'Adaptation of Large Language Models\"',\n 'title': 'LoRA',\n 'link': 'https://github.com/microsoft/LoRA',\n 'engines': ['github'],\n 'category': 'it'},\n {'snippet': 'Code for the paper \"Evaluating Large Language Models Trained on '\n 'Code\"',\n 'title': 'human-eval',\n 'link': 'https://github.com/openai/human-eval',\n 'engines': ['github'],\n 'category': 'it'},\n {'snippet': 'A trend starts from \"Chain of Thought Prompting Elicits '\n 'Reasoning in Large Language Models\".',\n 'title': 'Chain-of-ThoughtsPapers',\n 'link': 'https://github.com/Timothyxxx/Chain-of-ThoughtsPapers',\n 'engines': ['github'],\n 'category': 'it'},\n {'snippet': 'Mistral: A strong, northwesterly wind: Framework for transparent '\n 'and accessible large-scale language model training, built with '\n 'Hugging Face \ud83e\udd17 Transformers.',\n 'title': 'mistral',\n 'link': 'https://github.com/stanford-crfm/mistral',\n 'engines': ['github'],\n 'category': 'it'},\n {'snippet': 'A prize for finding tasks that cause large language models to '\n 'show inverse scaling',\n 'title': 'prize',\n 'link': 'https://github.com/inverse-scaling/prize',\n 'engines': ['github'],", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html"}
+{"id": "6cadad82d9a6-11", "text": "'engines': ['github'],\n 'category': 'it'},\n {'snippet': 'Optimus: the first large-scale pre-trained VAE language model',\n 'title': 'Optimus',\n 'link': 'https://github.com/ChunyuanLI/Optimus',\n 'engines': ['github'],\n 'category': 'it'},\n {'snippet': 'Seminar on Large Language Models (COMP790-101 at UNC Chapel '\n 'Hill, Fall 2022)',\n 'title': 'llm-seminar',\n 'link': 'https://github.com/craffel/llm-seminar',\n 'engines': ['github'],\n 'category': 'it'},\n {'snippet': 'A central, open resource for data and tools related to '\n 'chain-of-thought reasoning in large language models. Developed @ '\n 'Samwald research group: https://samwald.info/',\n 'title': 'ThoughtSource',\n 'link': 'https://github.com/OpenBioLink/ThoughtSource',\n 'engines': ['github'],\n 'category': 'it'},\n {'snippet': 'A comprehensive list of papers using large language/multi-modal '\n 'models for Robotics/RL, including papers, codes, and related '\n 'websites',\n 'title': 'Awesome-LLM-Robotics',\n 'link': 'https://github.com/GT-RIPL/Awesome-LLM-Robotics',\n 'engines': ['github'],\n 'category': 'it'},\n {'snippet': 'Tools for curating biomedical training data for large-scale '\n 'language modeling',\n 'title': 'biomedical',\n 'link': 'https://github.com/bigscience-workshop/biomedical',", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html"}
+{"id": "6cadad82d9a6-12", "text": "'link': 'https://github.com/bigscience-workshop/biomedical',\n 'engines': ['github'],\n 'category': 'it'},\n {'snippet': 'ChatGPT @ Home: Large Language Model (LLM) chatbot application, '\n 'written by ChatGPT',\n 'title': 'ChatGPT-at-Home',\n 'link': 'https://github.com/Sentdex/ChatGPT-at-Home',\n 'engines': ['github'],\n 'category': 'it'},\n {'snippet': 'Design and Deploy Large Language Model Apps',\n 'title': 'dust',\n 'link': 'https://github.com/dust-tt/dust',\n 'engines': ['github'],\n 'category': 'it'},\n {'snippet': 'Polyglot: Large Language Models of Well-balanced Competence in '\n 'Multi-languages',\n 'title': 'polyglot',\n 'link': 'https://github.com/EleutherAI/polyglot',\n 'engines': ['github'],\n 'category': 'it'},\n {'snippet': 'Code release for \"Learning Video Representations from Large '\n 'Language Models\"',\n 'title': 'LaViLa',\n 'link': 'https://github.com/facebookresearch/LaViLa',\n 'engines': ['github'],\n 'category': 'it'},\n {'snippet': 'SmoothQuant: Accurate and Efficient Post-Training Quantization '\n 'for Large Language Models',\n 'title': 'smoothquant',\n 'link': 'https://github.com/mit-han-lab/smoothquant',\n 'engines': ['github'],\n 'category': 'it'},", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html"}
+{"id": "6cadad82d9a6-13", "text": "'engines': ['github'],\n 'category': 'it'},\n {'snippet': 'This repository contains the code, data, and models of the paper '\n 'titled \"XL-Sum: Large-Scale Multilingual Abstractive '\n 'Summarization for 44 Languages\" published in Findings of the '\n 'Association for Computational Linguistics: ACL-IJCNLP 2021.',\n 'title': 'xl-sum',\n 'link': 'https://github.com/csebuetnlp/xl-sum',\n 'engines': ['github'],\n 'category': 'it'}]\nprevious\nSearch Tools\nnext\nSerpAPI\n Contents\n \nCustom Parameters\nObtaining results with metadata\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html"}
+{"id": "00a2039b4fd1-0", "text": ".ipynb\n.pdf\nWolfram Alpha\nWolfram Alpha#\nThis notebook goes over how to use the wolfram alpha component.\nFirst, you need to set up your Wolfram Alpha developer account and get your APP ID:\nGo to wolfram alpha and sign up for a developer account here\nCreate an app and get your APP ID\npip install wolframalpha\nThen we will need to set some environment variables:\nSave your APP ID into WOLFRAM_ALPHA_APPID env variable\npip install wolframalpha\nimport os\nos.environ[\"WOLFRAM_ALPHA_APPID\"] = \"\"\nfrom langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper\nwolfram = WolframAlphaAPIWrapper()\nwolfram.run(\"What is 2x+5 = -3x + 7?\")\n'x = 2/5'\nprevious\nWikipedia\nnext\nYouTubeSearchTool\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/wolfram_alpha.html"}
+{"id": "3d6ef38b599e-0", "text": ".ipynb\n.pdf\nSearch Tools\n Contents \nGoogle Serper API Wrapper\nSerpAPI\nGoogleSearchAPIWrapper\nSearxNG Meta Search Engine\nSearch Tools#\nThis notebook shows off usage of various search tools.\nfrom langchain.agents import load_tools\nfrom langchain.agents import initialize_agent\nfrom langchain.agents import AgentType\nfrom langchain.llms import OpenAI\nllm = OpenAI(temperature=0)\nGoogle Serper API Wrapper#\nFirst, let\u2019s try to use the Google Serper API tool.\ntools = load_tools([\"google-serper\"], llm=llm)\nagent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\nagent.run(\"What is the weather in Pomfret?\")\n> Entering new AgentExecutor chain...\n I should look up the current weather conditions.\nAction: Search\nAction Input: \"weather in Pomfret\"\nObservation: 37\u00b0F\nThought: I now know the current temperature in Pomfret.\nFinal Answer: The current temperature in Pomfret is 37\u00b0F.\n> Finished chain.\n'The current temperature in Pomfret is 37\u00b0F.'\nSerpAPI#\nNow, let\u2019s use the SerpAPI tool.\ntools = load_tools([\"serpapi\"], llm=llm)\nagent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\nagent.run(\"What is the weather in Pomfret?\")\n> Entering new AgentExecutor chain...\n I need to find out what the current weather is in Pomfret.\nAction: Search\nAction Input: \"weather in Pomfret\"", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/search_tools.html"}
+{"id": "3d6ef38b599e-1", "text": "Action: Search\nAction Input: \"weather in Pomfret\"\nObservation: Partly cloudy skies during the morning hours will give way to cloudy skies with light rain and snow developing in the afternoon. High 42F. Winds WNW at 10 to 15 ...\nThought: I now know the current weather in Pomfret.\nFinal Answer: Partly cloudy skies during the morning hours will give way to cloudy skies with light rain and snow developing in the afternoon. High 42F. Winds WNW at 10 to 15 mph.\n> Finished chain.\n'Partly cloudy skies during the morning hours will give way to cloudy skies with light rain and snow developing in the afternoon. High 42F. Winds WNW at 10 to 15 mph.'\nGoogleSearchAPIWrapper#\nNow, let\u2019s use the official Google Search API Wrapper.\ntools = load_tools([\"google-search\"], llm=llm)\nagent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\nagent.run(\"What is the weather in Pomfret?\")\n> Entering new AgentExecutor chain...\n I should look up the current weather conditions.\nAction: Google Search\nAction Input: \"weather in Pomfret\"", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/search_tools.html"}
+{"id": "3d6ef38b599e-2", "text": "Action: Google Search\nAction Input: \"weather in Pomfret\"\nObservation: Showers early becoming a steady light rain later in the day. Near record high temperatures. High around 60F. Winds SW at 10 to 15 mph. Chance of rain 60%. Pomfret, CT Weather Forecast, with current conditions, wind, air quality, and what to expect for the next 3 days. Hourly Weather-Pomfret, CT. As of 12:52 am EST. Special Weather Statement +2\u00a0... Hazardous Weather Conditions. Special Weather Statement ... Pomfret CT. Tonight ... National Digital Forecast Database Maximum Temperature Forecast. Pomfret Center Weather Forecasts. Weather Underground provides local & long-range weather forecasts, weatherreports, maps & tropical weather conditions for\u00a0... Pomfret, CT 12 hour by hour weather forecast includes precipitation, temperatures, sky conditions, rain chance, dew-point, relative humidity, wind direction\u00a0... North Pomfret Weather Forecasts. Weather Underground provides local & long-range weather forecasts, weatherreports, maps & tropical weather conditions for\u00a0... Today's Weather - Pomfret, CT. Dec 31, 2022 4:00 PM. Putnam MS. --. Weather forecast icon. Feels like --. Hi --. Lo --. Pomfret, CT temperature trend for the next 14 Days. Find daytime highs and nighttime lows from TheWeatherNetwork.com. Pomfret, MD Weather Forecast Date: 332 PM EST Wed Dec 28 2022. The area/counties/county of: Charles, including the cites of: St. Charles and Waldorf.\nThought: I now know the current weather conditions in Pomfret.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/search_tools.html"}
+{"id": "3d6ef38b599e-3", "text": "Thought: I now know the current weather conditions in Pomfret.\nFinal Answer: Showers early becoming a steady light rain later in the day. Near record high temperatures. High around 60F. Winds SW at 10 to 15 mph. Chance of rain 60%.\n> Finished AgentExecutor chain.\n'Showers early becoming a steady light rain later in the day. Near record high temperatures. High around 60F. Winds SW at 10 to 15 mph. Chance of rain 60%.'\nSearxNG Meta Search Engine#\nHere we will be using a self hosted SearxNG meta search engine.\ntools = load_tools([\"searx-search\"], searx_host=\"http://localhost:8888\", llm=llm)\nagent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\nagent.run(\"What is the weather in Pomfret\")\n> Entering new AgentExecutor chain...\n I should look up the current weather\nAction: SearX Search\nAction Input: \"weather in Pomfret\"\nObservation: Mainly cloudy with snow showers around in the morning. High around 40F. Winds NNW at 5 to 10 mph. Chance of snow 40%. Snow accumulations less than one inch.\n10 Day Weather - Pomfret, MD As of 1:37 pm EST Today 49\u00b0/ 41\u00b0 52% Mon 27 | Day 49\u00b0 52% SE 14 mph Cloudy with occasional rain showers. High 49F. Winds SE at 10 to 20 mph. Chance of rain 50%....", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/search_tools.html"}
+{"id": "3d6ef38b599e-4", "text": "10 Day Weather - Pomfret, VT As of 3:51 am EST Special Weather Statement Today 39\u00b0/ 32\u00b0 37% Wed 01 | Day 39\u00b0 37% NE 4 mph Cloudy with snow showers developing for the afternoon. High 39F....\nPomfret, CT ; Current Weather. 1:06 AM. 35\u00b0F \u00b7 RealFeel\u00ae 32\u00b0 ; TODAY'S WEATHER FORECAST. 3/3. 44\u00b0Hi. RealFeel\u00ae 50\u00b0 ; TONIGHT'S WEATHER FORECAST. 3/3. 32\u00b0Lo.\nPomfret, MD Forecast Today Hourly Daily Morning 41\u00b0 1% Afternoon 43\u00b0 0% Evening 35\u00b0 3% Overnight 34\u00b0 2% Don't Miss Finally, Here\u2019s Why We Get More Colds and Flu When It\u2019s Cold Coast-To-Coast...\nPomfret, MD Weather Forecast | AccuWeather Current Weather 5:35 PM 35\u00b0 F RealFeel\u00ae 36\u00b0 RealFeel Shade\u2122 36\u00b0 Air Quality Excellent Wind E 3 mph Wind Gusts 5 mph Cloudy More Details WinterCast...\nPomfret, VT Weather Forecast | AccuWeather Current Weather 11:21 AM 23\u00b0 F RealFeel\u00ae 27\u00b0 RealFeel Shade\u2122 25\u00b0 Air Quality Fair Wind ESE 3 mph Wind Gusts 7 mph Cloudy More Details WinterCast...\nPomfret Center, CT Weather Forecast | AccuWeather Daily Current Weather 6:50 PM 39\u00b0 F RealFeel\u00ae 36\u00b0 Air Quality Fair Wind NW 6 mph Wind Gusts 16 mph Mostly clear More Details WinterCast...", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/search_tools.html"}
+{"id": "3d6ef38b599e-5", "text": "12:00 pm \u00b7 Feels Like36\u00b0 \u00b7 WindN 5 mph \u00b7 Humidity43% \u00b7 UV Index3 of 10 \u00b7 Cloud Cover65% \u00b7 Rain Amount0 in ...\nPomfret Center, CT Weather Conditions | Weather Underground star Popular Cities San Francisco, CA 49 \u00b0F Clear Manhattan, NY 37 \u00b0F Fair Schiller Park, IL (60176) warning39 \u00b0F Mostly Cloudy...\nThought: I now know the final answer\nFinal Answer: The current weather in Pomfret is mainly cloudy with snow showers around in the morning. The temperature is around 40F with winds NNW at 5 to 10 mph. Chance of snow is 40%.\n> Finished chain.\n'The current weather in Pomfret is mainly cloudy with snow showers around in the morning. The temperature is around 40F with winds NNW at 5 to 10 mph. Chance of snow is 40%.'\nprevious\nSceneXplain\nnext\nSearxNG Search API\n Contents\n \nGoogle Serper API Wrapper\nSerpAPI\nGoogleSearchAPIWrapper\nSearxNG Meta Search Engine\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/search_tools.html"}
+{"id": "d04c744db019-0", "text": ".ipynb\n.pdf\nHuggingFace Tools\nHuggingFace Tools#\nHuggingface Tools supporting text I/O can be\nloaded directly using the load_huggingface_tool function.\n# Requires transformers>=4.29.0 and huggingface_hub>=0.14.1\n!pip install --upgrade transformers huggingface_hub > /dev/null\nfrom langchain.agents import load_huggingface_tool\ntool = load_huggingface_tool(\"lysandre/hf-model-downloads\")\nprint(f\"{tool.name}: {tool.description}\")\nmodel_download_counter: This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It takes the name of the category (such as text-classification, depth-estimation, etc), and returns the name of the checkpoint\ntool.run(\"text-classification\")\n'facebook/bart-large-mnli'\nprevious\nGraphQL tool\nnext\nHuman as a tool\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/huggingface_tools.html"}
+{"id": "7df0ffe1e0a8-0", "text": ".ipynb\n.pdf\nApify\nApify#\nThis notebook shows how to use the Apify integration for LangChain.\nApify is a cloud platform for web scraping and data extraction,\nwhich provides an ecosystem of more than a thousand\nready-made apps called Actors for various web scraping, crawling, and data extraction use cases.\nFor example, you can use it to extract Google Search results, Instagram and Facebook profiles, products from Amazon or Shopify, Google Maps reviews, etc. etc.\nIn this example, we\u2019ll use the Website Content Crawler Actor,\nwhich can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs,\nand extract text content from the web pages. Then we feed the documents into a vector index and answer questions from it.\n#!pip install apify-client\nFirst, import ApifyWrapper into your source code:\nfrom langchain.document_loaders.base import Document\nfrom langchain.indexes import VectorstoreIndexCreator\nfrom langchain.utilities import ApifyWrapper\nInitialize it using your Apify API token and for the purpose of this example, also with your OpenAI API key:\nimport os\nos.environ[\"OPENAI_API_KEY\"] = \"Your OpenAI API key\"\nos.environ[\"APIFY_API_TOKEN\"] = \"Your Apify API token\"\napify = ApifyWrapper()\nThen run the Actor, wait for it to finish, and fetch its results from the Apify dataset into a LangChain document loader.\nNote that if you already have some results in an Apify dataset, you can load them directly using ApifyDatasetLoader, as shown in this notebook. In that notebook, you\u2019ll also find the explanation of the dataset_mapping_function, which is used to map fields from the Apify dataset records to LangChain Document fields.\nloader = apify.call_actor(\n actor_id=\"apify/website-content-crawler\",", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/apify.html"}
+{"id": "7df0ffe1e0a8-1", "text": "actor_id=\"apify/website-content-crawler\",\n run_input={\"startUrls\": [{\"url\": \"https://python.langchain.com/en/latest/\"}]},\n dataset_mapping_function=lambda item: Document(\n page_content=item[\"text\"] or \"\", metadata={\"source\": item[\"url\"]}\n ),\n)\nInitialize the vector index from the crawled documents:\nindex = VectorstoreIndexCreator().from_loaders([loader])\nAnd finally, query the vector index:\nquery = \"What is LangChain?\"\nresult = index.query_with_sources(query)\nprint(result[\"answer\"])\nprint(result[\"sources\"])\n LangChain is a standard interface through which you can interact with a variety of large language models (LLMs). It provides modules that can be used to build language model applications, and it also provides chains and agents with memory capabilities.\nhttps://python.langchain.com/en/latest/modules/models/llms.html, https://python.langchain.com/en/latest/getting_started/getting_started.html\nprevious\nTool Input Schema\nnext\nArXiv API Tool\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/apify.html"}
+{"id": "f25fbd8b5099-0", "text": ".ipynb\n.pdf\nGraphQL tool\nGraphQL tool#\nThis Jupyter Notebook demonstrates how to use the BaseGraphQLTool component with an Agent.\nGraphQL is a query language for APIs and a runtime for executing those queries against your data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.\nBy including a BaseGraphQLTool in the list of tools provided to an Agent, you can grant your Agent the ability to query data from GraphQL APIs for any purposes you need.\nIn this example, we\u2019ll be using the public Star Wars GraphQL API available at the following endpoint: https://swapi-graphql.netlify.app/.netlify/functions/index.\nFirst, you need to install httpx and gql Python packages.\npip install httpx gql > /dev/null\nNow, let\u2019s create a BaseGraphQLTool instance with the specified Star Wars API endpoint and initialize an Agent with the tool.\nfrom langchain import OpenAI\nfrom langchain.agents import load_tools, initialize_agent, AgentType\nfrom langchain.utilities import GraphQLAPIWrapper\nllm = OpenAI(temperature=0)\ntools = load_tools([\"graphql\"], graphql_endpoint=\"https://swapi-graphql.netlify.app/.netlify/functions/index\", llm=llm)\nagent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\nNow, we can use the Agent to run queries against the Star Wars GraphQL API. Let\u2019s ask the Agent to list all the Star Wars films and their release dates.\ngraphql_fields = \"\"\"allFilms {\n films {\n title\n director\n releaseDate\n speciesConnection {\n species {\n name\n classification\n homeworld {\n name\n }", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/graphql.html"}
+{"id": "f25fbd8b5099-1", "text": "name\n classification\n homeworld {\n name\n }\n }\n }\n }\n }\n\"\"\"\nsuffix = \"Search for the titles of all the stawars films stored in the graphql database that has this schema \"\nagent.run(suffix + graphql_fields)\n> Entering new AgentExecutor chain...\n I need to query the graphql database to get the titles of all the star wars films\nAction: query_graphql\nAction Input: query { allFilms { films { title } } }\nObservation: \"{\\n \\\"allFilms\\\": {\\n \\\"films\\\": [\\n {\\n \\\"title\\\": \\\"A New Hope\\\"\\n },\\n {\\n \\\"title\\\": \\\"The Empire Strikes Back\\\"\\n },\\n {\\n \\\"title\\\": \\\"Return of the Jedi\\\"\\n },\\n {\\n \\\"title\\\": \\\"The Phantom Menace\\\"\\n },\\n {\\n \\\"title\\\": \\\"Attack of the Clones\\\"\\n },\\n {\\n \\\"title\\\": \\\"Revenge of the Sith\\\"\\n }\\n ]\\n }\\n}\"\nThought: I now know the titles of all the star wars films\nFinal Answer: The titles of all the star wars films are: A New Hope, The Empire Strikes Back, Return of the Jedi, The Phantom Menace, Attack of the Clones, and Revenge of the Sith.\n> Finished chain.\n'The titles of all the star wars films are: A New Hope, The Empire Strikes Back, Return of the Jedi, The Phantom Menace, Attack of the Clones, and Revenge of the Sith.'\nprevious\nGradio Tools\nnext\nHuggingFace Tools\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/graphql.html"}
+{"id": "f25fbd8b5099-2", "text": "By Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/graphql.html"}
+{"id": "14575a11db8c-0", "text": ".ipynb\n.pdf\nMetaphor Search\n Contents \nMetaphor Search\nCall the API\nUse Metaphor as a tool\nMetaphor Search#\nThis notebook goes over how to use Metaphor search.\nFirst, you need to set up the proper API keys and environment variables. Request an API key [here](Sign up for early access here).\nThen enter your API key as an environment variable.\nimport os\nos.environ[\"METAPHOR_API_KEY\"] = \"\"\nfrom langchain.utilities import MetaphorSearchAPIWrapper\nsearch = MetaphorSearchAPIWrapper()\nCall the API#\nresults takes in a Metaphor-optimized search query and a number of results (up to 500). It returns a list of results with title, url, author, and creation date.\nsearch.results(\"The best blog post about AI safety is definitely this: \", 10)", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/metaphor_search.html"}
+{"id": "14575a11db8c-1", "text": "{'results': [{'url': 'https://www.anthropic.com/index/core-views-on-ai-safety', 'title': 'Core Views on AI Safety: When, Why, What, and How', 'dateCreated': '2023-03-08', 'author': None, 'score': 0.1998831331729889}, {'url': 'https://aisafety.wordpress.com/', 'title': 'Extinction Risk from Artificial Intelligence', 'dateCreated': '2013-10-08', 'author': None, 'score': 0.19801370799541473}, {'url': 'https://www.lesswrong.com/posts/WhNxG4r774bK32GcH/the-simple-picture-on-ai-safety', 'title': 'The simple picture on AI safety - LessWrong', 'dateCreated': '2018-05-27', 'author': 'Alex Flint', 'score': 0.19735534489154816}, {'url': 'https://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/', 'title': 'No Time Like The Present For AI Safety Work', 'dateCreated': '2015-05-29', 'author': None, 'score': 0.19408763945102692}, {'url': 'https://www.lesswrong.com/posts/5BJvusxdwNXYQ4L9L/so-you-want-to-save-the-world', 'title': 'So You Want to Save the World - LessWrong', 'dateCreated': '2012-01-01', 'author': 'Lukeprog', 'score': 0.18853715062141418}, {'url': 'https://openai.com/blog/planning-for-agi-and-beyond', 'title': 'Planning for AGI and beyond', 'dateCreated':", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/metaphor_search.html"}
+{"id": "14575a11db8c-2", "text": "'title': 'Planning for AGI and beyond', 'dateCreated': '2023-02-24', 'author': 'Authors', 'score': 0.18665121495723724}, {'url': 'https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html', 'title': 'The Artificial Intelligence Revolution: Part 1 - Wait But Why', 'dateCreated': '2015-01-22', 'author': 'Tim Urban', 'score': 0.18604731559753418}, {'url': 'https://forum.effectivealtruism.org/posts/uGDCaPFaPkuxAowmH/anthropic-core-views-on-ai-safety-when-why-what-and-how', 'title': 'Anthropic: Core Views on AI Safety: When, Why, What, and How - EA Forum', 'dateCreated': '2023-03-09', 'author': 'Jonmenaster', 'score': 0.18415069580078125}, {'url': 'https://www.lesswrong.com/posts/xBrpph9knzWdtMWeQ/the-proof-of-doom', 'title': 'The Proof of Doom - LessWrong', 'dateCreated': '2022-03-09', 'author': 'Johnlawrenceaspden', 'score': 0.18159329891204834}, {'url': 'https://intelligence.org/why-ai-safety/', 'title': 'Why AI Safety? - Machine Intelligence Research Institute', 'dateCreated': '2017-03-01', 'author': None, 'score': 0.1814115345478058}]}", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/metaphor_search.html"}
+{"id": "14575a11db8c-3", "text": "[{'title': 'Core Views on AI Safety: When, Why, What, and How',\n 'url': 'https://www.anthropic.com/index/core-views-on-ai-safety',\n 'author': None,\n 'date_created': '2023-03-08'},\n {'title': 'Extinction Risk from Artificial Intelligence',\n 'url': 'https://aisafety.wordpress.com/',\n 'author': None,\n 'date_created': '2013-10-08'},\n {'title': 'The simple picture on AI safety - LessWrong',\n 'url': 'https://www.lesswrong.com/posts/WhNxG4r774bK32GcH/the-simple-picture-on-ai-safety',\n 'author': 'Alex Flint',\n 'date_created': '2018-05-27'},\n {'title': 'No Time Like The Present For AI Safety Work',\n 'url': 'https://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/',\n 'author': None,\n 'date_created': '2015-05-29'},\n {'title': 'So You Want to Save the World - LessWrong',\n 'url': 'https://www.lesswrong.com/posts/5BJvusxdwNXYQ4L9L/so-you-want-to-save-the-world',\n 'author': 'Lukeprog',\n 'date_created': '2012-01-01'},\n {'title': 'Planning for AGI and beyond',\n 'url': 'https://openai.com/blog/planning-for-agi-and-beyond',\n 'author': 'Authors',\n 'date_created': '2023-02-24'},", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/metaphor_search.html"}
+{"id": "14575a11db8c-4", "text": "'date_created': '2023-02-24'},\n {'title': 'The Artificial Intelligence Revolution: Part 1 - Wait But Why',\n 'url': 'https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html',\n 'author': 'Tim Urban',\n 'date_created': '2015-01-22'},\n {'title': 'Anthropic: Core Views on AI Safety: When, Why, What, and How - EA Forum',\n 'url': 'https://forum.effectivealtruism.org/posts/uGDCaPFaPkuxAowmH/anthropic-core-views-on-ai-safety-when-why-what-and-how',\n 'author': 'Jonmenaster',\n 'date_created': '2023-03-09'},\n {'title': 'The Proof of Doom - LessWrong',\n 'url': 'https://www.lesswrong.com/posts/xBrpph9knzWdtMWeQ/the-proof-of-doom',\n 'author': 'Johnlawrenceaspden',\n 'date_created': '2022-03-09'},\n {'title': 'Why AI Safety? - Machine Intelligence Research Institute',\n 'url': 'https://intelligence.org/why-ai-safety/',\n 'author': None,\n 'date_created': '2017-03-01'}]\nUse Metaphor as a tool#\nMetaphor can be used as a tool that gets URLs that other tools such as browsing tools.\nfrom langchain.agents.agent_toolkits import PlayWrightBrowserToolkit\nfrom langchain.tools.playwright.utils import (\n create_async_playwright_browser,# A synchronous browser is available, though it isn't compatible with jupyter.\n)\nasync_browser = create_async_playwright_browser()", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/metaphor_search.html"}
+{"id": "14575a11db8c-5", "text": ")\nasync_browser = create_async_playwright_browser()\ntoolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)\ntools = toolkit.get_tools()\ntools_by_name = {tool.name: tool for tool in tools}\nprint(tools_by_name.keys())\nnavigate_tool = tools_by_name[\"navigate_browser\"]\nextract_text = tools_by_name[\"extract_text\"]\nfrom langchain.agents import initialize_agent, AgentType\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.tools import MetaphorSearchResults\nllm = ChatOpenAI(model_name=\"gpt-4\", temperature=0.7)\nmetaphor_tool = MetaphorSearchResults(api_wrapper=search)\nagent_chain = initialize_agent([metaphor_tool, extract_text, navigate_tool], llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\nagent_chain.run(\"find me an interesting tweet about AI safety using Metaphor, then tell me the first sentence in the post. Do not finish until able to retrieve the first sentence.\")\n> Entering new AgentExecutor chain...\nThought: I need to find a tweet about AI safety using Metaphor Search.\nAction:\n```\n{\n \"action\": \"Metaphor Search Results JSON\",\n \"action_input\": {\n \"query\": \"interesting tweet AI safety\",\n \"num_results\": 1\n }\n}\n```\n{'results': [{'url': 'https://safe.ai/', 'title': 'Center for AI Safety', 'dateCreated': '2022-01-01', 'author': None, 'score': 0.18083244562149048}]}", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/metaphor_search.html"}
+{"id": "14575a11db8c-6", "text": "Observation: [{'title': 'Center for AI Safety', 'url': 'https://safe.ai/', 'author': None, 'date_created': '2022-01-01'}]\nThought:I need to navigate to the URL provided in the search results to find the tweet.\n> Finished chain.\n'I need to navigate to the URL provided in the search results to find the tweet.'\nprevious\nIFTTT WebHooks\nnext\nOpenWeatherMap API\n Contents\n \nMetaphor Search\nCall the API\nUse Metaphor as a tool\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/metaphor_search.html"}
+{"id": "3daf91967447-0", "text": ".ipynb\n.pdf\nWikipedia\nWikipedia#\nWikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.\nFirst, you need to install wikipedia python package.\n!pip install wikipedia\nfrom langchain.utilities import WikipediaAPIWrapper\nwikipedia = WikipediaAPIWrapper()\nwikipedia.run('HUNTER X HUNTER')", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html"}
+{"id": "3daf91967447-1", "text": "'Page: Hunter \u00d7 Hunter\\nSummary: Hunter \u00d7 Hunter (stylized as HUNTER\u00d7HUNTER and pronounced \"hunter hunter\") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\\'s sh\u014dnen manga magazine Weekly Sh\u014dnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tank\u014dbon volumes as of November 2022. The story focuses on a young boy named Gon Freecss who discovers that his father, who left him at a young age, is actually a world-renowned Hunter, a licensed professional who specializes in fantastical pursuits such as locating rare or unidentified animal species, treasure hunting, surveying unexplored enclaves, or hunting down lawless individuals. Gon departs on a journey to become a Hunter and eventually find his father. Along the way, Gon meets various other Hunters and encounters the paranormal.\\nHunter \u00d7 Hunter was adapted into a 62-episode anime television series produced by Nippon Animation and directed by Kazuhiro Furuhashi, which ran on Fuji Television from October 1999 to March 2001. Three separate original video animations (OVAs) totaling 30 episodes were subsequently produced by Nippon Animation and released in Japan from 2002 to 2004. A second anime television series by Madhouse aired on Nippon Television from October 2011 to September 2014, totaling 148 episodes, with two animated theatrical films released in 2013. There are also numerous audio albums, video games, musicals, and other media based on Hunter \u00d7 Hunter.\\nThe manga has been translated into English and released in North America by Viz Media since April 2005. Both television series have been also licensed by Viz Media, with the first series having aired on the Funimation Channel in 2009", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html"}
+{"id": "3daf91967447-2", "text": "by Viz Media, with the first series having aired on the Funimation Channel in 2009 and the second series broadcast on Adult Swim\\'s Toonami programming block from April 2016 to June 2019.\\nHunter \u00d7 Hunter has been a huge critical and financial success and has become one of the best-selling manga series of all time, having over 84 million copies in circulation by July 2022.\\n\\nPage: Hunter \u00d7 Hunter (2011 TV series)\\nSummary: Hunter \u00d7 Hunter is an anime television series that aired from 2011 to 2014 based on Yoshihiro Togashi\\'s manga series Hunter \u00d7 Hunter. The story begins with a young boy named Gon Freecss, who one day discovers that the father who he thought was dead, is in fact alive and well. He learns that his father, Ging, is a legendary \"Hunter\", an individual who has proven themselves an elite member of humanity. Despite the fact that Ging left his son with his relatives in order to pursue his own dreams, Gon becomes determined to follow in his father\\'s footsteps, pass the rigorous \"Hunter Examination\", and eventually find his father to become a Hunter in his own right.\\nThis new Hunter \u00d7 Hunter anime was announced on July 24, 2011. It is a complete reboot of the anime adaptation starting from the beginning of the manga, with no connections to the first anime from 1999. Produced by Nippon TV, VAP, Shueisha and Madhouse, the series is directed by Hiroshi K\u014djina, with Atsushi Maekawa and Tsutomu Kamishiro handling series composition, Takahiro Yoshimatsu designing the characters and Yoshihisa Hirano composing the music. Instead of having the old cast reprise their roles for the new adaptation, the series features an entirely new cast to voice the characters. The new series premiered airing weekly on Nippon TV and the nationwide", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html"}
+{"id": "3daf91967447-3", "text": "cast to voice the characters. The new series premiered airing weekly on Nippon TV and the nationwide Nippon News Network from October 2, 2011. The series started to be collected in both DVD and Blu-ray format on January 25, 2012. Viz Media has licensed the anime for a DVD/Blu-ray release in North America with an English dub. On television, the series began airing on Adult Swim\\'s Toonami programming block on April 17, 2016, and ended on June 23, 2019.The anime series\\' opening theme is alternated between the song \"Departure!\" and an alternate version titled \"Departure! -Second Version-\" both sung by Galneryus\\' vocalist Masatoshi Ono. Five pieces of music were used as the ending theme; \"Just Awake\" by the Japanese band Fear, and Loathing in Las Vegas in episodes 1 to 26, \"Hunting for Your Dream\" by Galneryus in episodes 27 to 58, \"Reason\" sung by Japanese duo Yuzu in episodes 59 to 75, \"Nagareboshi Kirari\" also sung by Yuzu from episode 76 to 98, which was originally from the anime film adaptation, Hunter \u00d7 Hunter: Phantom Rouge, and \"Hy\u014dri Ittai\" by Yuzu featuring Hyadain from episode 99 to 146, which was also used in the film Hunter \u00d7 Hunter: The Last Mission. The background music and soundtrack for the series was composed by Yoshihisa Hirano.\\n\\n\\n\\nPage: List of Hunter \u00d7 Hunter characters\\nSummary: The Hunter \u00d7 Hunter manga series, created by Yoshihiro Togashi, features an extensive cast of characters. It takes place in a fictional universe where licensed specialists known as Hunters travel the world taking on special jobs ranging from treasure hunting to assassination. The story initially focuses on Gon Freecss and", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html"}
+{"id": "3daf91967447-4", "text": "on special jobs ranging from treasure hunting to assassination. The story initially focuses on Gon Freecss and his quest to become a Hunter in order to find his father, Ging, who is himself a famous Hunter. On the way, Gon meets and becomes close friends with Killua Zoldyck, Kurapika and Leorio Paradinight.\\nAlthough most characters are human, most possess superhuman strength and/or supernatural abilities due to Nen, the ability to control one\\'s own life energy or aura. The world of the series also includes fantastical beasts such as the Chimera Ants or the Five great calamities.'", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html"}
+{"id": "3daf91967447-5", "text": "previous\nTwilio\nnext\nWolfram Alpha\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html"}
+{"id": "166c9d5b3d01-0", "text": ".ipynb\n.pdf\nBing Search\n Contents \nNumber of results\nMetadata Results\nBing Search#\nThis notebook goes over how to use the bing search component.\nFirst, you need to set up the proper API keys and environment variables. To set it up, follow the instructions found here.\nThen we will need to set some environment variables.\nimport os\nos.environ[\"BING_SUBSCRIPTION_KEY\"] = \"\"\nos.environ[\"BING_SEARCH_URL\"] = \"\"\nfrom langchain.utilities import BingSearchAPIWrapper\nsearch = BingSearchAPIWrapper()\nsearch.run(\"python\")", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/bing_search.html"}
+{"id": "166c9d5b3d01-1", "text": "'Thanks to the flexibility of Python and the powerful ecosystem of packages, the Azure CLI supports features such as autocompletion (in shells that support it), persistent credentials, JMESPath result parsing, lazy initialization, network-less unit tests, and more. Building an open-source and cross-platform Azure CLI with Python by Dan Taylor. Python releases by version number: Release version Release date Click for more. Python 3.11.1 Dec. 6, 2022 Download Release Notes. Python 3.10.9 Dec. 6, 2022 Download Release Notes. Python 3.9.16 Dec. 6, 2022 Download Release Notes. Python 3.8.16 Dec. 6, 2022 Download Release Notes. Python 3.7.16 Dec. 6, 2022 Download Release Notes. In this lesson, we will look at the += operator in Python and see how it works with several simple examples.. The operator \u2018+=\u2019 is a shorthand for the addition assignment operator.It adds two values and assigns the sum to a variable (left operand). W3Schools offers free online tutorials, references and exercises in all the major languages of the web. Covering popular subjects like HTML, CSS, JavaScript, Python, SQL, Java, and many, many more. This tutorial introduces the reader informally to the basic concepts and features of the Python language and system. It helps to have a Python interpreter handy for hands-on experience, but all examples are self-contained, so the tutorial can be read off-line as well. For a description of standard objects", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/bing_search.html"}
+{"id": "166c9d5b3d01-2", "text": "self-contained, so the tutorial can be read off-line as well. For a description of standard objects and modules, see The Python Standard ... Python is a general-purpose, versatile, and powerful programming language. It's a great first language because Python code is concise and easy to read. Whatever you want to do, python can do it. From web development to machine learning to data science, Python is the language for you. To install Python using the Microsoft Store: Go to your Start menu (lower left Windows icon), type "Microsoft Store", select the link to open the store. Once the store is open, select Search from the upper-right menu and enter "Python". Select which version of Python you would like to use from the results under Apps. Under the \u201cPython Releases for Mac OS X\u201d heading, click the link for the Latest Python 3 Release - Python 3.x.x. As of this writing, the latest version was Python 3.8.4. Scroll to the bottom and click macOS 64-bit installer to start the download. When the installer is finished downloading, move on to the next step. Step 2: Run the Installer'", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/bing_search.html"}
+{"id": "166c9d5b3d01-3", "text": "Number of results#\nYou can use the k parameter to set the number of results\nsearch = BingSearchAPIWrapper(k=1)\nsearch.run(\"python\")\n'Thanks to the flexibility of Python and the powerful ecosystem of packages, the Azure CLI supports features such as autocompletion (in shells that support it), persistent credentials, JMESPath result parsing, lazy initialization, network-less unit tests, and more. Building an open-source and cross-platform Azure CLI with Python by Dan Taylor.'\nMetadata Results#\nRun query through BingSearch and return snippet, title, and link metadata.\nSnippet: The description of the result.\nTitle: The title of the result.\nLink: The link to the result.\nsearch = BingSearchAPIWrapper()\nsearch.results(\"apples\", 5)\n[{'snippet': 'Lady Alice. Pink Lady apples aren\u2019t the only lady in the apple family. Lady Alice apples were discovered growing, thanks to bees pollinating, in Washington. They are smaller and slightly more stout in appearance than other varieties. Their skin color appears to have red and yellow stripes running from stem to butt.',\n 'title': '25 Types of Apples - Jessica Gavin',\n 'link': 'https://www.jessicagavin.com/types-of-apples/'},\n {'snippet': 'Apples can do a lot for you, thanks to plant chemicals called flavonoids. And they have pectin, a fiber that breaks down in your gut. If you take off the apple\u2019s skin before eating it, you won ...',\n 'title': 'Apples: Nutrition & Health Benefits - WebMD',\n 'link': 'https://www.webmd.com/food-recipes/benefits-apples'},", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/bing_search.html"}
+{"id": "166c9d5b3d01-4", "text": "{'snippet': 'Apples boast many vitamins and minerals, though not in high amounts. However, apples are usually a good source of vitamin C. Vitamin C. Also called ascorbic acid, this vitamin is a common ...',\n 'title': 'Apples 101: Nutrition Facts and Health Benefits',\n 'link': 'https://www.healthline.com/nutrition/foods/apples'},\n {'snippet': 'Weight management. The fibers in apples can slow digestion, helping one to feel greater satisfaction after eating. After following three large prospective cohorts of 133,468 men and women for 24 years, researchers found that higher intakes of fiber-rich fruits with a low glycemic load, particularly apples and pears, were associated with the least amount of weight gain over time.',\n 'title': 'Apples | The Nutrition Source | Harvard T.H. Chan School of Public Health',\n 'link': 'https://www.hsph.harvard.edu/nutritionsource/food-features/apples/'}]\nprevious\nShell Tool\nnext\nBrave Search\n Contents\n \nNumber of results\nMetadata Results\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/bing_search.html"}
+{"id": "e847eb4b9d57-0", "text": ".ipynb\n.pdf\nOpenWeatherMap API\n Contents \nUse the wrapper\nUse the tool\nOpenWeatherMap API#\nThis notebook goes over how to use the OpenWeatherMap component to fetch weather information.\nFirst, you need to sign up for an OpenWeatherMap API key:\nGo to OpenWeatherMap and sign up for an API key here\npip install pyowm\nThen we will need to set some environment variables:\nSave your API KEY into OPENWEATHERMAP_API_KEY env variable\nUse the wrapper#\nfrom langchain.utilities import OpenWeatherMapAPIWrapper\nimport os\nos.environ[\"OPENWEATHERMAP_API_KEY\"] = \"\"\nweather = OpenWeatherMapAPIWrapper()\nweather_data = weather.run(\"London,GB\")\nprint(weather_data)\nIn London,GB, the current weather is as follows:\nDetailed status: broken clouds\nWind speed: 2.57 m/s, direction: 240\u00b0\nHumidity: 55%\nTemperature: \n - Current: 20.12\u00b0C\n - High: 21.75\u00b0C\n - Low: 18.68\u00b0C\n - Feels like: 19.62\u00b0C\nRain: {}\nHeat index: None\nCloud cover: 75%\nUse the tool#\nfrom langchain.llms import OpenAI\nfrom langchain.agents import load_tools, initialize_agent, AgentType\nimport os\nos.environ[\"OPENAI_API_KEY\"] = \"\"\nos.environ[\"OPENWEATHERMAP_API_KEY\"] = \"\"\nllm = OpenAI(temperature=0)\ntools = load_tools([\"openweathermap-api\"], llm)\nagent_chain = initialize_agent(\n tools=tools,\n llm=llm,\n agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n verbose=True\n)", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/openweathermap.html"}
+{"id": "e847eb4b9d57-1", "text": "agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n verbose=True\n)\nagent_chain.run(\"What's the weather like in London?\")\n> Entering new AgentExecutor chain...\n I need to find out the current weather in London.\nAction: OpenWeatherMap\nAction Input: London,GB\nObservation: In London,GB, the current weather is as follows:\nDetailed status: broken clouds\nWind speed: 2.57 m/s, direction: 240\u00b0\nHumidity: 56%\nTemperature: \n - Current: 20.11\u00b0C\n - High: 21.75\u00b0C\n - Low: 18.68\u00b0C\n - Feels like: 19.64\u00b0C\nRain: {}\nHeat index: None\nCloud cover: 75%\nThought: I now know the current weather in London.\nFinal Answer: The current weather in London is broken clouds, with a wind speed of 2.57 m/s, direction 240\u00b0, humidity of 56%, temperature of 20.11\u00b0C, high of 21.75\u00b0C, low of 18.68\u00b0C, and a heat index of None.\n> Finished chain.\n'The current weather in London is broken clouds, with a wind speed of 2.57 m/s, direction 240\u00b0, humidity of 56%, temperature of 20.11\u00b0C, high of 21.75\u00b0C, low of 18.68\u00b0C, and a heat index of None.'\nprevious\nMetaphor Search\nnext\nPubMed Tool\n Contents\n \nUse the wrapper\nUse the tool\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/openweathermap.html"}
+{"id": "3bbcf171f181-0", "text": ".ipynb\n.pdf\nSerpAPI\n Contents \nCustom Parameters\nSerpAPI#\nThis notebook goes over how to use the SerpAPI component to search the web.\nfrom langchain.utilities import SerpAPIWrapper\nsearch = SerpAPIWrapper()\nsearch.run(\"Obama's first name?\")\n'Barack Hussein Obama II'\nCustom Parameters#\nYou can also customize the SerpAPI wrapper with arbitrary parameters. For example, in the below example we will use bing instead of google.\nparams = {\n \"engine\": \"bing\",\n \"gl\": \"us\",\n \"hl\": \"en\",\n}\nsearch = SerpAPIWrapper(params=params)\nsearch.run(\"Obama's first name?\")\n'Barack Hussein Obama II is an American politician who served as the 44th president of the United States from 2009 to 2017. A member of the Democratic Party, Obama was the first African-American presi\u2026New content will be added above the current area of focus upon selectionBarack Hussein Obama II is an American politician who served as the 44th president of the United States from 2009 to 2017. A member of the Democratic Party, Obama was the first African-American president of the United States. He previously served as a U.S. senator from Illinois from 2005 to 2008 and as an Illinois state senator from 1997 to 2004, and previously worked as a civil rights lawyer before entering politics.Wikipediabarackobama.com'\nfrom langchain.agents import Tool\n# You can create the tool to pass to an agent\nrepl_tool = Tool(\n name=\"python_repl\",", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/serpapi.html"}
+{"id": "3bbcf171f181-1", "text": "repl_tool = Tool(\n name=\"python_repl\",\n description=\"A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.\",\n func=search.run,\n)\nprevious\nSearxNG Search API\nnext\nTwilio\n Contents\n \nCustom Parameters\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/serpapi.html"}
+{"id": "f82d14cfb42d-0", "text": ".ipynb\n.pdf\nIFTTT WebHooks\n Contents \nCreating a webhook\nConfiguring the \u201cIf This\u201d\nConfiguring the \u201cThen That\u201d\nFinishing up\nIFTTT WebHooks#\nThis notebook shows how to use IFTTT Webhooks.\nFrom https://github.com/SidU/teams-langchain-js/wiki/Connecting-IFTTT-Services.\nCreating a webhook#\nGo to https://ifttt.com/create\nConfiguring the \u201cIf This\u201d#\nClick on the \u201cIf This\u201d button in the IFTTT interface.\nSearch for \u201cWebhooks\u201d in the search bar.\nChoose the first option for \u201cReceive a web request with a JSON payload.\u201d\nChoose an Event Name that is specific to the service you plan to connect to.\nThis will make it easier for you to manage the webhook URL.\nFor example, if you\u2019re connecting to Spotify, you could use \u201cSpotify\u201d as your\nEvent Name.\nClick the \u201cCreate Trigger\u201d button to save your settings and create your webhook.\nConfiguring the \u201cThen That\u201d#\nTap on the \u201cThen That\u201d button in the IFTTT interface.\nSearch for the service you want to connect, such as Spotify.\nChoose an action from the service, such as \u201cAdd track to a playlist\u201d.\nConfigure the action by specifying the necessary details, such as the playlist name,\ne.g., \u201cSongs from AI\u201d.\nReference the JSON Payload received by the Webhook in your action. For the Spotify\nscenario, choose \u201c{{JsonPayload}}\u201d as your search query.\nTap the \u201cCreate Action\u201d button to save your action settings.\nOnce you have finished configuring your action, click the \u201cFinish\u201d button to\ncomplete the setup.\nCongratulations! You have successfully connected the Webhook to the desired\nservice, and you\u2019re ready to start receiving data and triggering actions \ud83c\udf89", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/ifttt.html"}
+{"id": "f82d14cfb42d-1", "text": "service, and you\u2019re ready to start receiving data and triggering actions \ud83c\udf89\nFinishing up#\nTo get your webhook URL go to https://ifttt.com/maker_webhooks/settings\nCopy the IFTTT key value from there. The URL is of the form\nhttps://maker.ifttt.com/use/YOUR_IFTTT_KEY. Grab the YOUR_IFTTT_KEY value.\nfrom langchain.tools.ifttt import IFTTTWebhook\nimport os\nkey = os.environ[\"IFTTTKey\"]\nurl = f\"https://maker.ifttt.com/trigger/spotify/json/with/key/{key}\"\ntool = IFTTTWebhook(name=\"Spotify\", description=\"Add a song to spotify playlist\", url=url)\ntool.run(\"taylor swift\")\n\"Congratulations! You've fired the spotify JSON event\"\nprevious\nHuman as a tool\nnext\nMetaphor Search\n Contents\n \nCreating a webhook\nConfiguring the \u201cIf This\u201d\nConfiguring the \u201cThen That\u201d\nFinishing up\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/ifttt.html"}
+{"id": "5822eb36f7e2-0", "text": ".ipynb\n.pdf\nGoogle Places\nGoogle Places#\nThis notebook goes through how to use Google Places API\n#!pip install googlemaps\nimport os\nos.environ[\"GPLACES_API_KEY\"] = \"\"\nfrom langchain.tools import GooglePlacesTool\nplaces = GooglePlacesTool()\nplaces.run(\"al fornos\")\n\"1. Delfina Restaurant\\nAddress: 3621 18th St, San Francisco, CA 94110, USA\\nPhone: (415) 552-4055\\nWebsite: https://www.delfinasf.com/\\n\\n\\n2. Piccolo Forno\\nAddress: 725 Columbus Ave, San Francisco, CA 94133, USA\\nPhone: (415) 757-0087\\nWebsite: https://piccolo-forno-sf.com/\\n\\n\\n3. L'Osteria del Forno\\nAddress: 519 Columbus Ave, San Francisco, CA 94133, USA\\nPhone: (415) 982-1124\\nWebsite: Unknown\\n\\n\\n4. Il Fornaio\\nAddress: 1265 Battery St, San Francisco, CA 94111, USA\\nPhone: (415) 986-0100\\nWebsite: https://www.ilfornaio.com/\\n\\n\"\nprevious\nFile System Tools\nnext\nGoogle Search\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/google_places.html"}
+{"id": "dc9ee2c40fdb-0", "text": ".ipynb\n.pdf\nShell Tool\n Contents \nUse with Agents\nShell Tool#\nGiving agents access to the shell is powerful (though risky outside a sandboxed environment).\nThe LLM can use it to execute any shell commands. A common use case for this is letting the LLM interact with your local file system.\nfrom langchain.tools import ShellTool\nshell_tool = ShellTool()\nprint(shell_tool.run({\"commands\": [\"echo 'Hello World!'\", \"time\"]}))\nHello World!\nreal\t0m0.000s\nuser\t0m0.000s\nsys\t0m0.000s\n/Users/wfh/code/lc/lckg/langchain/tools/shell/tool.py:34: UserWarning: The shell tool has no safeguards by default. Use at your own risk.\n warnings.warn(\nUse with Agents#\nAs with all tools, these can be given to an agent to accomplish more complex tasks. Let\u2019s have the agent fetch some links from a web page.\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.agents import initialize_agent\nfrom langchain.agents import AgentType\nllm = ChatOpenAI(temperature=0)\nshell_tool.description = shell_tool.description + f\"args {shell_tool.args}\".replace(\"{\", \"{{\").replace(\"}\", \"}}\")\nself_ask_with_search = initialize_agent([shell_tool], llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\nself_ask_with_search.run(\"Download the langchain.com webpage and grep for all urls. Return only a sorted list of them. Be sure to use double quotes.\")\n> Entering new AgentExecutor chain...\nQuestion: What is the task?\nThought: We need to download the langchain.com webpage and extract all the URLs from it. Then we need to sort the URLs and return them.\nAction:\n```", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/bash.html"}
+{"id": "dc9ee2c40fdb-1", "text": "Action:\n```\n{\n \"action\": \"shell\",\n \"action_input\": {\n \"commands\": [\n \"curl -s https://langchain.com | grep -o 'http[s]*://[^\\\" ]*' | sort\"\n ]\n }\n}\n```\n/Users/wfh/code/lc/lckg/langchain/tools/shell/tool.py:34: UserWarning: The shell tool has no safeguards by default. Use at your own risk.\n warnings.warn(\nObservation: https://blog.langchain.dev/\nhttps://discord.gg/6adMQxSpJS\nhttps://docs.langchain.com/docs/\nhttps://github.com/hwchase17/chat-langchain\nhttps://github.com/hwchase17/langchain\nhttps://github.com/hwchase17/langchainjs\nhttps://github.com/sullivan-sean/chat-langchainjs\nhttps://js.langchain.com/docs/\nhttps://python.langchain.com/en/latest/\nhttps://twitter.com/langchainai\nThought:The URLs have been successfully extracted and sorted. We can return the list of URLs as the final answer.\nFinal Answer: [\"https://blog.langchain.dev/\", \"https://discord.gg/6adMQxSpJS\", \"https://docs.langchain.com/docs/\", \"https://github.com/hwchase17/chat-langchain\", \"https://github.com/hwchase17/langchain\", \"https://github.com/hwchase17/langchainjs\", \"https://github.com/sullivan-sean/chat-langchainjs\", \"https://js.langchain.com/docs/\", \"https://python.langchain.com/en/latest/\", \"https://twitter.com/langchainai\"]\n> Finished chain.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/bash.html"}
+{"id": "dc9ee2c40fdb-2", "text": "> Finished chain.\n'[\"https://blog.langchain.dev/\", \"https://discord.gg/6adMQxSpJS\", \"https://docs.langchain.com/docs/\", \"https://github.com/hwchase17/chat-langchain\", \"https://github.com/hwchase17/langchain\", \"https://github.com/hwchase17/langchainjs\", \"https://github.com/sullivan-sean/chat-langchainjs\", \"https://js.langchain.com/docs/\", \"https://python.langchain.com/en/latest/\", \"https://twitter.com/langchainai\"]'\nprevious\nAWS Lambda API\nnext\nBing Search\n Contents\n \nUse with Agents\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/bash.html"}
+{"id": "76c3574f986e-0", "text": ".ipynb\n.pdf\nRequests\n Contents \nInside the tool\nRequests#\nThe web contains a lot of information that LLMs do not have access to. In order to easily let LLMs interact with that information, we provide a wrapper around the Python Requests module that takes in a URL and fetches data from that URL.\nfrom langchain.agents import load_tools\nrequests_tools = load_tools([\"requests_all\"])\nrequests_tools\n[RequestsGetTool(name='requests_get', description='A portal to the internet. Use this when you need to get specific content from a website. Input should be a url (i.e. https://www.google.com). The output will be the text response of the GET request.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None)),\n RequestsPostTool(name='requests_post', description='Use this when you want to POST to a website.\\n Input should be a json string with two keys: \"url\" and \"data\".\\n The value of \"url\" should be a string, and the value of \"data\" should be a dictionary of \\n key-value pairs you want to POST to the url.\\n Be careful to always use double quotes for strings in the json string\\n The output will be the text response of the POST request.\\n ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None)),", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/requests.html"}
+{"id": "76c3574f986e-1", "text": "RequestsPatchTool(name='requests_patch', description='Use this when you want to PATCH to a website.\\n Input should be a json string with two keys: \"url\" and \"data\".\\n The value of \"url\" should be a string, and the value of \"data\" should be a dictionary of \\n key-value pairs you want to PATCH to the url.\\n Be careful to always use double quotes for strings in the json string\\n The output will be the text response of the PATCH request.\\n ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None)),\n RequestsPutTool(name='requests_put', description='Use this when you want to PUT to a website.\\n Input should be a json string with two keys: \"url\" and \"data\".\\n The value of \"url\" should be a string, and the value of \"data\" should be a dictionary of \\n key-value pairs you want to PUT to the url.\\n Be careful to always use double quotes for strings in the json string.\\n The output will be the text response of the PUT request.\\n ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None)),\n RequestsDeleteTool(name='requests_delete', description='A portal to the internet. Use this when you need to make a DELETE request to a URL. Input should be a specific url, and the output will be the text response of the DELETE request.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None))]\nInside the tool#\nEach requests tool contains a requests wrapper. You can work with these wrappers directly below", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/requests.html"}
+{"id": "76c3574f986e-2", "text": "Each requests tool contains a requests wrapper. You can work with these wrappers directly below\n# Each tool wrapps a requests wrapper\nrequests_tools[0].requests_wrapper\nTextRequestsWrapper(headers=None, aiosession=None)\nfrom langchain.utilities import TextRequestsWrapper\nrequests = TextRequestsWrapper()\nrequests.get(\"https://www.google.com\")", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/requests.html"}
+{"id": "76c3574f986e-3", "text": "'GoogleWeb History", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/requests.html"}
+{"id": "76c3574f986e-14", "text": "class=gb4>Web History | Settings | Sign in
'", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/requests.html"}
+{"id": "76c3574f986e-21", "text": "previous\nPython REPL\nnext\nSceneXplain\n Contents\n \nInside the tool\nBy Harrison Chase\n \n \u00a9 Copyright 2023, Harrison Chase.\n \n Last updated on Jun 11, 2023.", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/requests.html"}
+{"id": "372a537fe629-0", "text": ".ipynb\n.pdf\nFile System Tools\n Contents \nThe FileManagementToolkit\nSelecting File System Tools\nFile System Tools#\nLangChain provides tools for interacting with a local file system out of the box. This notebook walks through some of them.\nNote: these tools are not recommended for use outside a sandboxed environment!\nFirst, we\u2019ll import the tools.\nfrom langchain.tools.file_management import (\n ReadFileTool,\n CopyFileTool,\n DeleteFileTool,\n MoveFileTool,\n WriteFileTool,\n ListDirectoryTool,\n)\nfrom langchain.agents.agent_toolkits import FileManagementToolkit\nfrom tempfile import TemporaryDirectory\n# We'll make a temporary directory to avoid clutter\nworking_directory = TemporaryDirectory()\nThe FileManagementToolkit#\nIf you want to provide all the file tooling to your agent, it\u2019s easy to do so with the toolkit. We\u2019ll pass the temporary directory in as a root directory as a workspace for the LLM.\nIt\u2019s recommended to always pass in a root directory, since without one, it\u2019s easy for the LLM to pollute the working directory, and without one, there isn\u2019t any validation against\nstraightforward prompt injection.\ntoolkit = FileManagementToolkit(root_dir=str(working_directory.name)) # If you don't provide a root_dir, operations will default to the current working directory\ntoolkit.get_tools()", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/filesystem.html"}
+{"id": "372a537fe629-1", "text": "toolkit.get_tools()\n[CopyFileTool(name='copy_file', description='Create a copy of a file in a specified location', args_schema=, return_direct=False, verbose=False, callback_manager=, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'),\n DeleteFileTool(name='file_delete', description='Delete a file', args_schema=, return_direct=False, verbose=False, callback_manager=, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'),\n FileSearchTool(name='file_search', description='Recursively search for files in a subdirectory that match the regex pattern', args_schema=, return_direct=False, verbose=False, callback_manager=, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'),", "source": "https://python.langchain.com/en/latest/modules/agents/tools/examples/filesystem.html"}
+{"id": "372a537fe629-2", "text": "MoveFileTool(name='move_file', description='Move or rename a file from one location to another', args_schema=, return_direct=False, verbose=False, callback_manager=, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'),\n ReadFileTool(name='read_file', description='Read file from disk', args_schema=