id | text | source
---|---|---
9ae0edef2a4c-2 | Vectara. What is Vectara? Vectara is a developer-first API platform for building GenAI applications. To use Vectara, first sign up and create an account, then create a corpus and an API key for indexing and searching. You can use Vectara's indexing API to add documents into Vectara's index, and Vectara's Search API to query that index (hybrid search is supported implicitly). You can use Vectara's integration with LangChain as a vector store or via the Retriever abstraction. Installation and Setup: no special installation steps are required to use Vectara with LangChain; you just provide your customer_id, corpus ID, and an API key created within the Vectara console to enable indexing and searching. Alternatively, these can be provided as environment variables: export VECTARA_CUSTOMER_ID="your_customer_id"; export VECTARA_CORPUS_ID="your_corpus_id"; export VECTARA_API_KEY="your-vectara-api-key". Usage, VectorStore: there exists a wrapper around the Vectara platform, allowing you to use it as a vectorstore, whether for semantic search or example selection. To import this vectorstore: from langchain.vectorstores import Vectara. To create an instance of the Vectara vectorstore: vectara = Vectara( vectara_customer_id=customer_id, | https://python.langchain.com/docs/integrations/providers/vectara/ |
9ae0edef2a4c-3 | = Vectara( vectara_customer_id=customer_id, vectara_corpus_id=corpus_id, vectara_api_key=api_key). The customer_id, corpus_id and api_key are optional; if they are not supplied, they will be read from the environment variables VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY, respectively. After you have the vectorstore, you can add_texts or add_documents as per the standard VectorStore interface, for example: vectara.add_texts(["to be or not to be", "that is the question"]). Since Vectara supports file upload, we also added the ability to upload files (PDF, TXT, HTML, PPT, DOC, etc.) directly. When using this method, each file is uploaded directly to the Vectara backend and processed and chunked optimally there, so you don't have to use the LangChain document loader or chunking mechanism. As an example: vectara.add_files(["path/to/file1.pdf", "path/to/file2.pdf",...]). To query the vectorstore, you can use the similarity_search method (or similarity_search_with_score), which takes a query string and returns a list of results: results = vectara.similarity_search("what is LangChain?"). similarity_search_with_score also supports the following additional arguments: k, the number of results to return (defaults to 5); lambda_val, the lexical matching factor for hybrid search (defaults to 0.025); filter, a filter to apply to the results (default None); and n_sentence_context, the number of sentences to include before/after the actual matching segment when returning results. This defaults to 0 so as to return the exact text segment that matches, but can be set to other values, e.g. 2 or 3, to return adjacent text segments. The results are returned as a list of relevant documents, and a relevance score of each | https://python.langchain.com/docs/integrations/providers/vectara/ |
9ae0edef2a4c-4 | adjacent text segments. The results are returned as a list of relevant documents, and a relevance score of each document. For more detailed examples of using the Vectara wrapper, see one of these two sample notebooks: Chat Over Documents with Vectara, and Vectara Text Generation. | https://python.langchain.com/docs/integrations/providers/vectara/ |
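A minimal sketch of a scored query using the arguments described above, assuming credentials are supplied via the environment variables; the query string is a placeholder:

```python
from langchain.vectorstores import Vectara

# Reads VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY
# from the environment when no explicit arguments are given.
vectara = Vectara()

# Hybrid search with the tuning knobs described above.
results = vectara.similarity_search_with_score(
    "what is LangChain?",
    k=5,                   # number of results to return
    lambda_val=0.025,      # lexical matching factor for hybrid search
    n_sentence_context=2,  # sentences of context around each match
)
for doc, score in results:
    print(score, doc.page_content[:80])
```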
7a5bb613297f-0 | Chat Over Documents with Vectara | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/vectara/vectara_chat |
7a5bb613297f-2 | Chat Over Documents with Vectara. This notebook is based on the chat_vector_db notebook, but uses Vectara as the vector database. import os; from langchain.vectorstores import Vectara; from langchain.vectorstores.vectara import VectaraRetriever; from langchain.llms import OpenAI; from langchain.chains import ConversationalRetrievalChain. Load in documents (you can replace this with a loader for whatever type of data you want): from langchain.document_loaders import TextLoader; loader = TextLoader("../../modules/state_of_the_union.txt"); documents = loader.load(). We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them. vectorstore = Vectara.from_documents(documents, embedding=None). We can now create a memory object, which is necessary to track the inputs/outputs and hold a conversation. from langchain.memory import ConversationBufferMemory; memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True). We now initialize the ConversationalRetrievalChain: openai_api_key = os.environ["OPENAI_API_KEY"]; llm = OpenAI(openai_api_key=openai_api_key, temperature=0); retriever = vectorstore.as_retriever(lambda_val=0.025, k=5, filter=None); d = retriever.get_relevant_documents( "What did the president say about | https://python.langchain.com/docs/integrations/providers/vectara/vectara_chat |
7a5bb613297f-3 | = retriever.get_relevant_documents( "What did the president say about Ketanji Brown Jackson")qa = ConversationalRetrievalChain.from_llm(llm, retriever, memory=memory)query = "What did the president say about Ketanji Brown Jackson"result = qa({"question": query})result["answer"] " The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence."query = "Did he mention who she succeeded"result = qa({"question": query})result["answer"] ' Justice Stephen Breyer'Pass in chat history: In the above example, we used a Memory object to track chat history. We can also just pass it in explicitly. In order to do this, we need to initialize a chain without any memory object. qa = ConversationalRetrievalChain.from_llm( OpenAI(temperature=0), vectorstore.as_retriever())Here's an example of asking a question with no chat history: chat_history = []query = "What did the president say about Ketanji Brown Jackson"result = qa({"question": query, "chat_history": chat_history})result["answer"] " The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence."Here's an example of asking a question with some chat history: chat_history = [(query, result["answer"])]query = "Did he mention who she succeeded"result = qa({"question": query, "chat_history": chat_history})result["answer"] ' Justice Stephen Breyer'Return Source Documents: You can also easily return source documents from the ConversationalRetrievalChain. This is useful for when you want to inspect | https://python.langchain.com/docs/integrations/providers/vectara/vectara_chat |
7a5bb613297f-4 | source documents from the ConversationalRetrievalChain. This is useful for when you want to inspect what documents were returned.qa = ConversationalRetrievalChain.from_llm( llm, vectorstore.as_retriever(), return_source_documents=True)chat_history = []query = "What did the president say about Ketanji Brown Jackson"result = qa({"question": query, "chat_history": chat_history})result["source_documents"][0] Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})ConversationalRetrievalChain with search_distance​If you are using a vector store that supports filtering by search distance, you can add a threshold value parameter.vectordbkwargs = {"search_distance": 0.9}qa = ConversationalRetrievalChain.from_llm( OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)chat_history = []query = "What did the president say about Ketanji Brown | https://python.langchain.com/docs/integrations/providers/vectara/vectara_chat |
7a5bb613297f-5 | = []query = "What did the president say about Ketanji Brown Jackson"result = qa( {"question": query, "chat_history": chat_history, "vectordbkwargs": vectordbkwargs})print(result["answer"]) The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.ConversationalRetrievalChain with map_reduce​We can also use different types of combine document chains with the ConversationalRetrievalChain chain.from langchain.chains import LLMChainfrom langchain.chains.question_answering import load_qa_chainfrom langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPTquestion_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)doc_chain = load_qa_chain(llm, chain_type="map_reduce")chain = ConversationalRetrievalChain( retriever=vectorstore.as_retriever(), question_generator=question_generator, combine_docs_chain=doc_chain,)chat_history = []query = "What did the president say about Ketanji Brown Jackson"result = chain({"question": query, "chat_history": chat_history})result["answer"] " The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, who he described as one of the nation's top legal minds, to continue Justice Breyer's legacy of excellence."ConversationalRetrievalChain with Question Answering with sources​You can also use this chain with the question answering with sources chain.from langchain.chains.qa_with_sources import load_qa_with_sources_chainquestion_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)doc_chain = | https://python.langchain.com/docs/integrations/providers/vectara/vectara_chat |
7a5bb613297f-6 | LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)doc_chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")chain = ConversationalRetrievalChain( retriever=vectorstore.as_retriever(), question_generator=question_generator, combine_docs_chain=doc_chain,)chat_history = []query = "What did the president say about Ketanji Brown Jackson"result = chain({"question": query, "chat_history": chat_history})result["answer"] " The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, who he described as one of the nation's top legal minds, and that she will continue Justice Breyer's legacy of excellence.\nSOURCES: ../../../state_of_the_union.txt"ConversationalRetrievalChain with streaming to stdout​Output from the chain will be streamed to stdout token by token in this example.from langchain.chains.llm import LLMChainfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerfrom langchain.chains.conversational_retrieval.prompts import ( CONDENSE_QUESTION_PROMPT, QA_PROMPT,)from langchain.chains.question_answering import load_qa_chain# Construct a ConversationalRetrievalChain with a streaming llm for combine docs# and a separate, non-streaming llm for question generationllm = OpenAI(temperature=0, openai_api_key=openai_api_key)streaming_llm = OpenAI( streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0, openai_api_key=openai_api_key,)question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)doc_chain = load_qa_chain(streaming_llm, | https://python.langchain.com/docs/integrations/providers/vectara/vectara_chat |
7a5bb613297f-7 | prompt=CONDENSE_QUESTION_PROMPT)doc_chain = load_qa_chain(streaming_llm, chain_type="stuff", prompt=QA_PROMPT)qa = ConversationalRetrievalChain( retriever=vectorstore.as_retriever(), combine_docs_chain=doc_chain, question_generator=question_generator,)chat_history = []query = "What did the president say about Ketanji Brown Jackson"result = qa({"question": query, "chat_history": chat_history}) The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.chat_history = [(query, result["answer"])]query = "Did he mention who she succeeded"result = qa({"question": query, "chat_history": chat_history}) Justice Stephen Breyer. get_chat_history Function: You can also specify a get_chat_history function, which can be used to format the chat_history string. def get_chat_history(inputs) -> str: res = [] for human, ai in inputs: res.append(f"Human:{human}\nAI:{ai}") return "\n".join(res)qa = ConversationalRetrievalChain.from_llm( llm, vectorstore.as_retriever(), get_chat_history=get_chat_history)chat_history = []query = "What did the president say about Ketanji Brown Jackson"result = qa({"question": query, "chat_history": chat_history})result["answer"] " The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence." | https://python.langchain.com/docs/integrations/providers/vectara/vectara_chat |
f8b137a0ee94-0 | Vectara Text Generation | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/vectara/vectara_text_generation |
f8b137a0ee94-2 | Vectara Text Generation. This notebook is based on the text generation notebook, adapted to Vectara. Prepare Data: first, we prepare the data. For this example, we fetch a documentation site that consists of markdown files hosted on GitHub and split them into small enough Documents. import os; from langchain.llms import OpenAI; from langchain.docstore.document import Document; import requests; from langchain.vectorstores import Vectara; from langchain.text_splitter import CharacterTextSplitter; from langchain.prompts import PromptTemplate; import pathlib; import subprocess; import tempfile. def get_github_docs(repo_owner, repo_name): with tempfile.TemporaryDirectory() as d: subprocess.check_call( f"git clone --depth 1 https://github.com/{repo_owner}/{repo_name}.git .", cwd=d, shell=True, ) git_sha = ( subprocess.check_output("git rev-parse HEAD", shell=True, cwd=d) .decode("utf-8") .strip() | https://python.langchain.com/docs/integrations/providers/vectara/vectara_text_generation |
f8b137a0ee94-3 | .strip() ) repo_path = pathlib.Path(d) markdown_files = list(repo_path.glob("*/*.md")) + list( repo_path.glob("*/*.mdx") ) for markdown_file in markdown_files: with open(markdown_file, "r") as f: relative_path = markdown_file.relative_to(repo_path) github_url = f"https://github.com/{repo_owner}/{repo_name}/blob/{git_sha}/{relative_path}" yield Document(page_content=f.read(), metadata={"source": github_url})sources = get_github_docs("yirenlu92", "deno-manual-forked")source_chunks = []splitter = CharacterTextSplitter(separator=" ", chunk_size=1024, chunk_overlap=0)for source in sources: for chunk in splitter.split_text(source.page_content): source_chunks.append(chunk) Cloning into '.'...Set Up Vector DB​Now that we have the documentation content in chunks, let's put all this information in a vector index for easy retrieval.import ossearch_index = Vectara.from_texts(source_chunks, embedding=None)Set Up LLM Chain with Custom Prompt​Next, let's set up a simple LLM chain but give it a custom prompt for blog post generation. Note that the custom prompt is parameterized and takes two inputs: context, which will be the documents fetched from the | https://python.langchain.com/docs/integrations/providers/vectara/vectara_text_generation |
f8b137a0ee94-4 | custom prompt is parameterized and takes two inputs: context, which will be the documents fetched from the vector search, and topic, which is given by the user.from langchain.chains import LLMChainprompt_template = """Use the context below to write a 400 word blog post about the topic below: Context: {context} Topic: {topic} Blog post:"""PROMPT = PromptTemplate(template=prompt_template, input_variables=["context", "topic"])llm = OpenAI(openai_api_key=os.environ["OPENAI_API_KEY"], temperature=0)chain = LLMChain(llm=llm, prompt=PROMPT)Generate Text​Finally, we write a function to apply our inputs to the chain. The function takes an input parameter topic. We find the documents in the vector index that correspond to that topic, and use them as additional context in our simple LLM chain.def generate_blog_post(topic): docs = search_index.similarity_search(topic, k=4) inputs = [{"context": doc.page_content, "topic": topic} for doc in docs] print(chain.apply(inputs))generate_blog_post("environment variables") [{'text': '\n\nEnvironment variables are a powerful tool for managing configuration settings in your applications. They allow you to store and access values from anywhere in your code, making it easier to keep your codebase organized and maintainable.\n\nHowever, there are times when you may want to use environment variables specifically for a single command. This is where shell variables come in. Shell variables are similar to environment variables, but they won\'t be exported to spawned commands. They are defined with the following syntax:\n\n```sh\nVAR_NAME=value\n```\n\nFor example, if you wanted to use a shell variable instead of an environment variable in a command, you could do something like | https://python.langchain.com/docs/integrations/providers/vectara/vectara_text_generation |
f8b137a0ee94-5 | wanted to use a shell variable instead of an environment variable in a command, you could do something like this:\n\n```sh\nVAR=hello && echo $VAR && deno eval "console.log(\'Deno: \' + Deno.env.get(\'VAR\'))"\n```\n\nThis would output the following:\n\n```\nhello\nDeno: undefined\n```\n\nShell variables can be useful when you want to re-use a value, but don\'t want it available in any spawned processes.\n\nAnother way to use environment variables is through pipelines. Pipelines provide a way to pipe the'}, {'text': '\n\nEnvironment variables are a great way to store and access sensitive information in your applications. They are also useful for configuring applications and managing different environments. In Deno, there are two ways to use environment variables: the built-in `Deno.env` and the `.env` file.\n\nThe `Deno.env` is a built-in feature of the Deno runtime that allows you to set and get environment variables. It has getter and setter methods that you can use to access and set environment variables. For example, you can set the `FIREBASE_API_KEY` and `FIREBASE_AUTH_DOMAIN` environment variables like this:\n\n```ts\nDeno.env.set("FIREBASE_API_KEY", "examplekey123");\nDeno.env.set("FIREBASE_AUTH_DOMAIN", "firebasedomain.com");\n\nconsole.log(Deno.env.get("FIREBASE_API_KEY")); // examplekey123\nconsole.log(Deno.env.get("FIREBASE_AUTH_DOMAIN")); // firebasedomain'}, {'text': "\n\nEnvironment variables are a powerful tool for managing configuration and settings in your applications. They allow you to store and access values that can be used in your code, and they can be set and changed without having to modify your code.\n\nIn Deno, | https://python.langchain.com/docs/integrations/providers/vectara/vectara_text_generation |
f8b137a0ee94-6 | and they can be set and changed without having to modify your code.\n\nIn Deno, environment variables are defined using the `export` command. For example, to set a variable called `VAR_NAME` to the value `value`, you would use the following command:\n\n```sh\nexport VAR_NAME=value\n```\n\nYou can then access the value of the environment variable in your code using the `Deno.env.get()` method. For example, if you wanted to log the value of the `VAR_NAME` variable, you could use the following code:\n\n```js\nconsole.log(Deno.env.get('VAR_NAME'));\n```\n\nYou can also set environment variables for a single command. To do this, you can list the environment variables before the command, like so:\n\n```\nVAR=hello VAR2=bye deno run main.ts\n```\n\nThis will set the environment variables `VAR` and `V"}, {'text': "\n\nEnvironment variables are a powerful tool for managing settings and configuration in your applications. They can be used to store information such as user preferences, application settings, and even passwords. In this blog post, we'll discuss how to make Deno scripts executable with a hashbang (shebang).\n\nA hashbang is a line of code that is placed at the beginning of a script. It tells the system which interpreter to use when running the script. In the case of Deno, the hashbang should be `#!/usr/bin/env -S deno run --allow-env`. This tells the system to use the Deno interpreter and to allow the script to access environment variables.\n\nOnce the hashbang is in place, you may need to give the script execution permissions. On Linux, this can be done with the command `sudo chmod +x hashbang.ts`. After that, you can execute the script by calling it like any other command: | https://python.langchain.com/docs/integrations/providers/vectara/vectara_text_generation |
f8b137a0ee94-7 | hashbang.ts`. After that, you can execute the script by calling it like any other command: `./hashbang.ts`.\n\nIn the example program, we give the context permission to access the environment variables and print the Deno installation path. This is done by using the `Deno.env.get()` function, which returns the value of the specified environment"}] | https://python.langchain.com/docs/integrations/providers/vectara/vectara_text_generation |
dd83da7d4d94-0 | Metal | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/metal |
dd83da7d4d94-2 | Metal. This page covers how to use Metal within LangChain. What is Metal? Metal is a managed retrieval & memory platform built for production. Easily index your data into Metal and run semantic search and retrieval on it. Quick start: get started by creating a Metal account. Then, you can easily take advantage of the MetalRetriever class to start retrieving your data for semantic search, prompting context, etc. This class takes a Metal instance and a dictionary of parameters to pass to the Metal API. from langchain.retrievers import MetalRetriever; from metal_sdk.metal import Metal; metal = Metal("API_KEY", "CLIENT_ID", "INDEX_ID"); retriever = MetalRetriever(metal, params={"limit": 2}); docs = retriever.get_relevant_documents("search term") | https://python.langchain.com/docs/integrations/providers/metal |
854978d5b471-0 | Confluence | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/confluence |
854978d5b471-2 | Confluence. Confluence is a wiki collaboration platform that saves and organizes all of the project-related material. Confluence is a knowledge base that primarily handles content management activities. Installation and Setup: pip install atlassian-python-api. We need to set up a username/api_key or OAuth2 login. | https://python.langchain.com/docs/integrations/providers/confluence |
854978d5b471-3 | See instructions.Document Loader​See a usage example.from langchain.document_loaders import ConfluenceLoaderPreviousCometNextC TransformersInstallation and SetupDocument LoaderCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/integrations/providers/confluence |
11ea2be7efc3-0 | Marqo | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/marqo |
11ea2be7efc3-2 | Marqo. This page covers how to use the Marqo ecosystem within LangChain. What is Marqo? Marqo is a tensor search engine that uses embeddings stored in in-memory HNSW indexes to achieve cutting-edge search speeds. Marqo can scale to hundred-million-document indexes with horizontal index sharding and allows for async and non-blocking data upload and search. Marqo uses the latest machine learning models from PyTorch, Hugging Face, OpenAI and more. You can start with a pre-configured model or bring your own. The built-in ONNX support and conversion allows for faster inference and higher throughput on both CPU and GPU. Because Marqo includes its own inference, your documents can have a mix of text and images, and you can bring Marqo indexes with data from your other systems into the LangChain ecosystem without having to worry about your embeddings being compatible. Deployment of Marqo is flexible: you can get started yourself with our Docker image or contact us about our managed cloud offering! To run Marqo locally with our Docker image, see our getting started guide. Installation and Setup: install the Python SDK with pip install marqo. Wrappers, VectorStore: there exists a wrapper around Marqo indexes, allowing you to use them within the vectorstore framework. Marqo lets you select from a range of models for generating embeddings and exposes some preprocessing configurations. The | https://python.langchain.com/docs/integrations/providers/marqo |
11ea2be7efc3-3 | Marqo lets you select from a range of models for generating embeddings and exposes some preprocessing configurations. The Marqo vectorstore can also work with existing multimodal indexes where your documents have a mix of images and text; for more information refer to our documentation. Note that instantiating the Marqo vectorstore with an existing multimodal index will disable the ability to add any new documents to it via the LangChain vectorstore add_texts method. To import this vectorstore: from langchain.vectorstores import Marqo. For a more detailed walkthrough of the Marqo wrapper and some of its unique features, see this notebook. | https://python.langchain.com/docs/integrations/providers/marqo |
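A minimal sketch of indexing and searching through the wrapper, assuming a local Marqo instance started from the Docker image; the index name is a hypothetical example and the wrapper is assumed to take a marqo.Client and an index name:

```python
import marqo
from langchain.vectorstores import Marqo

# Connect to a locally running Marqo instance (default Docker port).
client = marqo.Client(url="http://localhost:8882")

# "langchain-demo" is a hypothetical index name.
vectorstore = Marqo(client, index_name="langchain-demo")

# Marqo generates the embeddings itself, so no embedding model is passed.
vectorstore.add_texts(["to be or not to be", "that is the question"])
docs = vectorstore.similarity_search("existential questions")
```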
88ed83adbb67-0 | Zilliz | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/zilliz |
88ed83adbb67-2 | Zilliz. Zilliz Cloud is a fully managed cloud service for LF AI Milvus®. Installation and Setup: install the Python SDK: pip install pymilvus. Vectorstore: a wrapper around Zilliz indexes allows you to use it as a vectorstore, | https://python.langchain.com/docs/integrations/providers/zilliz |
88ed83adbb67-3 | whether for semantic search or example selection. from langchain.vectorstores import Milvus. For a more detailed walkthrough of the Milvus wrapper, see this notebook. | https://python.langchain.com/docs/integrations/providers/zilliz |
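A minimal sketch of pointing the Milvus wrapper at a Zilliz Cloud cluster; the URI and credentials are placeholders:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Milvus

# connection_args target a hosted Zilliz Cloud cluster rather than a
# local Milvus; secure=True enables TLS for the managed endpoint.
vector_db = Milvus.from_texts(
    ["to be or not to be", "that is the question"],
    OpenAIEmbeddings(),
    connection_args={
        "uri": "https://your-cluster.zillizcloud.com:19530",
        "user": "db_admin",
        "password": "your-password",
        "secure": True,
    },
)
docs = vector_db.similarity_search("existential questions")
```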
40b46baa049f-0 | Clarifai | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/clarifai |
40b46baa049f-2 | Clarifai. Clarifai is one of the first deep learning platforms, founded in 2013. Clarifai provides an AI platform with the full AI lifecycle for data exploration, data labeling, model training, evaluation and inference around images, video, text and audio data. In the LangChain ecosystem, as far as we're aware, Clarifai is the only provider that supports LLMs, embeddings and a vector store in one production-scale platform, making it an excellent choice to operationalize your LangChain implementations. Installation and Setup: install the Python SDK: pip install clarifai. Sign up for a Clarifai account, then get a personal access token to access the Clarifai API from your security settings and set it as an environment variable (CLARIFAI_PAT). Models: Clarifai provides thousands of AI models for many different use cases. You can explore them here to find the one most suited for your use case. These models include those created by other providers such as OpenAI, Anthropic, Cohere, AI21, etc., as well as state-of-the-art open source models such as Falcon, InstructorXL, etc., so that you can build the best AI into your products. You'll find these organized by the creator's user_id and into projects we call applications, denoted by their app_id. Those IDs will be needed in addition to the model_id and optionally the | https://python.langchain.com/docs/integrations/providers/clarifai |
40b46baa049f-3 | by their app_id. Those IDs will be needed in addition to the model_id and optionally the version_id, so make note of all these IDs once you have found the best model for your use case! Also note that, given there are many models for image, video, text and audio understanding, you can build some interesting AI agents that utilize the variety of AI models as experts to understand those data types. LLMs: to find the selection of LLMs in the Clarifai platform you can select the text-to-text model type here. from langchain.llms import Clarifai; llm = Clarifai(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID). For more details, the docs on the Clarifai LLM wrapper provide a detailed walkthrough. Text Embedding Models: to find the selection of text embedding models in the Clarifai platform you can select the text-to-embedding model type here. There is a Clarifai Embedding model in LangChain, which you can access with: from langchain.embeddings import ClarifaiEmbeddings; embeddings = ClarifaiEmbeddings(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID). For more details, the docs on the Clarifai Embeddings wrapper provide a detailed walkthrough. Vectorstore: Clarifai's vector DB was launched in 2016 and has been optimized to support live search queries. With workflows in the Clarifai platform, your data is automatically indexed by an embedding model, and optionally other models as well, to index that information in the DB for search. You can query the DB not only via the vectors but also filter by metadata matches, other AI-predicted concepts, and even do geo-coordinate search. Simply create an application, select the appropriate base workflow for your type of data, and upload it (through the | https://python.langchain.com/docs/integrations/providers/clarifai |
40b46baa049f-4 | an application, select the appropriate base workflow for your type of data, and upload it (through the API as documented here or the UIs at clarifai.com). You can also add data directly from LangChain, and the auto-indexing will take place for you. You'll notice this is a little different from other vectorstores, where you need to provide an embedding model in their constructor and have LangChain coordinate getting the embeddings from text and writing those to the index. Not only is it more convenient, but it's much more scalable to use Clarifai's distributed cloud to do all the indexing in the background. from langchain.vectorstores import Clarifai; clarifai_vector_db = Clarifai.from_texts(user_id=USER_ID, app_id=APP_ID, texts=texts, pat=CLARIFAI_PAT, number_of_docs=NUMBER_OF_DOCS, metadatas = metadatas). For more details, the docs on the Clarifai vector store provide a detailed walkthrough. | https://python.langchain.com/docs/integrations/providers/clarifai |
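A short sketch of querying the store created above; it continues the snippet's variables, and the query string is a placeholder:

```python
# Continue from the clarifai_vector_db built above; Clarifai computed
# the embeddings server-side during indexing, so no model is passed here.
docs = clarifai_vector_db.similarity_search("a question about your data")
for doc in docs:
    print(doc.page_content)
```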
f76084d55e39-0 | Microsoft PowerPoint | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/microsoft_powerpoint |
f76084d55e39-2 | Microsoft PowerPoint. Microsoft PowerPoint is a presentation program by Microsoft. Installation and Setup: there isn't any special setup for it. Document Loader: see a usage example. from langchain.document_loaders import UnstructuredPowerPointLoader | https://python.langchain.com/docs/integrations/providers/microsoft_powerpoint |
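A minimal sketch of the loader; the file path is a placeholder:

```python
from langchain.document_loaders import UnstructuredPowerPointLoader

# "example.pptx" is a placeholder path; the `unstructured` package
# performs the actual slide parsing.
loader = UnstructuredPowerPointLoader("example.pptx")
data = loader.load()  # a list of Document objects with the slide text
```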
b06826bd576d-0 | MyScale | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/myscale |
b06826bd576d-2 | MyScale. This page covers how to use the MyScale vector database within LangChain.
b06826bd576d-3 | It is broken into two parts: installation and setup, and then references to specific MyScale wrappers.With MyScale, you can manage both structured and unstructured (vectorized) data, and perform joint queries and analytics on both types of data using SQL. Plus, MyScale's cloud-native OLAP architecture, built on top of ClickHouse, enables lightning-fast data processing even on massive datasets.Introduction​Overview to MyScale and High performance vector searchYou can now register on our SaaS and start a cluster now!If you are also interested in how we managed to integrate SQL and vector, please refer to this document for further syntax reference.We also deliver with live demo on huggingface! Please checkout our huggingface space! They search millions of vector within a blink!Installation and Setup​Install the Python SDK with pip install clickhouse-connectSetting up environments​There are two ways to set up parameters for myscale index.Environment VariablesBefore you run the app, please set the environment variable with export:
export MYSCALE_HOST='<your-endpoints-url>' MYSCALE_PORT=<your-endpoints-port> MYSCALE_USERNAME=<your-username> MYSCALE_PASSWORD=<your-password> ...You can easily find your account, password and other info on our SaaS. For details please refer to this document | https://python.langchain.com/docs/integrations/providers/myscale |
b06826bd576d-4 | Every attributes under MyScaleSettings can be set with prefix MYSCALE_ and is case insensitive.Create MyScaleSettings object with parameters```pythonfrom langchain.vectorstores import MyScale, MyScaleSettingsconfig = MyScaleSetting(host="<your-backend-url>", port=8443, ...)index = MyScale(embedding_function, config)index.add_documents(...)```Wrappers​supported functions:add_textsadd_documentsfrom_textsfrom_documentssimilarity_searchasimilarity_searchsimilarity_search_by_vectorasimilarity_search_by_vectorsimilarity_search_with_relevance_scoresVectorStore​There exists a wrapper around MyScale database, allowing you to use it as a vectorstore,
whether for semantic search or similar example retrieval. To import this vectorstore: from langchain.vectorstores import MyScale. For a more detailed walkthrough of the MyScale wrapper, see this notebook. | https://python.langchain.com/docs/integrations/providers/myscale |
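A minimal sketch using the environment-variable configuration described above; the texts and query are placeholders:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MyScale

# Connection settings are read from the MYSCALE_* environment variables
# when no explicit MyScaleSettings config is passed.
index = MyScale.from_texts(
    ["to be or not to be", "that is the question"],
    OpenAIEmbeddings(),
)
docs = index.similarity_search("existential questions", k=4)
```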
d514df04d5fb-0 | scikit-learn | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/sklearn |
d514df04d5fb-2 | scikit-learn. scikit-learn is an open-source collection of machine learning algorithms,
d514df04d5fb-3 | including some implementations of the k nearest neighbors. SKLearnVectorStore wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format.Installation and Setup​Install the Python package with pip install scikit-learnVector Store​SKLearnVectorStore provides a simple wrapper around the nearest neighbor implementation in the
scikit-learn package, allowing you to use it as a vectorstore. To import this vectorstore: from langchain.vectorstores import SKLearnVectorStore. For a more detailed walkthrough of the SKLearnVectorStore wrapper, see this notebook. | https://python.langchain.com/docs/integrations/providers/sklearn
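As a sketch of the persistence feature mentioned above (the persist_path value is hypothetical, and the serializer/persist usage assumes the wrapper's documented interface; the Parquet format additionally needs pandas and pyarrow installed):
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import SKLearnVectorStore

embeddings = OpenAIEmbeddings()  # any Embeddings implementation should work here
store = SKLearnVectorStore.from_texts(
    ["to be or not to be", "that is the question"],
    embeddings,
    persist_path="./sklearn_store.parquet",  # hypothetical path
    serializer="parquet",  # "json" and "bson" are the other documented formats
)
store.persist()  # writes the vectors to disk so the store can be reloaded later
```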
78aa2075f533-0 | MLflow | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/providers/mlflow_tracking
78aa2075f533-2 | MLflow: This notebook goes over how to track your LangChain experiments into your MLflow Server.
pip install azureml-mlflow
pip install pandas
pip install textstat
pip install spacy
pip install openai
pip install google-search-results
python -m spacy download en_core_web_sm
```python
import os

os.environ["MLFLOW_TRACKING_URI"] = ""
os.environ["OPENAI_API_KEY"] = ""
os.environ["SERPAPI_API_KEY"] = ""

from langchain.callbacks import MlflowCallbackHandler
from langchain.llms import OpenAI

"""Main function.

This function is used to try the callback handler.
Scenarios:
1. OpenAI LLM
2. Chain with multiple SubChains on multiple generations
3. Agent with Tools
"""
mlflow_callback = MlflowCallbackHandler()
llm = OpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0,
    callbacks=[mlflow_callback],
    verbose=True,
)

# SCENARIO 1 - LLM
llm_result = llm.generate(["Tell me a joke"])
mlflow_callback.flush_tracker(llm)

from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# SCENARIO 2 - Chain
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=[mlflow_callback])
test_prompts = [
    {
        "title": "documentary about good video games that push the boundary of game design"
    },
]
synopsis_chain.apply(test_prompts)
mlflow_callback.flush_tracker(synopsis_chain)

from langchain.agents import initialize_agent, load_tools
from langchain.agents import AgentType

# SCENARIO 3 - Agent with Tools
tools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=[mlflow_callback])
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    callbacks=[mlflow_callback],
    verbose=True,
)
agent.run(
    "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
)
mlflow_callback.flush_tracker(agent, finish=True)
``` | https://python.langchain.com/docs/integrations/providers/mlflow_tracking
75f0ee41f7ad-0 | Chaindesk | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/providers/chaindesk
75f0ee41f7ad-2 | Chaindesk: Chaindesk is an open source document retrieval platform that helps to connect your personal data with Large Language Models.
Installation and Setup: We need to sign up for Chaindesk, create a datastore, add some data, and get your datastore API endpoint URL. | https://python.langchain.com/docs/integrations/providers/chaindesk
75f0ee41f7ad-3 | We need the API key. Retriever: See a usage example: from langchain.retrievers import ChaindeskRetriever | https://python.langchain.com/docs/integrations/providers/chaindesk
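A minimal retriever sketch, assuming ChaindeskRetriever takes the datastore endpoint URL and API key from the setup steps above (both values below are hypothetical placeholders):
```python
from langchain.retrievers import ChaindeskRetriever

retriever = ChaindeskRetriever(
    datastore_url="https://<your-datastore-id>.chaindesk.ai/query",  # hypothetical
    api_key="<your-api-key>",  # hypothetical; may be optional for public datastores
)
docs = retriever.get_relevant_documents("What is Chaindesk?")
```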
f9be61bda5a5-0 | Google Serper | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/providers/google_serper
f9be61bda5a5-2 | Google Serper: This page covers how to use the Serper Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search. | https://python.langchain.com/docs/integrations/providers/google_serper
f9be61bda5a5-3 | It is broken into two parts: setup, and then references to the specific Google Serper wrapper.
Setup: Go to serper.dev to sign up for a free account. Get the API key and set it as an environment variable (SERPER_API_KEY).
Wrappers: Utility: There exists a GoogleSerperAPIWrapper utility which wraps this API. To import this utility: from langchain.utilities import GoogleSerperAPIWrapper. You can use it as part of a Self Ask chain:
```python
import os

from langchain.utilities import GoogleSerperAPIWrapper
from langchain.llms.openai import OpenAI
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType

os.environ["SERPER_API_KEY"] = ""
os.environ["OPENAI_API_KEY"] = ""

llm = OpenAI(temperature=0)
search = GoogleSerperAPIWrapper()
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run,
        description="useful for when you need to ask with search",
    )
]
self_ask_with_search = initialize_agent(
    tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True
)
self_ask_with_search.run(
    "What is the hometown of the reigning men's U.S. Open champion?"
)
```
Output:
> Entering new AgentExecutor chain...
Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Current champions Carlos Alcaraz, 2022 men's singles champion.
Follow up: Where is Carlos Alcaraz from?
Intermediate answer: El Palmar, Spain
So the final answer is: El Palmar, Spain
> Finished chain.
'El Palmar, Spain'
For a more detailed walkthrough of this wrapper, see this notebook.
Tool: You can also easily load this wrapper as a Tool (to use with an Agent). You can do this with:
```python
from langchain.agents import load_tools

tools = load_tools(["google-serper"])
```
For more information on tools, see this page. | https://python.langchain.com/docs/integrations/providers/google_serper
79ff80563687-0 | Petals | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/providers/petals
79ff80563687-2 | Petals: This page covers how to use the Petals ecosystem within LangChain. | https://python.langchain.com/docs/integrations/providers/petals
79ff80563687-3 | It is broken into two parts: installation and setup, and then references to specific Petals wrappers.
Installation and Setup: Install with pip install petals. Get a Hugging Face API key and set it as an environment variable (HUGGINGFACE_API_KEY).
Wrappers: LLM: There exists a Petals LLM wrapper, which you can access with from langchain.llms import Petals | https://python.langchain.com/docs/integrations/providers/petals
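A minimal usage sketch, assuming HUGGINGFACE_API_KEY is set; the model name follows the Petals public-swarm example and may change over time:
```python
from langchain.llms import Petals

# "bigscience/bloom-petals" is an example model served on the public Petals swarm
llm = Petals(model_name="bigscience/bloom-petals")
print(llm("Tell me a joke"))
```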
bbccd67b2031-0 | MLflow AI Gateway | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/providers/mlflow_ai_gateway
bbccd67b2031-2 | MLflow AI Gateway: The MLflow AI Gateway service is a powerful tool designed to streamline the usage and management of various large language model (LLM) providers, such as OpenAI and Anthropic, within an organization. It offers a high-level interface that simplifies the interaction with these services by providing a unified endpoint to handle specific LLM-related requests. See the MLflow AI Gateway documentation for more details.
Installation and Setup: Install mlflow with MLflow AI Gateway dependencies: pip install 'mlflow[gateway]'
Set the OpenAI API key as an environment variable: export OPENAI_API_KEY=...
Create a configuration file:
```yaml
routes:
  - name: completions
    route_type: llm/v1/completions
    model:
      provider: openai
      name: text-davinci-003
      config:
        openai_api_key: $OPENAI_API_KEY
  - name: embeddings
    route_type: llm/v1/embeddings
    model:
      provider: openai
      name: text-embedding-ada-002
      config:
        openai_api_key: $OPENAI_API_KEY
```
Start the Gateway server: mlflow gateway start --config-path /path/to/config.yaml
Completions Example:
```python
import mlflow
from langchain import LLMChain, PromptTemplate
from langchain.llms import MlflowAIGateway

gateway = MlflowAIGateway(
    gateway_uri="http://127.0.0.1:5000",
    route="completions",
    params={
        "temperature": 0.0,
        "top_p": 0.1,
    },
)
llm_chain = LLMChain(
    llm=gateway,
    prompt=PromptTemplate(
        input_variables=["adjective"],
        template="Tell me a {adjective} joke",
    ),
)
result = llm_chain.run(adjective="funny")
print(result)

with mlflow.start_run():
    # Log the chain built above (the original snippet referenced an undefined name)
    model_info = mlflow.langchain.log_model(llm_chain, "model")

model = mlflow.pyfunc.load_model(model_info.model_uri)
print(model.predict([{"adjective": "funny"}]))
```
Embeddings Example:
```python
from langchain.embeddings import MlflowAIGatewayEmbeddings

embeddings = MlflowAIGatewayEmbeddings(
    gateway_uri="http://127.0.0.1:5000",
    route="embeddings",
)
print(embeddings.embed_query("hello"))
print(embeddings.embed_documents(["hello"]))
```
Chat Example:
```python
from langchain.chat_models import ChatMLflowAIGateway
from langchain.schema import HumanMessage, SystemMessage

chat = ChatMLflowAIGateway(
    gateway_uri="http://127.0.0.1:5000",
    route="chat",
    params={"temperature": 0.1},
)
messages = [
    SystemMessage(
        content="You are a helpful assistant that translates English to French."
    ),
    HumanMessage(
        content="Translate this sentence from English to French: I love programming."
    ),
]
print(chat(messages))
```
Databricks MLflow AI Gateway: Databricks MLflow AI Gateway is in private preview. Please contact a Databricks representative to enroll in the preview.
```python
from langchain import LLMChain, PromptTemplate
from langchain.llms import MlflowAIGateway

gateway = MlflowAIGateway(
    gateway_uri="databricks",
    route="completions",
)
llm_chain = LLMChain(
    llm=gateway,
    prompt=PromptTemplate(
        input_variables=["adjective"],
        template="Tell me a {adjective} joke",
    ),
)
result = llm_chain.run(adjective="funny")
print(result)
``` | https://python.langchain.com/docs/integrations/providers/mlflow_ai_gateway
28ea3acafca9-0 | AI21 Labs | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/providers/ai21
28ea3acafca9-2 | AI21 Labs: This page covers how to use the AI21 ecosystem within LangChain. | https://python.langchain.com/docs/integrations/providers/ai21
28ea3acafca9-3 | It is broken into two parts: installation and setup, and then references to specific AI21 wrappers.
Installation and Setup: Get an AI21 API key and set it as an environment variable (AI21_API_KEY).
Wrappers: LLM: There exists an AI21 LLM wrapper, which you can access with from langchain.llms import AI21 | https://python.langchain.com/docs/integrations/providers/ai21
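A minimal usage sketch, assuming AI21_API_KEY is set in the environment (the prompt and temperature are illustrative):
```python
from langchain.llms import AI21

llm = AI21(temperature=0.7)  # reads AI21_API_KEY from the environment
print(llm("Tell me a joke"))
```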
20453c6226a1-0 | Runhouse | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/providers/runhouse
20453c6226a1-2 | Runhouse: This page covers how to use the Runhouse ecosystem within LangChain. | https://python.langchain.com/docs/integrations/providers/runhouse
20453c6226a1-3 | It is broken into three parts: installation and setup, LLMs, and Embeddings.Installation and Setup​Install the Python SDK with pip install runhouseIf you'd like to use on-demand cluster, check your cloud credentials with sky checkSelf-hosted LLMs​For a basic self-hosted LLM, you can use the SelfHostedHuggingFaceLLM class. For more
custom LLMs, you can use the SelfHostedPipeline parent class.from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLMFor a more detailed walkthrough of the Self-hosted LLMs, see this notebookSelf-hosted Embeddings​There are several ways to use self-hosted embeddings with LangChain via Runhouse.For a basic self-hosted embedding from a Hugging Face Transformers model, you can use
the SelfHostedEmbedding class.from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLMFor a more detailed walkthrough of the Self-hosted Embeddings, see this notebookPreviousRocksetNextRWKV-4Installation and SetupSelf-hosted LLMsSelf-hosted EmbeddingsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/integrations/providers/runhouse |
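A sketch of the self-hosted LLM path, closely following the Runhouse pattern; the cluster name, instance type, and model are illustrative, and an on-demand cluster requires the cloud credentials checked with sky check above:
```python
import runhouse as rh
from langchain.llms import SelfHostedHuggingFaceLLM

# Illustrative on-demand GPU cluster; name and instance type are placeholders
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)

llm = SelfHostedHuggingFaceLLM(
    model_id="gpt2",  # small model chosen for a quick smoke test
    hardware=gpu,
    model_reqs=["transformers", "torch"],
)
print(llm("What is the capital of France?"))
```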
aac5aba29cf4-0 | Databricks | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/providers/databricks
aac5aba29cf4-2 | Databricks: This notebook covers how to connect to the Databricks runtimes and Databricks SQL using the SQLDatabase wrapper of LangChain. | https://python.langchain.com/docs/integrations/providers/databricks
aac5aba29cf4-3 | It is broken into 3 parts: installation and setup, connecting to Databricks, and examples.
Installation and Setup: pip install databricks-sql-connector
Connecting to Databricks: You can connect to Databricks runtimes and Databricks SQL using the SQLDatabase.from_databricks() method.
Syntax:
```python
SQLDatabase.from_databricks(
    catalog: str,
    schema: str,
    host: Optional[str] = None,
    api_token: Optional[str] = None,
    warehouse_id: Optional[str] = None,
    cluster_id: Optional[str] = None,
    engine_args: Optional[dict] = None,
    **kwargs: Any,
)
```
Required Parameters:
catalog: The catalog name in the Databricks database.
schema: The schema name in the catalog.
Optional Parameters:
The following parameters are optional. When executing the method in a Databricks notebook, you don't need to provide them in most cases.
host: The Databricks workspace hostname, excluding the 'https://' part. Defaults to the 'DATABRICKS_HOST' environment variable, or the current workspace if run in a Databricks notebook.
api_token: The Databricks personal access token for accessing the Databricks SQL warehouse or the cluster. Defaults to the 'DATABRICKS_TOKEN' environment variable, or a temporary one is generated if run in a Databricks notebook.
warehouse_id: The warehouse ID in Databricks SQL.
cluster_id: The cluster ID in the Databricks Runtime. If running in a Databricks notebook and both 'warehouse_id' and 'cluster_id' are None, it uses the ID of the cluster the notebook is attached to.
engine_args: The arguments to be used when connecting to Databricks.
**kwargs: Additional keyword arguments for the SQLDatabase.from_uri method.
Examples:
```python
# Connecting to Databricks with the SQLDatabase wrapper
from langchain import SQLDatabase

db = SQLDatabase.from_databricks(catalog="samples", schema="nyctaxi")

# Creating an OpenAI Chat LLM wrapper
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0, model_name="gpt-4")
```
SQL Chain example: This example demonstrates the use of the SQL Chain for answering a question over a Databricks database.
```python
from langchain import SQLDatabaseChain

db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
db_chain.run(
    "What is the average duration of taxi rides that start between midnight and 6am?"
)
```
Output:
> Entering new SQLDatabaseChain chain...
What is the average duration of taxi rides that start between midnight and 6am?
SQLQuery: SELECT AVG(UNIX_TIMESTAMP(tpep_dropoff_datetime) - UNIX_TIMESTAMP(tpep_pickup_datetime)) as avg_duration FROM trips WHERE HOUR(tpep_pickup_datetime) >= 0 AND HOUR(tpep_pickup_datetime) < 6
SQLResult: [(987.8122786304605,)]
Answer: The average duration of taxi rides that start between midnight and 6am is 987.81 seconds.
> Finished chain.
'The average duration of taxi rides that start between midnight and 6am is 987.81 seconds.'
SQL Database Agent example: This example demonstrates the use of the SQL Database Agent for answering questions over a Databricks database.
```python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit

toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
agent.run("What is the longest trip distance and how long did it take?")
```
Output:
> Entering new AgentExecutor chain...
Action: list_tables_sql_db
Action Input:
Observation: trips
Thought: I should check the schema of the trips table to see if it has the necessary columns for trip distance and duration.
Action: schema_sql_db
Action Input: trips
Observation:
CREATE TABLE trips (
    tpep_pickup_datetime TIMESTAMP,
    tpep_dropoff_datetime TIMESTAMP,
    trip_distance FLOAT,
    fare_amount FLOAT,
    pickup_zip INT,
    dropoff_zip INT
) USING DELTA
/*
3 rows from trips table:
tpep_pickup_datetime        tpep_dropoff_datetime       trip_distance  fare_amount  pickup_zip  dropoff_zip
2016-02-14 16:52:13+00:00   2016-02-14 17:16:04+00:00   4.94           19.0         10282       10171
2016-02-04 18:44:19+00:00   2016-02-04 18:46:00+00:00   0.28           3.5          10110       10110
2016-02-17 17:13:57+00:00   2016-02-17 17:17:55+00:00   0.7            5.0          10103       10023
*/
Thought: The trips table has the necessary columns for trip distance and duration. I will write a query to find the longest trip distance and its duration.
Action: query_checker_sql_db
Action Input: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1
Observation: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1
Thought: The query is correct. I will now execute it to find the longest trip distance and its duration.
Action: query_sql_db
Action Input: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1
Observation: [(30.6, '0 00:43:31.000000000')]
Thought: I now know the final answer.
Final Answer: The longest trip distance is 30.6 miles and it took 43 minutes and 31 seconds.
> Finished chain.
'The longest trip distance is 30.6 miles and it took 43 minutes and 31 seconds.' | https://python.langchain.com/docs/integrations/providers/databricks
7dacead2ab70-0 | OpenLLM | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/providers/openllm
7dacead2ab70-2 | OpenLLM: This page demonstrates how to use OpenLLM | https://python.langchain.com/docs/integrations/providers/openllm
7dacead2ab70-3 | with LangChain. OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps.
Installation and Setup: Install the OpenLLM package via PyPI: pip install openllm
LLM: OpenLLM supports a wide range of open-source LLMs as well as serving users' own fine-tuned LLMs. Use the openllm model command to see all available models that are pre-optimized for OpenLLM.
Wrappers: There is an OpenLLM wrapper which supports loading an LLM in-process or accessing a remote OpenLLM server: from langchain.llms import OpenLLM
Wrapper for OpenLLM server: This wrapper supports connecting to an OpenLLM server via HTTP or gRPC. The OpenLLM server can run either locally or on the cloud. To try it out locally, start an OpenLLM server: openllm start flan-t5
Wrapper usage:
```python
from langchain.llms import OpenLLM

llm = OpenLLM(server_url="http://localhost:3000")
llm("What is the difference between a duck and a goose? And why are there so many geese in Canada?")
```
Wrapper for Local Inference: You can also use the OpenLLM wrapper to load an LLM in the current Python process for running inference.
```python
from langchain.llms import OpenLLM

llm = OpenLLM(model_name="dolly-v2", model_id="databricks/dolly-v2-7b")
llm("What is the difference between a duck and a goose? And why are there so many geese in Canada?")
```
Usage: For a more detailed walkthrough of the OpenLLM wrapper, see the | https://python.langchain.com/docs/integrations/providers/openllm
7dacead2ab70-4 | example notebook. | https://python.langchain.com/docs/integrations/providers/openllm
1f7be2484794-0 | ModelScope | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/providers/modelscope