id (stringlengths 14–15) | text (stringlengths 23–2.21k) | source (stringlengths 52–97)
---|---|---
1f7be2484794-2 | ModelScope — This page covers how to use the modelscope ecosystem within LangChain. | https://python.langchain.com/docs/integrations/providers/modelscope |
1f7be2484794-3 | It is broken into two parts: installation and setup, and then references to specific modelscope wrappers. Installation and Setup: Install the Python SDK with pip install modelscope. Wrappers — Embeddings: There exists a modelscope Embeddings wrapper, which you can access with from langchain.embeddings import ModelScopeEmbeddings. For a more detailed walkthrough of this, see this notebook. | https://python.langchain.com/docs/integrations/providers/modelscope |
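A minimal usage sketch of the embeddings wrapper named above; the model_id is an illustrative assumption, not taken from the page:

```python
# The model_id is an assumed example; substitute any ModelScope embedding model.
from langchain.embeddings import ModelScopeEmbeddings

embeddings = ModelScopeEmbeddings(model_id="damo/nlp_corom_sentence-embedding_english-base")
query_vec = embeddings.embed_query("What is ModelScope?")
doc_vecs = embeddings.embed_documents(["ModelScope hosts models.", "LangChain wraps them."])
```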
2e6271471403-0 | Arthur | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/arthur_tracking |
2e6271471403-2 | Arthur is a model monitoring and observability platform. The following guide shows how to run a registered chat LLM with the Arthur callback handler to automatically log model inferences to Arthur. If you do not have a model currently onboarded to Arthur, visit our onboarding guide for generative text models. For more information about how to use the Arthur SDK, visit our docs. from langchain.callbacks import ArthurCallbackHandler; from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler; from langchain.chat_models import ChatOpenAI; from langchain.schema import HumanMessage. Place Arthur credentials here: arthur_url = "https://app.arthur.ai"; arthur_login = "your-arthur-login-username-here"; arthur_model_id = "your-arthur-model-id-here". Create a LangChain LLM with the Arthur callback handler: def make_langchain_chat_llm(): return ChatOpenAI(streaming=True, temperature=0.1, callbacks=[StreamingStdOutCallbackHandler(), ArthurCallbackHandler.from_credentials(arthur_model_id, arthur_url=arthur_url, | https://python.langchain.com/docs/integrations/providers/arthur_tracking |
2e6271471403-3 | arthur_login=arthur_login)]); chatgpt = make_langchain_chat_llm() (this prompts "Please enter password for admin:"). Running the chat LLM with this run function will save the chat history in an ongoing list so that the conversation can reference earlier messages and log each response to the Arthur platform. You can view the history of this model's inferences on your model dashboard page. Enter q to quit the run loop: def run(llm): history = []; while True: user_input = input("\n>>> input >>>\n>>>: "); if user_input == "q": break; history.append(HumanMessage(content=user_input)); history.append(llm(history)); then run(chatgpt). Sample session — >>>: What is a callback handler? A callback handler, also known as a callback function or callback method, is a piece of code that is executed in response to a specific event or condition. It is commonly used in programming languages that support event-driven or asynchronous programming paradigms. The purpose of a callback handler is to provide a way for developers to define custom behavior that should be executed when a certain event occurs. Instead of waiting for a result or blocking the execution, the program registers a callback function and continues with other tasks. When the event is triggered, the callback function is invoked, allowing the program to respond accordingly. | https://python.langchain.com/docs/integrations/providers/arthur_tracking |
2e6271471403-4 | Callback handlers are commonly used in various scenarios, such as handling user input, responding to network requests, processing asynchronous operations, and implementing event-driven architectures. They provide a flexible and modular way to handle events and decouple different components of a system. >>> input >>> >>>: What do I need to do to get the full benefits of this To get the full benefits of using a callback handler, you should consider the following: 1. Understand the event or condition: Identify the specific event or condition that you want to respond to with a callback handler. This could be user input, network requests, or any other asynchronous operation. 2. Define the callback function: Create a function that will be executed when the event or condition occurs. This function should contain the desired behavior or actions you want to take in response to the event. 3. Register the callback function: Depending on the programming language or framework you are using, you may need to register or attach the callback function to the appropriate event or condition. This ensures that the callback function is invoked when the event occurs. 4. Handle the callback: Implement the necessary logic within the callback function to handle the event or condition. This could involve updating the user interface, processing data, making further requests, or triggering other actions. 5. Consider error handling: It's important to handle any potential errors or exceptions that may occur within the callback function. This ensures that your program can gracefully handle unexpected situations and prevent crashes or undesired behavior. 6. Maintain code readability and modularity: As your codebase grows, it's crucial to keep your callback handlers organized and maintainable. Consider using design patterns or architectural principles to structure your code in a modular and scalable way. | https://python.langchain.com/docs/integrations/providers/arthur_tracking |
2e6271471403-5 | Consider using design patterns or architectural principles to structure your code in a modular and scalable way. By following these steps, you can leverage the benefits of callback handlers, such as asynchronous and event-driven programming, improved responsiveness, and modular code design. >>> input >>> >>>: q | https://python.langchain.com/docs/integrations/providers/arthur_tracking |
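For readability, here is the same Arthur example from the rows above consolidated into one runnable script (the credentials remain the placeholders from the page):

```python
from langchain.callbacks import ArthurCallbackHandler
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Placeholder Arthur credentials, as on the page
arthur_url = "https://app.arthur.ai"
arthur_login = "your-arthur-login-username-here"
arthur_model_id = "your-arthur-model-id-here"

def make_langchain_chat_llm():
    # Stream tokens to stdout and log every inference to Arthur
    return ChatOpenAI(
        streaming=True,
        temperature=0.1,
        callbacks=[
            StreamingStdOutCallbackHandler(),
            ArthurCallbackHandler.from_credentials(
                arthur_model_id, arthur_url=arthur_url, arthur_login=arthur_login
            ),
        ],
    )

def run(llm):
    # Keep an ongoing history so the conversation can reference earlier messages
    history = []
    while True:
        user_input = input("\n>>> input >>>\n>>>: ")
        if user_input == "q":  # enter q to quit the run loop
            break
        history.append(HumanMessage(content=user_input))
        history.append(llm(history))

run(make_langchain_chat_llm())
```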
6c6df17727d9-0 | Facebook Chat | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/facebook_chat |
6c6df17727d9-2 | Facebook Chat — Messenger is an American proprietary instant messaging app and | https://python.langchain.com/docs/integrations/providers/facebook_chat |
6c6df17727d9-3 | platform developed by Meta Platforms. Originally developed as Facebook Chat in 2008, the company revamped its
messaging service in 2010. Installation and Setup: First, you need to install the pandas Python package: pip install pandas. Document Loader — See a usage example: from langchain.document_loaders import FacebookChatLoader | https://python.langchain.com/docs/integrations/providers/facebook_chat |
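A minimal sketch of the loader above; the export path is an assumption (Facebook chat exports are JSON files under messages/inbox/):

```python
# Hypothetical path to a chat export downloaded from Facebook (JSON format)
from langchain.document_loaders import FacebookChatLoader

loader = FacebookChatLoader(path="./messages/inbox/some_chat/message_1.json")
docs = loader.load()
print(docs[0].page_content[:200])
```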
02af9f27f182-0 | Hugging Face | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/huggingface |
02af9f27f182-2 | Hugging Face — This page covers how to use the Hugging Face ecosystem (including the Hugging Face Hub) within LangChain. | https://python.langchain.com/docs/integrations/providers/huggingface |
02af9f27f182-3 | It is broken into two parts: installation and setup, and then references to specific Hugging Face wrappers. Installation and Setup: If you want to work with the Hugging Face Hub, install the Hub client library with pip install huggingface_hub, create a Hugging Face account (it's free!), then create an access token and set it as an environment variable (HUGGINGFACEHUB_API_TOKEN). If you want to work with the Hugging Face Python libraries: pip install transformers for working with models and tokenizers, and pip install datasets for working with datasets. Wrappers — LLM: There exist two Hugging Face LLM wrappers, one for a local pipeline and one for a model hosted on Hugging Face Hub.
Note that these wrappers only work for models that support the following tasks: text2text-generation, text-generation. To use the local pipeline wrapper: from langchain.llms import HuggingFacePipeline. To use the wrapper for a model hosted on Hugging Face Hub: from langchain.llms import HuggingFaceHub. For a more detailed walkthrough of the Hugging Face Hub wrapper, see this notebook. Embeddings: There exist two Hugging Face Embeddings wrappers, one for a local model and one for a model hosted on Hugging Face Hub.
Note that these wrappers only work for sentence-transformers models. To use the local pipeline wrapper: from langchain.embeddings import HuggingFaceEmbeddings. To use the wrapper for a model hosted on Hugging Face Hub: from langchain.embeddings import HuggingFaceHubEmbeddings. For a more detailed walkthrough of this, see this notebook. Tokenizer: There are several places you can use tokenizers available through the transformers package. | https://python.langchain.com/docs/integrations/providers/huggingface |
02af9f27f182-4 | By default, it is used to count tokens for all LLMs. You can also use it to count tokens when splitting documents: from langchain.text_splitter import CharacterTextSplitter; CharacterTextSplitter.from_huggingface_tokenizer(...). For a more detailed walkthrough of this, see this notebook. Datasets: The Hugging Face Hub has lots of great datasets that can be used to evaluate your LLM chains. For a detailed walkthrough of how to use them to do so, see this notebook. | https://python.langchain.com/docs/integrations/providers/huggingface |
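A brief sketch combining the wrappers named above; the model IDs are illustrative assumptions, not prescribed by the page:

```python
# Illustrative model IDs; substitute any text-generation / text2text-generation
# model (LLMs) or sentence-transformers model (embeddings).
from langchain.llms import HuggingFacePipeline, HuggingFaceHub
from langchain.embeddings import HuggingFaceEmbeddings

# Local pipeline wrapper (downloads the model via transformers)
local_llm = HuggingFacePipeline.from_model_id(model_id="gpt2", task="text-generation")

# Hub wrapper (requires HUGGINGFACEHUB_API_TOKEN in the environment)
hub_llm = HuggingFaceHub(repo_id="google/flan-t5-base")

# Local sentence-transformers embeddings
emb = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
print(local_llm("Hello, my name is"))
```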
1e3a42dafa16-0 | Alibaba Cloud Opensearch | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/alibabacloud_opensearch |
1e3a42dafa16-2 | Alibaba Cloud OpenSearch is a one-stop platform to develop intelligent search services. OpenSearch was built based on the large-scale distributed search engine developed by Alibaba. OpenSearch serves more than 500 business cases in Alibaba Group and thousands of Alibaba Cloud customers. OpenSearch helps develop search services in different search scenarios, including e-commerce, O2O, multimedia, the content industry, communities and forums, and big data query in enterprises. OpenSearch helps you develop high-quality, maintenance-free, and high-performance intelligent search services to provide your users with high search efficiency and accuracy. OpenSearch provides the vector search feature. In specific scenarios, especially test question search and image search scenarios, you can use the vector search feature together with the multimodal search feature to improve the accuracy of search results. This topic describes the syntax and usage notes of vector indexes. Purchase an instance and configure it: Purchase OpenSearch Vector Search Edition from Alibaba Cloud and configure the instance according to the help documentation. Alibaba Cloud OpenSearch Vector Store Wrappers — supported functions: add_texts, add_documents, from_texts, from_documents, similarity_search, asimilarity_search, similarity_search_by_vector, asimilarity_search_by_vector, similarity_search_with_relevance_scores. For a more detailed walkthrough of the Alibaba Cloud OpenSearch wrapper, see this notebook. If you encounter any problems during use, please feel free to contact | https://python.langchain.com/docs/integrations/providers/alibabacloud_opensearch |
1e3a42dafa16-3 | [email protected], and we will do our best to provide you with assistance and support. | https://python.langchain.com/docs/integrations/providers/alibabacloud_opensearch |
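A hedged sketch using a few of the supported functions listed above; every settings field shown is an assumption — consult the linked notebook for the exact configuration keys:

```python
# All endpoint/credential values are placeholders, and the settings fields are
# assumptions about the wrapper's configuration -- verify against the notebook.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import AlibabaCloudOpenSearch, AlibabaCloudOpenSearchSettings

settings = AlibabaCloudOpenSearchSettings(
    endpoint="<instance endpoint>",
    instance_id="<instance id>",
    username="<username>",
    password="<password>",
    datasource_name="<data source name>",
    embedding_index_name="<vector index name>",
    field_name_mapping={"id": "id", "document": "document", "embedding": "embedding"},
)
store = AlibabaCloudOpenSearch.from_texts(
    texts=["hello world"], embedding=OpenAIEmbeddings(), config=settings
)
docs = store.similarity_search("hello")
```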
e4afa2b47bc4-0 | Slack | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/slack |
e4afa2b47bc4-2 | Slack — Slack is an instant messaging program. Installation and Setup: There isn't any special setup for it. Document Loader — See a usage example: from langchain.document_loaders import SlackDirectoryLoader | https://python.langchain.com/docs/integrations/providers/slack |
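A minimal sketch of the loader above; the export zip path and workspace URL are assumptions:

```python
# Load a Slack workspace export; the path and URL below are placeholders.
from langchain.document_loaders import SlackDirectoryLoader

loader = SlackDirectoryLoader(
    zip_path="./slack_export.zip",
    workspace_url="https://myworkspace.slack.com",  # used to build message links
)
docs = loader.load()
```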
1d51f606763b-0 | AWS S3 Directory | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/aws_s3 |
1d51f606763b-2 | AWS S3 Directory — Amazon Simple Storage Service (Amazon S3) is an object storage service. This page covers the AWS S3 Directory and AWS S3 Buckets loaders. Installation and Setup: pip install boto3. Document Loader — See usage examples for S3DirectoryLoader and S3FileLoader: from langchain.document_loaders import S3DirectoryLoader, S3FileLoader | https://python.langchain.com/docs/integrations/providers/aws_s3 |
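A minimal sketch of the two loaders above; bucket and key names are placeholders:

```python
# boto3 resolves AWS credentials from the environment / shared config as usual.
from langchain.document_loaders import S3DirectoryLoader, S3FileLoader

dir_loader = S3DirectoryLoader("my-bucket", prefix="reports/")
file_loader = S3FileLoader("my-bucket", "reports/q1.txt")
docs = dir_loader.load() + file_loader.load()
```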
3b409de490bb-0 | SearxNG Search API | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/searx |
3b409de490bb-2 | SearxNG Search API — This page covers how to use the SearxNG search API within LangChain. | https://python.langchain.com/docs/integrations/providers/searx |
3b409de490bb-3 | It is broken into two parts: installation and setup, and then references to the specific SearxNG API wrapper.Installation and Setup​While it is possible to utilize the wrapper in conjunction with public searx
instances these instances frequently do not permit API
access (see note on output format below) and have limitations on the frequency
of requests. It is recommended to opt for a self-hosted instance instead.Self Hosted Instance:​See this page for installation instructions.When you install SearxNG, the only active output format by default is the HTML format. | https://python.langchain.com/docs/integrations/providers/searx |
3b409de490bb-4 | You need to activate the json format to use the API. This can be done by adding the following lines to the settings.yml file: search: formats: [html, json]. You can make sure that the API is working by issuing a curl request to the API endpoint: curl -kLX GET --data-urlencode q='langchain' -d format=json http://localhost:8888. This should return a JSON object with the results. Wrappers — Utility: To use the wrapper, we need to pass the host of the SearxNG instance to it, either via the named parameter searx_host when creating the instance, or by exporting the environment variable SEARXNG_HOST. You can use the wrapper to get results from a SearxNG instance: from langchain.utilities import SearxSearchWrapper; s = SearxSearchWrapper(searx_host="http://localhost:8888"); s.run("what is a large language model?"). Tool: You can also load this wrapper as a Tool (to use with an Agent): from langchain.agents import load_tools; tools = load_tools(["searx-search"], searx_host="http://localhost:8888", engines=["github"]). Note that we could optionally pass custom engines to use. If you want to obtain results with metadata as json you can use: tools = load_tools(["searx-search-results-json"], searx_host="http://localhost:8888", num_results=5). Quickly creating tools: This example showcases a quick way to create multiple tools from the same | https://python.langchain.com/docs/integrations/providers/searx |
3b409de490bb-5 | wrapper: from langchain.tools.searx_search.tool import SearxSearchResults; wrapper = SearxSearchWrapper(searx_host="**"); github_tool = SearxSearchResults(name="Github", wrapper=wrapper, kwargs={"engines": ["github"]}); arxiv_tool = SearxSearchResults(name="Arxiv", wrapper=wrapper, kwargs={"engines": ["arxiv"]}). For more information on tools, see this page. | https://python.langchain.com/docs/integrations/providers/searx |
ac956373ed42-0 | Telegram | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/telegram |
ac956373ed42-2 | Telegram — Telegram Messenger is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features. Installation and Setup: See setup instructions. Document Loader — See a usage example: from langchain.document_loaders import TelegramChatFileLoader; from langchain.document_loaders import TelegramChatApiLoader | https://python.langchain.com/docs/integrations/providers/telegram |
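A minimal sketch of the file-based loader above; the path is a placeholder for a chat history exported from Telegram Desktop (JSON):

```python
# The path below is an assumed example export location.
from langchain.document_loaders import TelegramChatFileLoader

loader = TelegramChatFileLoader("./telegram_chat.json")
docs = loader.load()
```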
dcafba31856a-0 | Google Cloud Storage | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/google_cloud_storage |
dcafba31856a-2 | Google Cloud Storage — Google Cloud Storage is a managed service for storing unstructured data. Installation and Setup: First, you need to install the google-cloud-storage Python package: pip install google-cloud-storage. Document Loader — There are two loaders for Google Cloud Storage: the Directory and the File loaders. See a usage example: from langchain.document_loaders import GCSDirectoryLoader. See a usage example: from langchain.document_loaders import GCSFileLoader | https://python.langchain.com/docs/integrations/providers/google_cloud_storage |
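A minimal sketch of the two loaders above; project, bucket, and blob names are placeholders:

```python
from langchain.document_loaders import GCSDirectoryLoader, GCSFileLoader

dir_loader = GCSDirectoryLoader(project_name="my-project", bucket="my-bucket")
file_loader = GCSFileLoader(project_name="my-project", bucket="my-bucket", blob="docs/readme.txt")
docs = dir_loader.load()
```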
0ab31927fa96-0 | Hacker News | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/hacker_news |
0ab31927fa96-2 | Hacker News — Hacker News (sometimes abbreviated as HN) is a social news | https://python.langchain.com/docs/integrations/providers/hacker_news |
0ab31927fa96-3 | website focusing on computer science and entrepreneurship. It is run by the investment fund and startup
incubator Y Combinator. In general, content that can be submitted is defined as "anything that gratifies
one's intellectual curiosity." Installation and Setup: There isn't any special setup for it. Document Loader — See a usage example: from langchain.document_loaders import HNLoader | https://python.langchain.com/docs/integrations/providers/hacker_news |
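A minimal sketch of the loader above; the item URL is illustrative:

```python
# HNLoader fetches and parses a Hacker News page by URL.
from langchain.document_loaders import HNLoader

loader = HNLoader("https://news.ycombinator.com/item?id=34817881")
docs = loader.load()
```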
73e01fccfbf9-0 | Prediction Guard | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/predictionguard |
73e01fccfbf9-2 | Prediction Guard — This page covers how to use the Prediction Guard ecosystem within LangChain. | https://python.langchain.com/docs/integrations/providers/predictionguard |
73e01fccfbf9-3 | It is broken into two parts: installation and setup, and then references to specific Prediction Guard wrappers. Installation and Setup: Install the Python SDK with pip install predictionguard. Get a Prediction Guard access token (as described here) and set it as an environment variable (PREDICTIONGUARD_TOKEN). LLM Wrapper: There exists a Prediction Guard LLM wrapper, which you can access with from langchain.llms import PredictionGuard. You can provide the name of the Prediction Guard model as an argument when initializing the LLM: pgllm = PredictionGuard(model="MPT-7B-Instruct"). You can also provide your access token directly as an argument: pgllm = PredictionGuard(model="MPT-7B-Instruct", token="<your access token>"). Finally, you can provide an "output" argument that is used to structure/control the output of the LLM: pgllm = PredictionGuard(model="MPT-7B-Instruct", output={"type": "boolean"}). Example usage — Basic usage of the controlled or guarded LLM wrapper: import os; import predictionguard as pg; from langchain.llms import PredictionGuard; from langchain import PromptTemplate, LLMChain. # Your Prediction Guard API key. Get one at predictionguard.com: os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>". # Define a prompt template: template = """Respond to the following query based on the context. Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! We have officially added TWO new candle subscription box options! 📦 Exclusive Candle Box - $80, Monthly Candle Box - $45 (NEW!), Scent of The Month Box - $28 (NEW!). Head to stories to get ALLL the deets on each box! 👆 BONUS: Save 50% | https://python.langchain.com/docs/integrations/providers/predictionguard |
73e01fccfbf9-4 | on your first box with code 50OFF! Query: {query} Result: """; prompt = PromptTemplate(template=template, input_variables=["query"]). # With "guarding" or controlling the output of the LLM. See the Prediction Guard docs (https://docs.predictionguard.com) to learn how to control the output with integer, float, boolean, JSON, and other types and structures: pgllm = PredictionGuard(model="MPT-7B-Instruct", output={"type": "categorical", "categories": ["product announcement", "apology", "relational"]}) | https://python.langchain.com/docs/integrations/providers/predictionguard |
73e01fccfbf9-5 | pgllm(prompt.format(query="What kind of post is this?")). Basic LLM chaining with the Prediction Guard wrapper: import os; from langchain import PromptTemplate, LLMChain; from langchain.llms import PredictionGuard. # Optional: add your OpenAI API key. This is optional, as Prediction Guard allows you to access all the latest open-access models (see https://docs.predictionguard.com): os.environ["OPENAI_API_KEY"] = "<your OpenAI api key>". # Your Prediction Guard API key. Get one at predictionguard.com: os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>". pgllm = PredictionGuard(model="OpenAI-text-davinci-003"); template = """Question: {question} Answer: Let's think step by step."""; prompt = PromptTemplate(template=template, input_variables=["question"]); llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True); question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"; llm_chain.predict(question=question) | https://python.langchain.com/docs/integrations/providers/predictionguard |
2e1384b5b08b-0 | Beam | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/beam |
2e1384b5b08b-2 | Beam — This page covers how to use Beam within LangChain. | https://python.langchain.com/docs/integrations/providers/beam |
2e1384b5b08b-3 | It is broken into two parts: installation and setup, and then references to specific Beam wrappers. Installation and Setup: Create an account. Install the Beam CLI with curl https://raw.githubusercontent.com/slai-labs/get-beam/main/get-beam.sh -sSfL | sh. Register API keys with beam configure. Set the environment variables BEAM_CLIENT_ID and BEAM_CLIENT_SECRET. Install the Beam SDK: pip install beam-sdk. Wrappers — LLM: There exists a Beam LLM wrapper, which you can access with from langchain.llms.beam import Beam. Define your Beam app: This is the environment you'll be developing against once you start the app. | https://python.langchain.com/docs/integrations/providers/beam |
It's also used to define the maximum response length from the model: llm = Beam(model_name="gpt2", name="langchain-gpt2-test", cpu=8, memory="32Gi", gpu="A10G", python_version="python3.8", python_packages=["diffusers[torch]>=0.10", "transformers", "torch", "pillow", "accelerate", "safetensors", "xformers"], max_length="50", verbose=False). Deploy your Beam app: Once defined, you can deploy your Beam app by calling your model's _deploy() method: llm._deploy(). Call your Beam app: Once a Beam model is deployed, it can be called by calling your model's _call() method. | https://python.langchain.com/docs/integrations/providers/beam |
2e1384b5b08b-4 | This returns the GPT2 text response to your prompt: response = llm._call("Running machine learning on a remote GPU"). An example script which deploys the model and calls it would be: from langchain.llms.beam import Beam; import time; llm = Beam(model_name="gpt2", name="langchain-gpt2-test", cpu=8, memory="32Gi", gpu="A10G", python_version="python3.8", python_packages=["diffusers[torch]>=0.10", "transformers", "torch", "pillow", "accelerate", "safetensors", "xformers"], max_length="50", verbose=False); llm._deploy(); response = llm._call("Running machine learning on a remote GPU"); print(response) | https://python.langchain.com/docs/integrations/providers/beam |
55dd16dd25a2-0 | WhatsApp | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/whatsapp |
55dd16dd25a2-2 | WhatsApp — WhatsApp (also called WhatsApp Messenger) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content. Installation and Setup: There isn't any special setup for it. Document Loader — See a usage example: from langchain.document_loaders import WhatsAppChatLoader | https://python.langchain.com/docs/integrations/providers/whatsapp |
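A minimal sketch of the loader above; the path is a placeholder for a chat exported from WhatsApp ("Export chat" produces a .txt file):

```python
from langchain.document_loaders import WhatsAppChatLoader

loader = WhatsAppChatLoader("./whatsapp_chat.txt")
docs = loader.load()
```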
6b8d884cc266-0 | Yeager.ai | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/yeagerai |
6b8d884cc266-2 | Yeager.ai — This page covers how to use Yeager.ai to generate LangChain tools and agents. What is Yeager.ai? Yeager.ai is an ecosystem designed to simplify the process of creating AI agents and tools. It features yAgents, a no-code LangChain agent builder, which enables users to build, test, and deploy AI solutions with ease. Leveraging the LangChain framework, yAgents allows seamless integration with various language models and resources, making it suitable for developers, researchers, and AI enthusiasts across diverse applications. yAgents: a low-code generative agent designed to help you build, prototype, and deploy LangChain tools with ease. How to use? pip install yeagerai-agent, then run yeagerai-agent and go to http://127.0.0.1:7860. This will install the necessary dependencies and set up yAgents on your system. After the first run, yAgents will create a .env file where you can input your OpenAI API key (OPENAI_API_KEY=<your_openai_api_key_here>); you can do the same directly from the Gradio interface under the "Settings" tab. We recommend using GPT-4; however, the tool can also work with GPT-3 if the problem is broken down sufficiently. Creating and Executing Tools with yAgents: yAgents makes it easy to create and execute AI-powered tools. | https://python.langchain.com/docs/integrations/providers/yeagerai |
6b8d884cc266-3 | Here's a brief overview of the process. Create a tool: To create a tool, provide a natural language prompt to yAgents. The prompt should clearly describe the tool's purpose and functionality. For example: | https://python.langchain.com/docs/integrations/providers/yeagerai |
6b8d884cc266-4 | "create a tool that returns the n-th prime number". Load the tool into the toolkit: To load a tool into yAgents, simply provide a command to yAgents that says so. For example:
"load the tool that you just created into your toolkit". Execute the tool: To run a tool or agent, simply provide a command to yAgents that includes the name of the tool and any required parameters. For example:
"generate the 50th prime number". You can see a video of how it works here. As you become more familiar with yAgents, you can create more advanced tools and agents to automate your work and enhance your productivity. For more information, see yAgents' GitHub or our docs. | https://python.langchain.com/docs/integrations/providers/yeagerai |
210174bff40f-0 | StochasticAI | 🦜🔗 LangChain | https://python.langchain.com/docs/integrations/providers/stochasticai |
210174bff40f-2 | StochasticAIThis page covers how to use the StochasticAI ecosystem within LangChain. | https://python.langchain.com/docs/integrations/providers/stochasticai
210174bff40f-3 | It is broken into two parts: installation and setup, and then references to specific StochasticAI wrappers.Installation and Setup​Install with pip install stochasticxGet a StochasticAI API key and set it as an environment variable (STOCHASTICAI_API_KEY)Wrappers​LLM​There exists a StochasticAI LLM wrapper, which you can access with from langchain.llms import StochasticAI | https://python.langchain.com/docs/integrations/providers/stochasticai
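A minimal usage sketch, assuming STOCHASTICAI_API_KEY is set and you have a model deployed on StochasticAI; the endpoint URL and prompt below are placeholders, not real defaults:
```python
# Hedged sketch: api_url must point at your own model deployment on
# StochasticAI; the prompt is purely illustrative.
from langchain.llms import StochasticAI

llm = StochasticAI(api_url="<your-deployed-model-endpoint>")
print(llm("Write a one-line summary of what an LLM wrapper does."))
```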
0503bcc0bb0d-0 | RWKV-4 | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/providers/rwkv
0503bcc0bb0d-2 | RWKV-4This page covers how to use the RWKV-4 wrapper within LangChain. | https://python.langchain.com/docs/integrations/providers/rwkv
0503bcc0bb0d-3 | It is broken into two parts: installation and setup, and then usage with an example.Installation and Setup​Install the Python package with pip install rwkvInstall the tokenizer Python package with pip install tokenizerDownload an RWKV model and place it in your desired directoryDownload the tokens fileUsage​RWKV​To use the RWKV wrapper, you need to provide the path to the pre-trained model file and the tokenizer's configuration.
# Test the model
from langchain.llms import RWKV

def generate_prompt(instruction, input=None):
    if input:
        return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

# Instruction:
{instruction}

# Input:
{input}

# Response:
"""
    else:
        return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

# Instruction:
{instruction}

# Response:
"""

model = RWKV(model="./models/RWKV-4-Raven-3B-v7-Eng-20230404-ctx4096.pth", strategy="cpu fp32", tokens_path="./rwkv/20B_tokenizer.json")
response = model(generate_prompt("Once upon a time, "))
Model File​You can find links to model file downloads at the RWKV-4-Raven repository.RWKV-4 models -> recommended VRAM​
Model | 8bit  | bf16/fp16 | fp32
14B   | 16GB  | 28GB      | >50GB
7B    | 8GB   | 14GB      | 28GB
3B    | 2.8GB | 6GB       | 12GB
1b5   | 1.3GB | 3GB       | 6GB | https://python.langchain.com/docs/integrations/providers/rwkv
0503bcc0bb0d-4 | See the rwkv pip page for more information about strategies, including streaming and CUDA support. | https://python.langchain.com/docs/integrations/providers/rwkv
6c2e2b598e19-0 | Argilla | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/providers/argilla
6c2e2b598e19-2 | ArgillaArgilla is an open-source data curation platform for LLMs. | https://python.langchain.com/docs/integrations/providers/argilla
6c2e2b598e19-3 | Using Argilla, everyone can build robust language models through faster data curation
using both human and machine feedback. We provide support for each step in the MLOps cycle,
from data labeling to model monitoring.Installation and Setup​First, you'll need to install the argilla Python package as follows:pip install argilla --upgradeIf you already have an Argilla Server running, you're good to go; if you don't,
refer to the Argilla 🚀 Quickstart to deploy Argilla either on Hugging Face Spaces, locally, or on a server.Tracking​See a usage example of ArgillaCallbackHandler.from langchain.callbacks import ArgillaCallbackHandler | https://python.langchain.com/docs/integrations/providers/argilla
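A minimal sketch of attaching the callback to an LLM call, assuming a reachable Argilla server and an existing dataset; every name and credential below is a placeholder:
```python
# Hedged sketch: assumes an Argilla server is running and that a dataset
# named "langchain-dataset" already exists in it. Replace the placeholder
# URL and API key with your own.
from langchain.callbacks import ArgillaCallbackHandler
from langchain.llms import OpenAI

argilla_callback = ArgillaCallbackHandler(
    dataset_name="langchain-dataset",   # hypothetical dataset name
    api_url="http://localhost:6900",    # your Argilla server URL
    api_key="argilla.apikey",           # your Argilla API key
)
llm = OpenAI(temperature=0.7, callbacks=[argilla_callback])
llm("Why does data curation matter for LLMs?")
```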
7488686d174b-0 | Wikipedia | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/providers/wikipedia
7488686d174b-2 | WikipediaWikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.Installation and Setup​pip install wikipediaDocument Loader​See a usage example.from langchain.document_loaders import WikipediaLoaderRetriever​See a usage example.from langchain.retrievers import WikipediaRetriever | https://python.langchain.com/docs/integrations/providers/wikipedia
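A minimal sketch of both integration points; the query and load_max_docs values are illustrative:
```python
# Hedged sketch: assumes the wikipedia package is installed.
from langchain.document_loaders import WikipediaLoader
from langchain.retrievers import WikipediaRetriever

# Load up to two matching articles as LangChain Documents.
docs = WikipediaLoader(query="Alan Turing", load_max_docs=2).load()
print(docs[0].metadata["title"])

# Or fetch relevant pages on demand through the retriever interface.
retriever = WikipediaRetriever()
results = retriever.get_relevant_documents("Alan Turing")
```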
f8dc4fa869a9-0 | Grobid | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/providers/grobid
f8dc4fa869a9-2 | GrobidThis page covers how to use Grobid to parse articles for LangChain. | https://python.langchain.com/docs/integrations/providers/grobid
f8dc4fa869a9-3 | It is separated into two parts: installation, and running the server.Installation and Setup​# Ensure you have Java installed
!apt-get install -y openjdk-11-jdk -q
!update-alternatives --set java /usr/lib/jvm/java-11-openjdk-amd64/bin/java
# Clone and install the Grobid repo
import os
!git clone https://github.com/kermitt2/grobid.git
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-11-openjdk-amd64"
os.chdir('grobid')
!./gradlew clean install
# Run the server in the background
get_ipython().system_raw('nohup ./gradlew run > grobid.log 2>&1 &')You can now use the GrobidParser to produce documents.from langchain.document_loaders.parsers import GrobidParserfrom langchain.document_loaders.generic import GenericLoader# Produce chunks from article paragraphsloader = GenericLoader.from_filesystem( "/Users/31treehaus/Desktop/Papers/", glob="*", suffixes=[".pdf"], parser=GrobidParser(segment_sentences=False))docs = loader.load()# Produce chunks from article sentencesloader = GenericLoader.from_filesystem( "/Users/31treehaus/Desktop/Papers/", glob="*", suffixes=[".pdf"], parser=GrobidParser(segment_sentences=True))docs = loader.load()Chunk metadata will include bounding boxes; these are a bit tricky to parse, see https://grobid.readthedocs.io/en/latest/Coordinates-in-PDF/ | https://python.langchain.com/docs/integrations/providers/grobid
f65c8491ec0e-0 | Predibase | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/providers/predibase
f65c8491ec0e-2 | PredibaseLearn how to use LangChain with models on Predibase.Setup​Create a Predibase account and API key.Install the Predibase Python client with pip install predibaseUse your API key to authenticateLLM​Predibase integrates with LangChain by implementing the LLM module. You can see a short example below or a full notebook under LLM > Integrations > Predibase.import osos.environ["PREDIBASE_API_TOKEN"] = "{PREDIBASE_API_TOKEN}"from langchain.llms import Predibasemodel = Predibase(model="vicuna-13b", predibase_api_key=os.environ.get("PREDIBASE_API_TOKEN"))response = model("Can you recommend me a nice dry wine?")print(response) | https://python.langchain.com/docs/integrations/providers/predibase
c3835e1f165a-0 | Ray Serve | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/providers/ray_serve
c3835e1f165a-2 | Ray ServeRay Serve is a scalable model serving library for building online inference APIs. Serve is particularly well suited for system composition, enabling you to build a complex inference service consisting of multiple chains and business logic all in Python code.Goal of this notebook​This notebook shows a simple example of how to deploy an OpenAI chain into production. You can extend it to deploy your own self-hosted models, where you can easily define the amount of hardware resources (GPUs and CPUs) needed to run your model in production efficiently. Read more about available options, including autoscaling, in the Ray Serve documentation.Setup Ray Serve​Install ray with pip install ray[serve].General Skeleton​The general skeleton for deploying a service is the following:# 0: Import ray serve and request from starlettefrom ray import servefrom starlette.requests import Request# 1: Define a Ray Serve deployment@serve.deploymentclass LLMServe:    def __init__(self) -> None:        # All the initialization code goes here        pass    async def __call__(self, request: Request) -> str:        # You can parse the request here        # and return a response        return "Hello World" | https://python.langchain.com/docs/integrations/providers/ray_serve
c3835e1f165a-3 | # 2: Bind the model to the deploymentdeployment = LLMServe.bind()# 3: Run the deploymentserve.api.run(deployment)# Shutdown the deploymentserve.api.shutdown()Example of deploying an OpenAI chain with custom prompts​Get an OpenAI API key from here. By running the following code, you will be asked to provide your API key.from langchain.llms import OpenAIfrom langchain import PromptTemplate, LLMChainfrom getpass import getpassOPENAI_API_KEY = getpass()@serve.deploymentclass DeployLLM:    def __init__(self):        # We initialize the LLM, template and the chain here        llm = OpenAI(openai_api_key=OPENAI_API_KEY)        template = "Question: {question}\n\nAnswer: Let's think step by step."        prompt = PromptTemplate(template=template, input_variables=["question"])        self.chain = LLMChain(llm=llm, prompt=prompt)    def _run_chain(self, text: str):        return self.chain(text)    async def __call__(self, request: Request):        # 1. Parse the request        text = request.query_params["text"]        # 2. Run the chain        resp = self._run_chain(text)        # 3. Return the response        return resp["text"]Now we can bind the deployment.# Bind the model to the deploymentdeployment = DeployLLM.bind()We can assign the port number and host when we want to run the deployment. | https://python.langchain.com/docs/integrations/providers/ray_serve
c3835e1f165a-4 | the port number and host when we want to run the deployment. # Example port numberPORT_NUMBER = 8282# Run the deploymentserve.api.run(deployment, port=PORT_NUMBER)Now that service is deployed on port localhost:8282 we can send a post request to get the results back.import requeststext = "What NFL team won the Super Bowl in the year Justin Beiber was born?"response = requests.post(f"http://localhost:{PORT_NUMBER}/?text={text}")print(response.content.decode())PreviousQdrantNextRebuffGoal of this notebookSetup Ray ServeGeneral SkeletonExample of deploying and OpenAI chain with custom promptsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/integrations/providers/ray_serve |
0ab66dd386c3-0 | Comet | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/providers/comet_tracking
0ab66dd386c3-2 | CometIn this guide we will demonstrate how to track your LangChain experiments, evaluation metrics, and LLM sessions with Comet.Example Project: Comet with LangChainInstall Comet and Dependencies​import sys
!{sys.executable} -m spacy download en_core_web_smInitialize Comet and Set your Credentials​You can grab your Comet API Key here or click the link after initializing Comet.import comet_mlcomet_ml.init(project_name="comet-example-langchain")Set OpenAI and SerpAPI credentials​You will need an OpenAI API Key and a SerpAPI API Key to run the following examples.import osos.environ["OPENAI_API_KEY"] = "..."# os.environ["OPENAI_ORGANIZATION"] = "..."os.environ["SERPAPI_API_KEY"] = "..."Scenario 1: Using just an LLM​from datetime import datetimefrom langchain.callbacks import CometCallbackHandler, StdOutCallbackHandlerfrom langchain.llms import OpenAIcomet_callback = CometCallbackHandler(    project_name="comet-example-langchain",    complexity_metrics=True,    stream_logs=True,    tags=["llm"],    visualizations=["dep"],)callbacks = [StdOutCallbackHandler(), comet_callback]llm = OpenAI(temperature=0.9, callbacks=callbacks, verbose=True) | https://python.langchain.com/docs/integrations/providers/comet_tracking
0ab66dd386c3-3 | callbacks=callbacks, verbose=True)llm_result = llm.generate(["Tell me a joke", "Tell me a poem", "Tell me a fact"] * 3)print("LLM result", llm_result)comet_callback.flush_tracker(llm, finish=True)Scenario 2: Using an LLM in a Chain​from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandlerfrom langchain.chains import LLMChainfrom langchain.llms import OpenAIfrom langchain.prompts import PromptTemplatecomet_callback = CometCallbackHandler( complexity_metrics=True, project_name="comet-example-langchain", stream_logs=True, tags=["synopsis-chain"],)callbacks = [StdOutCallbackHandler(), comet_callback]llm = OpenAI(temperature=0.9, callbacks=callbacks)template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.Title: {title}Playwright: This is a synopsis for the above play:"""prompt_template = PromptTemplate(input_variables=["title"], template=template)synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)test_prompts = [{"title": "Documentary about Bigfoot in Paris"}]print(synopsis_chain.apply(test_prompts))comet_callback.flush_tracker(synopsis_chain, finish=True)Scenario 3: Using An Agent with Tools​from langchain.agents import initialize_agent, load_toolsfrom langchain.callbacks import CometCallbackHandler, StdOutCallbackHandlerfrom langchain.llms import OpenAIcomet_callback = CometCallbackHandler( project_name="comet-example-langchain", complexity_metrics=True, stream_logs=True, tags=["agent"],)callbacks = [StdOutCallbackHandler(), comet_callback]llm = | https://python.langchain.com/docs/integrations/providers/comet_tracking |
0ab66dd386c3-4 | tags=["agent"],)callbacks = [StdOutCallbackHandler(), comet_callback]llm = OpenAI(temperature=0.9, callbacks=callbacks)tools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=callbacks)agent = initialize_agent( tools, llm, agent="zero-shot-react-description", callbacks=callbacks, verbose=True,)agent.run( "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")comet_callback.flush_tracker(agent, finish=True)Scenario 4: Using Custom Evaluation Metrics​The CometCallbackManager also allows you to define and use Custom Evaluation Metrics to assess generated outputs from your model. Let's take a look at how this works. In the snippet below, we will use the ROUGE metric to evaluate the quality of a generated summary of an input prompt. %pip install rouge-scorefrom rouge_score import rouge_scorerfrom langchain.callbacks import CometCallbackHandler, StdOutCallbackHandlerfrom langchain.chains import LLMChainfrom langchain.llms import OpenAIfrom langchain.prompts import PromptTemplateclass Rouge: def __init__(self, reference): self.reference = reference self.scorer = rouge_scorer.RougeScorer(["rougeLsum"], use_stemmer=True) def compute_metric(self, generation, prompt_idx, gen_idx): prediction = generation.text results = self.scorer.score(target=self.reference, prediction=prediction) return { "rougeLsum_score": | https://python.langchain.com/docs/integrations/providers/comet_tracking |
0ab66dd386c3-5 | return {            "rougeLsum_score": results["rougeLsum"].fmeasure,            "reference": self.reference,        }reference = """The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building.It was the first structure to reach a height of 300 metres.It is now taller than the Chrysler Building in New York City by 5.2 metres (17 ft).Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France."""rouge_score = Rouge(reference=reference)template = """Given the following article, it is your job to write a summary.Article:{article}Summary: This is the summary for the above article:"""prompt_template = PromptTemplate(input_variables=["article"], template=template)comet_callback = CometCallbackHandler(    project_name="comet-example-langchain",    complexity_metrics=False,    stream_logs=True,    tags=["custom_metrics"],    custom_metrics=rouge_score.compute_metric,)callbacks = [StdOutCallbackHandler(), comet_callback]llm = OpenAI(temperature=0.9)synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)test_prompts = [    {        "article": """            The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, | https://python.langchain.com/docs/integrations/providers/comet_tracking
0ab66dd386c3-6 | measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct. """ }]print(synopsis_chain.apply(test_prompts, callbacks=callbacks))comet_callback.flush_tracker(synopsis_chain, finish=True)PreviousCollege ConfidentialNextConfluenceInstall Comet and DependenciesInitialize Comet and Set your CredentialsSet OpenAI and SerpAPI credentialsScenario 1: Using just an LLMScenario 2: Using an LLM in a ChainScenario 3: Using An Agent with ToolsScenario 4: | https://python.langchain.com/docs/integrations/providers/comet_tracking |
0ab66dd386c3-7 | Using an LLM in a ChainScenario 3: Using An Agent with ToolsScenario 4: Using Custom Evaluation MetricsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/integrations/providers/comet_tracking |
42c8d05ca787-0 | GitBook | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/providers/gitbook
42c8d05ca787-1 | Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerWandB TracingAI21 LabsAimAirbyteAirtableAleph AlphaAlibaba Cloud OpensearchAmazon API GatewayAnalyticDBAnnoyAnyscaleApifyArangoDBArgillaArthurArxivAtlasDBAwaDBAWS S3 DirectoryAZLyricsAzure Blob StorageAzure Cognitive SearchAzure OpenAIBananaBasetenBeamBedrockBiliBiliBlackboardBrave SearchCassandraCerebriumAIChaindeskChromaClarifaiClearMLCnosDBCohereCollege ConfidentialCometConfluenceC TransformersDatabricksDatadog TracingDatadog LogsDataForSEODeepInfraDeep LakeDiffbotDiscordDocugamiDuckDBElasticsearchEverNoteFacebook ChatFigmaFlyteForefrontAIGitGitBookGoldenGoogle BigQueryGoogle Cloud StorageGoogle DriveGoogle SearchGoogle SerperGooseAIGPT4AllGraphsignalGrobidGutenbergHacker NewsHazy ResearchHeliconeHologresHugging FaceiFixitIMSDbInfinoJinaLanceDBLangChain Decorators ✨Llama.cppMarqoMediaWikiDumpMetalMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordMilvusMLflow AI GatewayMLflowModalModelScopeModern TreasuryMomentoMotherduckMyScaleNLPCloudNotion DBObsidianOpenAIOpenLLMOpenSearchOpenWeatherMapPetalsPGVectorPineconePipelineAIPortkeyPredibasePrediction GuardPromptLayerPsychicQdrantRay ServeRebuffRedditRedisReplicateRoamRocksetRunhouseRWKV-4SageMaker EndpointSearxNG Search APISerpAPIShale | https://python.langchain.com/docs/integrations/providers/gitbook |